AI Competition Submission Validator
Grade: B | Overall Score: 2.65
Derivation Chain
Step 1: Expansion of university AI competitions → Step 2: AI competition management platforms → Step 3: Automated submission verification & plagiarism detection service
Problem
When universities and public institutions host AI competitions, judges must manually review each team's submissions (code + model + presentation materials) for plagiarism, executability, and performance reproducibility. For 50 teams, first-round screening requires 3 judges spending 8 hours each — 24 person-hours (approximately $2,700 in labor costs) — and an average of 30% of submissions fail to reproduce, resulting in significant time waste.
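A quick sanity check on the figures above (the per-hour rate is derived from the stated totals, not given in the source):

```python
# Screening-cost arithmetic from the problem statement.
judges = 3
hours_each = 8
person_hours = judges * hours_each       # 24 person-hours per first round
labor_cost = 2700                        # approximate labor cost, as stated
hourly_rate = labor_cost / person_hours  # implied cost per person-hour
print(person_hours, hourly_rate)
```

At 50 teams, that works out to roughly $112.50 per person-hour of judging.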
Solution
Automatically executes submitted code in a sandbox environment to verify performance metrics reproducibility, detects plagiarism via AST-based code similarity analysis, and delivers results through a judge-facing dashboard. Key differentiators: templates tailored for Korean university competitions (pre-configured for Google Cloud and Naver Cloud environments) and AI-powered summarization of Korean-language presentation materials.
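To illustrate the AST-based similarity idea, here is a minimal sketch: it is not the product's actual algorithm, and the normalization scheme (blanking identifiers so renamed variables still match) is an assumption. Two Python submissions are parsed, their identifiers normalized, and the AST dumps compared:

```python
import ast
import difflib

def normalized_ast_dump(source: str) -> str:
    """Parse source and dump its AST with identifiers blanked out,
    so plagiarism via variable/function renaming still matches."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.name = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
    return ast.dump(tree)

def ast_similarity(src_a: str, src_b: str) -> float:
    """Structural similarity ratio in [0, 1]; near 1.0 means the two
    sources have (almost) the same syntax tree shape."""
    return difflib.SequenceMatcher(
        None, normalized_ast_dump(src_a), normalized_ast_dump(src_b)
    ).ratio()

# Same logic, different names: a textbook renaming-based copy.
a = "def train(data):\n    total = 0\n    for x in data:\n        total += x\n    return total"
b = "def fit(samples):\n    s = 0\n    for item in samples:\n        s += item\n    return s"
print(ast_similarity(a, b))  # near 1.0 despite the renamed identifiers
```

A production detector would add token-level fingerprinting and cross-language support, but the core signal is the same: structure survives renaming.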
NUMR-V Scoring System
| Dimension | Range | Description |
| --- | --- | --- |
| N (Novelty) | 1-5 | How uncommon the service is in its market context. |
| U (Urgency) | 1-5 | How urgently users need this problem solved now. |
| M (Market) | 1-5 | Market size and growth potential, from proxy indicators. |
| R (Realizability) | 1-5 | Buildability for a small team under realistic constraints. |
| V (Validation) | 1-5 | Validation signal quality from competition and demand data. |
SaaS weights: N=0.15, U=0.20, M=0.15, R=0.30, V=0.20
Senior weights: N=0.25, U=0.25, M=0.05, R=0.30, V=0.15
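The per-dimension scores behind the 2.65 aggregate are not shown in this card. As a sketch of how the weighted score is composed (the 1-5 dimension values below are illustrative placeholders, not the actual ratings), each profile's weights sum to 1 and the aggregate is a weighted sum:

```python
# NUMR-V aggregate: weighted sum of 1-5 dimension scores.
SAAS_WEIGHTS   = {"N": 0.15, "U": 0.20, "M": 0.15, "R": 0.30, "V": 0.20}
SENIOR_WEIGHTS = {"N": 0.25, "U": 0.25, "M": 0.05, "R": 0.30, "V": 0.15}

def numrv_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in weights)

# Placeholder scores, NOT the actual ratings behind the 2.65 figure.
example = {"N": 3, "U": 2, "M": 3, "R": 3, "V": 2}
print(round(numrv_score(example, SAAS_WEIGHTS), 2))
```

Note how heavily Realizability (R=0.30) counts under both profiles, which rewards concepts a small team can actually ship.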
Feasibility: 69%
Data Availability: 19.4/25
Feasibility Breakdown
| Component | Score | Description |
| --- | --- | --- |
| Tech Complexity | /40 | Difficulty of the core implementation stack. |
| Data Availability | /25 | Practical availability and cost of required data. |
| MVP Timeline | /20 | Expected time to ship a usable MVP. |
| API Bonus | /15 | Bonus for viable public API leverage. |
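The four components sum to a 100-point scale, so the 69% headline is simply the component total. A minimal check of that composition (only the 19.4/25 Data Availability score is reported above; the other component values below are placeholders chosen to match the stated 69%):

```python
# Feasibility is scored on a 100-point scale built from four components.
MAX_POINTS = {"Tech Complexity": 40, "Data Availability": 25,
              "MVP Timeline": 20, "API Bonus": 15}
assert sum(MAX_POINTS.values()) == 100  # scale sanity check

scores = {"Tech Complexity": 27.0,    # placeholder
          "Data Availability": 19.4,  # reported in the breakdown
          "MVP Timeline": 13.6,       # placeholder
          "API Bonus": 9.0}           # placeholder
feasibility_pct = sum(scores.values()) / sum(MAX_POINTS.values()) * 100
print(feasibility_pct)
```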
Market Validation: 51/100
Validation Breakdown
| Criterion | Score | Description |
| --- | --- | --- |
| Competition | /20 | Signal quality from the competitor landscape. |
| Market Demand | /20 | Demand proxies from search and mention patterns. |
| Timing | /20 | Fit with current shifts in technology, behavior, and regulation. |
| Revenue Signals | /15 | Reference evidence for monetization viability. |
| Pick-Axe Fit | /15 | How well the concept serves participants in a trend. |
| Solo Buildability | /10 | Practicality for lean-team implementation. |
Technical Requirements
- Backend: medium
- Frontend: low
- AI/ML: medium