B

AI Competition Submission Validator

Overall NUMR-V Score (SaaS-weighted): 2.65 / 5

Derivation Chain

Step 1: Expansion of university AI competitions
Step 2: AI competition management platforms
Step 3: Automated submission verification & plagiarism detection service

Problem

When universities and public institutions host AI competitions, judges must manually review each team's submission (code, model, and presentation materials) for plagiarism, executability, and performance reproducibility. For a 50-team competition, first-round screening takes 3 judges roughly 8 hours each, or 24 person-hours (approximately $2,700 in labor costs), and on average 30% of submissions fail to reproduce, so much of that review time is wasted.
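A quick back-of-the-envelope check of the figures above; the hourly rate is implied by the stated totals rather than quoted anywhere, and the variable names are mine:

```python
# Back-of-the-envelope screening cost; input values come from the problem statement.
judges = 3
hours_per_judge = 8
labor_cost_usd = 2700  # approximate total quoted above

person_hours = judges * hours_per_judge              # 24 person-hours
implied_hourly_rate = labor_cost_usd / person_hours  # ~112.5 USD per judge-hour
print(person_hours, round(implied_hourly_rate, 2))
```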

Solution

The service automatically executes submitted code in a sandbox environment to verify that reported performance metrics are reproducible, detects plagiarism via AST-based code similarity analysis, and delivers results through a judge-facing dashboard. Key differentiators: templates tailored to Korean university competitions (pre-configured for Google Cloud and Naver Cloud environments) and AI-powered summarization of Korean-language presentation materials.
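As a rough illustration of the plagiarism-detection idea, the sketch below compares two Python submissions by the shape of their ASTs. The normalization strategy (keep node types, drop identifiers and literals) and all names here are illustrative assumptions, not the product's actual algorithm:

```python
# Minimal sketch of AST-based similarity between two Python submissions.
import ast
import difflib

def ast_fingerprint(source: str) -> list[str]:
    """Walk the AST and record node type names, ignoring identifiers and literals."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

def similarity(source_a: str, source_b: str) -> float:
    """Return a 0..1 similarity ratio between the two structural fingerprints."""
    fp_a, fp_b = ast_fingerprint(source_a), ast_fingerprint(source_b)
    return difflib.SequenceMatcher(None, fp_a, fp_b).ratio()

if __name__ == "__main__":
    a = "def train(x):\n    return x * 2\n"
    b = "def fit(data):\n    return data * 2\n"
    # Renamed identifiers but identical structure -> similarity of 1.0 here.
    print(f"similarity: {similarity(a, b):.2f}")
```

Renaming variables or functions does not change the fingerprint, which is the usual motivation for comparing ASTs instead of raw text.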

Target: Industry-academia cooperation offices at four-year universities hosting 1+ AI competitions per year (approximately 120 nationwide) and AI project departments at public institutions
Revenue Model: Per-competition billing at $370 for up to 50 teams, $670 for up to 100 teams, and $1,120 for up to 200 teams; annual subscription for unlimited competitions at 3x the per-event price (see the pricing sketch after this list).
Ecosystem Role: Infrastructure
MVP Estimate: 2 weeks
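A minimal sketch of the pricing logic implied by the revenue model; the tier boundaries and the 3x annual multiplier come from the text above, while the function names and the break-even note are assumptions:

```python
# Tiers and multiplier from the revenue model above; everything else is illustrative.
PER_COMPETITION_TIERS = [(50, 370), (100, 670), (200, 1120)]  # (max teams, price in USD)
ANNUAL_MULTIPLIER = 3  # annual subscription = 3x the per-event price

def per_competition_price(team_count: int) -> int:
    for max_teams, price in PER_COMPETITION_TIERS:
        if team_count <= max_teams:
            return price
    raise ValueError("no listed price above 200 teams")

def annual_price(typical_team_count: int) -> int:
    return ANNUAL_MULTIPLIER * per_competition_price(typical_team_count)

print(per_competition_price(80))  # 670
print(annual_price(50))           # 1110; pays off at 4+ events per year (4 * 370 = 1480)
```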

NUMR-V Scores

N Novelty: 3.0 / 5
U Urgency: 2.0 / 5
M Market: 2.0 / 5
R Realizability: 3.0 / 5
V Validation: 3.0 / 5
NUMR-V Scoring System
N Novelty (1-5): How uncommon the service is in market context.
U Urgency (1-5): How urgently users need this problem solved now.
M Market (1-5): Market size and growth potential from proxy indicators.
R Realizability (1-5): Buildability for a small team with realistic constraints.
V Validation (1-5): Validation signal quality from competition and demand data.
Weight profiles: SaaS N=0.15, U=0.20, M=0.15, R=0.30, V=0.20; Senior N=0.25, U=0.25, M=0.05, R=0.30, V=0.15
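The 2.65 headline score can be reproduced as the SaaS-weighted average of the five sub-scores listed above; the variable names in this check are mine:

```python
# Weighted NUMR-V score using the SaaS weight profile from the table above.
scores = {"N": 3.0, "U": 2.0, "M": 2.0, "R": 3.0, "V": 3.0}
saas_weights = {"N": 0.15, "U": 0.20, "M": 0.15, "R": 0.30, "V": 0.20}

weighted_score = sum(scores[k] * saas_weights[k] for k in scores)
print(round(weighted_score, 2))  # 2.65
```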

Feasibility (69%)

Tech Complexity: 29.3 / 40
Data Availability: 19.4 / 25
MVP Timeline: 20.0 / 20
API Bonus: 0.0 / 15
Feasibility Breakdown
Tech Complexity (/40): Difficulty of core implementation stack.
Data Availability (/25): Practical availability and cost of required data.
MVP Timeline (/20): Expected time to ship a usable MVP.
API Bonus (/15): Bonus for viable public API leverage.
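The 69% headline appears to be the rounded sum of the four components; a minimal check under that assumption:

```python
# Feasibility components from the breakdown above, summed out of 100.
feasibility_total = 29.3 + 19.4 + 20.0 + 0.0
print(feasibility_total)  # 68.7, displayed as 69%
```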

Market Validation (51/100)

Competition: 8.0 / 20
Market Demand: 9.4 / 20
Timing: 14.0 / 20
Revenue Signals: 7.5 / 15
Pick-Axe Fit: 7.5 / 15
Solo Buildability: 5.0 / 10
Validation Breakdown
Competition (/20): Signal quality from competitor landscape.
Market Demand (/20): Demand proxies from search and mention patterns.
Timing (/20): Fit with current shifts in tech, behavior, and regulation.
Revenue Signals (/15): Reference evidence for monetization viability.
Pick-Axe Fit (/15): How well the concept serves participants in a trend.
Solo Buildability (/10): Practicality for lean-team implementation.
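Likewise, the 51/100 market validation headline matches the rounded sum of its six components:

```python
# Market validation components from the breakdown above, summed out of 100.
validation_total = 8.0 + 9.4 + 14.0 + 7.5 + 7.5 + 5.0
print(validation_total)  # 51.4, displayed as 51/100
```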

Technical Requirements

Backend [medium] Frontend [low] AI/ML [medium]
Dashboard