
Education Office Chatbot Performance Dashboard

Overall Score: 2.85 / 5

Derivation Chain

Step 1: Education office AI chatbot adoption
Step 2: Education office inquiry chatbot operations
Step 3: Chatbot performance analytics and improvement tool

Problem

Provincial and metropolitan offices of education are deploying and upgrading public inquiry chatbots, but lack systematic tools to measure chatbot response accuracy, issue resolution rates, and user satisfaction — making it impossible to prove whether chatbots are actually effective. Offices like the Busan Metropolitan Office of Education invest hundreds of millions of won in chatbot upgrades while a single staff member manually compiles effectiveness metrics in Excel, taking 1-2 weeks per quarter.

Solution

Automatically collects education office chatbot logs and visualizes response accuracy, unresolved escalation rates (chatbot-to-phone), and resolution time by inquiry type on a real-time dashboard. Provides LLM-based automated response quality scoring, unresolved inquiry type clustering, and auto-generated performance reports for legislative and audit reporting.
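The core dashboard metrics above (escalation rate and resolution time by inquiry type) can be sketched as a simple aggregation over raw chatbot logs. The field names here (inquiry_type, escalated, resolution_sec) are illustrative assumptions; the real schema will depend on each office's chatbot platform export:

```python
from collections import defaultdict

# Hypothetical log records; real field names depend on the chatbot platform.
logs = [
    {"inquiry_type": "enrollment",   "escalated": False, "resolution_sec": 42},
    {"inquiry_type": "enrollment",   "escalated": True,  "resolution_sec": 610},
    {"inquiry_type": "certificates", "escalated": False, "resolution_sec": 35},
    {"inquiry_type": "certificates", "escalated": False, "resolution_sec": 58},
]

def summarize(logs):
    """Escalation rate and mean resolution time per inquiry type."""
    buckets = defaultdict(list)
    for rec in logs:
        buckets[rec["inquiry_type"]].append(rec)
    summary = {}
    for itype, recs in buckets.items():
        escalated = sum(1 for r in recs if r["escalated"])
        summary[itype] = {
            "escalation_rate": escalated / len(recs),
            "avg_resolution_sec": sum(r["resolution_sec"] for r in recs) / len(recs),
        }
    return summary

print(summarize(logs))
```

The LLM-based quality scoring and clustering would layer on top of the same per-type buckets.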

Target: IT departments at 17 provincial/metropolitan offices of education and subordinate education support offices (public administration professionals, ages 30-50)
Revenue Model: SaaS monthly subscription at ~$295 (390,000 KRW)/education office; usage-based pricing of ~$0.0004 (0.5 KRW) per log entry beyond 100K entries
Ecosystem Role: Supplier
MVP Estimate: 2 weeks
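The revenue model above implies a flat-plus-metered billing formula. A minimal sketch, with the function name and defaults taken from the stated pricing (390,000 KRW base, 0.5 KRW per entry past 100K):

```python
def monthly_bill_krw(log_entries: int,
                     base_fee: int = 390_000,
                     included: int = 100_000,
                     per_entry: float = 0.5) -> float:
    """Monthly charge: flat subscription plus metered overage."""
    overage = max(0, log_entries - included)
    return base_fee + overage * per_entry

# 150,000 entries: 390,000 + 50,000 * 0.5 = 415,000 KRW
print(monthly_bill_krw(150_000))
```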

NUMR-V Scores

N Novelty
3.0/5
U Urgency
3.0/5
M Market
2.0/5
R Realizability
3.0/5
V Validation
3.0/5
NUMR-V Scoring System
N Novelty (1-5): How uncommon the service is in market context.
U Urgency (1-5): How urgently users need this problem solved now.
M Market (1-5): Market size and growth potential from proxy indicators.
R Realizability (1-5): Buildability for a small team with realistic constraints.
V Validation (1-5): Validation signal quality from competition and demand data.
Weights (SaaS): N=0.15, U=0.20, M=0.15, R=0.30, V=0.20
Weights (Senior): N=0.25, U=0.25, M=0.05, R=0.30, V=0.15
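The headline 2.85 is the SaaS-weighted sum of the five NUMR-V scores; a quick check using the weights listed above:

```python
scores       = {"N": 3.0,  "U": 3.0,  "M": 2.0,  "R": 3.0,  "V": 3.0}
saas_weights = {"N": 0.15, "U": 0.20, "M": 0.15, "R": 0.30, "V": 0.20}

# Weighted sum: 0.45 + 0.60 + 0.30 + 0.90 + 0.60 = 2.85
overall = sum(scores[k] * saas_weights[k] for k in scores)
print(round(overall, 2))
```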

Feasibility (63%)

Tech Complexity
24.0/40
Data Availability
19.4/25
MVP Timeline
20.0/20
API Bonus
0.0/15
Feasibility Breakdown
Tech Complexity (/40): Difficulty of core implementation stack.
Data Availability (/25): Practical availability and cost of required data.
MVP Timeline (/20): Expected time to ship a usable MVP.
API Bonus (/15): Bonus for viable public API leverage.

Market Validation (51/100)

Competition
8.0/20
Market Demand
9.4/20
Timing
14.0/20
Revenue Signals
7.5/15
Pick-Axe Fit
7.5/15
Solo Buildability
5.0/10
Validation Breakdown
Competition (/20): Signal quality from competitor landscape.
Market Demand (/20): Demand proxies from search and mention patterns.
Timing (/20): Fit with current shifts in tech, behavior, and regulation.
Revenue Signals (/15): Reference evidence for monetization viability.
Pick-Axe Fit (/15): How well the concept serves participants in a trend.
Solo Buildability (/10): Practicality for lean-team implementation.
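Both headline figures, Feasibility (63%) and Market Validation (51/100), are plain sums of their subscores rounded to the nearest point; each rubric's maxima total 100:

```python
# Subscores out of 40 + 25 + 20 + 15 = 100
feasibility = [24.0, 19.4, 20.0, 0.0]
# Subscores out of 20 + 20 + 20 + 15 + 15 + 10 = 100
validation = [8.0, 9.4, 14.0, 7.5, 7.5, 5.0]

print(round(sum(feasibility)))  # 63
print(round(sum(validation)))   # 51
```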

Technical Requirements

Backend [medium] Frontend [medium] AI/ML [medium]
Dashboard