AI × Education × Product
Building experiences that make people more capable
I designed a sprint-based engineering curriculum, have taught the capstone for 12 semesters since 2019, and iterated it against measured outcomes.
Agile sprints with product backlog items, stand-ups, and retrospectives. Students experience the same workflow they'll use in industry.
Scrum Master, Front-end Dev, Back-end Dev, AI Dev, Documentation Specialist. Teams refine these roles over the semester to fit their project, and each student owns what they take on.
Branching strategies, PR-based code reviews, deployment pipelines. No shortcuts — teams learn by doing it right.
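The branch-and-PR loop each team runs can be sketched end to end. This is an illustrative demo only, not the course's actual setup; the branch name, file names, and commit messages are hypothetical, and it assumes git 2.28+ is on the PATH:

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    """Run a git command in the given repo, raising if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True,
                   capture_output=True, text=True)

# A throwaway repo standing in for a team's project.
repo = tempfile.mkdtemp()
git("init", "-q", "-b", "main", cwd=repo)
git("config", "user.email", "student@example.com", cwd=repo)
git("config", "user.name", "Student", cwd=repo)

# Commit an initial version on main.
Path(repo, "app.py").write_text("print('v1')\n")
git("add", "app.py", cwd=repo)
git("commit", "-q", "-m", "initial commit", cwd=repo)

# Each backlog item gets its own feature branch.
git("checkout", "-q", "-b", "feature/song-of-the-day", cwd=repo)
Path(repo, "app.py").write_text("print('v2')\n")
git("commit", "-q", "-am", "add Song of the Day", cwd=repo)

# After the PR is reviewed, merge with --no-ff so the reviewed
# unit of work stays visible as one merge commit in history.
git("checkout", "-q", "main", cwd=repo)
git("merge", "-q", "--no-ff", "-m", "Merge song-of-the-day PR",
    "feature/song-of-the-day", cwd=repo)
```

The `--no-ff` merge is one common convention for PR-based review; teams may instead squash or rebase, and the same branch-review-merge loop applies.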
Peer reviews, quality rubrics, and end-of-semester surveys. Success is measured by what ships and what students learn — and each cohort shapes the next.
Recent examples from 2025: full-stack applications and games shipped in one semester by student teams I guided through engineering, process, communication, and the final demo.
Cross-platform music sharing app that bridges Spotify and YouTube. Shared groups, combined playlists, and Song of the Day feature.
Movie recommendation engine that learns from user ratings. Rate movies you've seen, get personalized suggestions, save favorites.
Trip planning web app for organizing destinations, activities, and travel times in a single interactive itinerary.
"Students don't just learn to code — they learn to ship. Each team runs sprints, manages a backlog, does code reviews, and deploys to production infrastructure."
What happens when you give students AI assistance for software requirements? I ran the experiment.
In a study across two universities, 48 students wrote 406 user stories, first without AI and then with guided GenAI assistance. We measured quality across seven dimensions using the INVEST framework.
Mean scores by INVEST attribute (0–1 scale). Click any bar pair for details.
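A minimal sketch of how per-attribute means like these can be computed. The scores below are invented for illustration and are not the study's data; the 0/1 ratings and condition labels are assumptions:

```python
from statistics import mean

# Hypothetical ratings: each story gets a 0-or-1 score per INVEST
# attribute, under either the baseline or the AI-assisted condition.
stories = [
    {"condition": "baseline", "Small": 1, "Testable": 0},
    {"condition": "baseline", "Small": 1, "Testable": 1},
    {"condition": "ai",       "Small": 0, "Testable": 1},
    {"condition": "ai",       "Small": 1, "Testable": 1},
]

def attribute_means(stories, attribute):
    """Mean score for one INVEST attribute, keyed by condition."""
    return {
        cond: mean(s[attribute] for s in stories if s["condition"] == cond)
        for cond in ("baseline", "ai")
    }

# Comparing the two conditions per attribute yields bar pairs
# like those in the chart above.
testable = attribute_means(stories, "Testable")
```

Aggregating the same ratings per site instead of per condition gives the site-level comparisons, such as the "Small" decline at Site A.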
AI helps with well-structured tasks. Structure (0.63 → 0.88) and Testability (0.65 → 0.86) showed the largest gains — GenAI excels at producing content that maps onto fixed templates and enumerated criteria.
AI can erode judgment-based skills. The "Small" attribute declined at Site A (0.88 → 0.67). We call this the complexity trap: AI generates plausible content that is wrong in scope, and students can't tell.
"The pitfall is not that GenAI generates poor content — it generates plausible content that is wrong in scope. This makes the failure harder for students to detect, since the output still looks polished."