Siyuan Jiang

AI × Education × Product

Building experiences that make people more capable

Engineering the Capstone

I designed a sprint-based engineering curriculum, have taught the capstone for 12 semesters since 2019, and have iterated on it against measured outcomes.

The System

01

Sprint-Based Curriculum

Agile sprints with product backlog items, stand-ups, and retrospectives. Students experience the same workflow they'll use in industry.

02

Real Team Roles

Scrum Master, Front-end Dev, Back-end Dev, AI Dev, Documentation Specialist. Teams refine these roles over the semester to fit their project, and each student owns what they take on.

03

Engineering Discipline

Branching strategies, PR-based code reviews, deployment pipelines. No shortcuts — teams learn by doing it right.

04

Outcome Measurement

Peer reviews, quality rubrics, and end-of-semester surveys. Success is measured by what ships and what students learn — and each cohort shapes the next.

12 Semesters
40+ Teams
60+ Students / year
6 Tech Stacks

Recent Shipped Products


Vybe

Cross-platform music sharing app that bridges Spotify and YouTube. Shared groups, combined playlists, and Song of the Day feature.

React · Supabase · TypeScript

CineMatch

Movie recommendation engine that learns from user ratings. Rate movies you've seen, get personalized suggestions, save favorites.

React · Node.js · MongoDB

PlanIt

Trip planning web app for organizing destinations, activities, and travel times in a single interactive itinerary.

React · Node.js · Vite

Survival Chess

Arcade-style chess survival game in which you defend against waves of attacking pieces. Built by an 8-person team using Scrum, with pre-commit hooks, ESLint, and Jest tests.

React · Vite · Jest
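
To give a flavor of that tooling, here is a minimal pre-commit gate in TypeScript, of the kind such a setup might run via a Git hook manager like Husky. The file name, commands, and flags are assumptions for the sketch, not the team's actual configuration.

// precommit.ts: illustrative pre-commit gate, not the real project config.
// Runs ESLint and Jest in sequence; any failing step blocks the commit.
import { execSync } from "node:child_process";

const steps = [
  "npx eslint . --max-warnings 0", // lint with zero tolerated warnings
  "npx jest --bail",               // stop at the first failing test
];

for (const cmd of steps) {
  try {
    execSync(cmd, { stdio: "inherit" }); // stream tool output to the console
  } catch {
    console.error(`Pre-commit check failed: ${cmd}`);
    process.exit(1); // nonzero exit aborts the commit
  }
}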

"Students don't just learn to code — they learn to ship. Each team runs sprints, manages a backlog, does code reviews, and deploys to production infrastructure."

The Research Insight

What happens when you give students AI assistance for software requirements? I ran the experiment.

In a study across 2 universities, 48 students wrote 406 user stories — first without AI, then with guided GenAI assistance. We measured quality across 7 dimensions using the INVEST framework.
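
For concreteness, here is a minimal sketch of the aggregation behind the chart: score each story 0–1 per attribute, then average per attribute within a condition. The TypeScript shapes and the six-attribute list are illustrative assumptions; the study's actual instrument scored 7 dimensions.

// invest.ts: illustrative aggregation, not the study's analysis code.
const ATTRIBUTES = [
  "independent", "negotiable", "valuable",
  "estimable", "small", "testable",
] as const;

type Attribute = typeof ATTRIBUTES[number];
type StoryScore = Record<Attribute, number>; // each value in [0, 1]

// Mean score per attribute across all stories in one condition.
function meanByAttribute(stories: StoryScore[]): Record<Attribute, number> {
  const means = {} as Record<Attribute, number>;
  for (const attr of ATTRIBUTES) {
    const total = stories.reduce((sum, s) => sum + s[attr], 0);
    means[attr] = total / stories.length;
  }
  return means;
}

// meanByAttribute(withoutAI) vs. meanByAttribute(withAI) yields the paired bars.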

How AI Assistance Changed User Story Quality

Chart: mean scores by INVEST attribute (0–1 scale), without AI vs. with AI.

AI helps with well-structured tasks. Structure (0.63 → 0.88) and Testability (0.65 → 0.86) showed the largest gains — GenAI excels at producing content that maps onto fixed templates and enumerated criteria.

AI can erode judgment-based skills. The "Small" attribute declined at Site A (0.88 → 0.67). We call this the complexity trap: AI generates plausible content that is wrong in scope, and students can't tell.

"The pitfall is not that GenAI generates poor content — it generates plausible content that is wrong in scope. This makes the failure harder for students to detect, since the output still looks polished."