Course Feedback & Learning Outcomes
Go beyond ratings to understand how students actually experienced the learning journey.
The Brief
The sender described what they wanted to learn, and Willit's AI refined those instructions into a natural interview flow.
The Interview
Willit's AI detective conducted a standard interview with a Recent Graduate. The conversation explored 6 topic areas through natural follow-up questions, adapting in real time based on the participant's responses.
The Report
Willit automatically extracted structured insights from the conversation — scores, goal coverage, key quotes, and red flags.
Interview Scorecard
Metric Averages
Summary
The graduate had a strong overall experience and specifically credits the capstone project with transforming their portfolio from 'student work' to 'hirable work.' Primary criticisms center on the research methods module moving too fast and the job search support feeling generic. The graduate secured a junior UX role 6 weeks after graduation, which they attribute partly to the program and partly to their own networking.
Goal Coverage
Understand the learning arc and progression
- Weeks 1-3 felt slow — mostly concepts they'd encountered in books. Engagement spiked in week 5 when the first real project started
- Week 8 was overwhelming — three deliverables due simultaneously with no buffer time built into the schedule
Identify the most impactful modules
- The capstone project was cited as the single highest-impact experience — described as 'the moment the whole thing came together'
- The usability testing module with real participants changed how they approached research — an experience that simulated exercises cannot replicate
Surface job-readiness gaps
- Feels under-prepared for research synthesis — knows how to conduct research but not how to turn findings into design decisions quickly
- Design system fundamentals were only covered superficially — employers are asking about Figma component systems in interviews
Assess cohort and peer learning experience
- Peer feedback sessions in weeks 6 and 10 were described as the most valuable community interactions
- Async Slack community was noisy and hard to navigate — stopped reading it after week 4
Gap: Did not explore whether they formed lasting professional relationships with cohort members
Evaluate instructor quality
- Lead instructor was excellent — clear, approachable, gave detailed feedback on the capstone
- Guest instructors in weeks 3-4 were uneven in quality — one was 'clearly just reading slides'
Gap: Did not explore whether they felt comfortable reaching out to instructors for help during the program
Understand post-graduation outcomes
- Accepted a junior UX role at a product agency 6 weeks after graduation — salary 22% above previous role
Key Quotes
“The capstone was the first time I felt like a designer, not a student. That's the thing I put in front of every interviewer.”
“I know how to run a user interview. What nobody taught me is what to do with all the data after.”
“Week 8 nearly broke me. Three things due at once with no warning. A couple of people from my cohort just disappeared after that.”
Red Flags
- Week 8 scheduling crunch caused cohort attrition — the delivery schedule needs rebalancing
- Research synthesis gap is a real job-readiness failure — the curriculum stops at data collection and doesn't teach the analysis process
- Guest instructor quality is inconsistent — one was described as reading slides, which reflects poorly on the program's overall credibility
Follow-up Suggestions
- Add a dedicated research synthesis module (affinity mapping, insight generation) — position it immediately after the usability testing module
- Audit week 8 deliverable schedule and distribute load across weeks 7 and 9
- Review all guest instructor slide decks and require a pre-session prep call with each guest instructor
Ready to run your own AI interviews?
Set up your first interview in under 5 minutes.