33.4 Week-by-Week Content
Detailed content, activities, and guidance for each week of the course.
Week 1: Why AI Changes Products
This opening week establishes the fundamental shift that AI introduces to product development. Students examine how AI changes the economics of features, enables new interaction paradigms, and requires different quality assurance approaches.
Core concepts: AI as a new substrate, capability-driven vs. requirement-driven development, the concept of AI-native product thinking.
Lab activity: Product teardown. Students select an existing product and identify where AI could enhance or replace current functionality. They document findings in a short presentation.
Discussion prompts: What features in your favorite apps feel "AI-native"? What would need to change if AI were removed from products you use daily?
AI changes the economics of features: what was expensive becomes cheap, what was impossible becomes routine. Students leave Week 1 seeing products differently.
Week 2: AI Capabilities, Limits, and Mental Models
Students develop accurate mental models for what AI can and cannot do. The week covers large language models, image generation, speech recognition, and other common AI capabilities alongside their failure modes.
Core concepts: Stochastic processes, hallucination, context windows, token limits, emergent behaviors, prompting as programming.
Lab activity: Capability mapping. Students create a matrix of AI capabilities vs. product features they might build, identifying where AI adds value and where it introduces risk.
Discussion prompts: Why do language models sometimes give confident wrong answers? How should product managers think about AI failures?
The reliability spectrum (reliable -> moderate -> unreliable) is the key mental model. Students should leave able to classify any AI task by reliability.
Week 3: AI-Native Discovery and Product Strategy
Discovery in AI products requires different techniques than traditional product discovery. Students learn to identify AI-native opportunities through constraint analysis, capability matching, and user workflow redesign.
Core concepts: Opportunity identification, constraint analysis, build-measure-learn for AI products, defining the AI MVP.
Lab activity: Opportunity workshop. Students use the AI opportunity canvas to identify and rank three potential opportunities for their course project.
Discussion prompts: How do you validate demand for an AI feature that did not exist before? What makes an AI opportunity "AI-native" vs. an AI enhancement of an existing feature?
Students fill out a canvas with:
Constraint: What prevents users from achieving their goal today?
Capability: What AI capability addresses this?
Value: How does AI uniquely enable this?
Risk: What AI failure modes matter here?
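The canvas above can be captured as a simple record so teams can compare candidate opportunities side by side. This is a minimal sketch; the `OpportunityCanvas` class and the example entries are hypothetical illustrations, not part of the course materials.

```python
from dataclasses import dataclass

@dataclass
class OpportunityCanvas:
    """One record per candidate opportunity; fields mirror the four canvas prompts."""
    constraint: str   # what prevents users from achieving their goal today
    capability: str   # the AI capability that addresses it
    value: str        # how AI uniquely enables this
    risk: str         # the AI failure modes that matter here

# Hypothetical example entry for a support-ticket product
triage = OpportunityCanvas(
    constraint="Agents spend hours routing tickets by hand",
    capability="LLM text classification",
    value="Routes in seconds with a human-readable rationale",
    risk="Misrouted urgent tickets; confident wrong labels",
)
print(triage.capability)
```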
Week 4: AI UX and Trust Design
User experience design changes fundamentally when products contain probabilistic AI components. Students learn to design for trust, handle AI uncertainty visibly, and create appropriate user expectations.
Core concepts: Trust design, uncertainty communication, expectation-setting, graceful degradation, human-AI interaction patterns.
Lab activity: Trust audit. Students conduct a trust audit of an existing AI product, identifying trust-building features and trust-breaking failure modes.
Discussion prompts: When should an AI product tell users it is an AI? How do you design for trust when the AI will sometimes fail?
Trust is earned through transparency about limitations, consistency of behavior, and easy recovery when AI fails. Design for failure as much as success.
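Graceful degradation, one of the week's core concepts, can be sketched in a few lines: show AI output only when it clears a confidence bar, and fall back to a non-AI path otherwise. The `ai_summarize` function below is a hypothetical stand-in for a real model call, and the threshold value is illustrative.

```python
# Graceful degradation: prefer no AI over wrong AI.
# `ai_summarize` is a hypothetical model call returning (text, confidence).

def ai_summarize(document: str) -> tuple[str, float]:
    # Stand-in for a real model call; returns a summary and a confidence score.
    return ("Summary of: " + document[:20], 0.93)

def summarize_with_fallback(document: str, threshold: float = 0.8) -> str:
    try:
        text, confidence = ai_summarize(document)
    except Exception:
        return document  # AI unavailable: degrade to showing the raw text
    if confidence < threshold:
        return document  # low confidence: fall back rather than mislead
    return text  # trusted path: in a real UI, label this as AI-generated

print(summarize_with_fallback("Quarterly report: revenue up 4%..."))
```

The key design choice is that both failure branches return something useful, so the user never sees a dead end when the AI fails.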
Week 5: Eval-First Requirements
Eval-first thinking defines success criteria before implementation. Students learn to write evalable requirements that make success measurable and failure diagnosable.
Core concepts: Evalable requirements, success metrics vs. vanity metrics, failure mode analysis, acceptance criteria for AI outputs.
Lab activity: Eval writing lab. Students write evals for their product concept, then refine them based on peer feedback.
Discussion prompts: Why is it harder to write requirements for AI than traditional software? How do you define "good enough" for a generative AI feature?
An eval answers: "How would we know if this AI feature is working?" Not user satisfaction surveys, but automated tests that measure AI output quality.
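A minimal automated eval might look like the sketch below: given an input, assert measurable properties of the AI output instead of asking how users feel about it. The `summarize` function is a hypothetical feature under test, and the checked properties (length cap, required facts) are illustrative.

```python
# A minimal automated eval: assert measurable properties of the AI output.
# `summarize` is a hypothetical AI feature under test.

def summarize(text: str) -> str:
    # Stand-in for the real AI feature being evaluated.
    return "Refund issued for order 1042."

def eval_summary(text: str, must_mention: list[str], max_words: int) -> bool:
    """Pass iff the summary is short enough and keeps the key facts."""
    output = summarize(text)
    short_enough = len(output.split()) <= max_words
    facts_kept = all(fact.lower() in output.lower() for fact in must_mention)
    return short_enough and facts_kept

ticket = "Customer reported a duplicate charge; we refunded order 1042."
print(eval_summary(ticket, must_mention=["refund", "1042"], max_words=30))
```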
Week 6: Vibe Coding and Prototyping
Rapid prototyping with AI tools lets teams test product concepts before committing to full development. Students learn vibe coding techniques that maximize learning while minimizing investment.
Core concepts: Prototype fidelity levels, vibe coding principles, prototype-to-production gap, rapid iteration, stakeholder demos.
Lab activity: Prototype session. Students build a functional prototype of their core feature using AI coding tools within a constrained time box.
Discussion prompts: When is a prototype "good enough" to learn from? How do you prevent prototype code from becoming production debt?
Prototype session brief:
Goal: Get the core AI feature working end-to-end
Time: 2 hours vibe coding + 30 min testing
Success criteria: Can demonstrate the key user flow with real AI
Non-goals: Production quality, edge cases, error handling
Week 7: Retrieval, Memory, and Orchestration
Real AI products typically combine language models with retrieval systems, memory stores, and orchestration layers. Students learn to architect these components for their product concepts.
Core concepts: RAG architecture, vector databases, memory patterns, agent orchestration, tool use and function calling.
Lab activity: RAG implementation. Students implement a basic retrieval-augmented generation system for their product, connecting a knowledge base to a language model.
Discussion prompts: When should an AI product retrieve information vs. rely on training data? How do you design memory that respects user privacy?
Use RAG when: domain knowledge is proprietary, information changes frequently, or accuracy on specific facts matters. Skip RAG when: general knowledge suffices or creative generation is the goal.
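The lab's RAG loop reduces to two steps: retrieve relevant passages, then ground the prompt in them. The sketch below uses word overlap as a toy stand-in for real vector search, and the knowledge-base entries are hypothetical; in the lab, the assembled prompt would be sent to a language model.

```python
# Skeleton of the RAG loop: retrieve relevant passages, ground the prompt in them.
# Word overlap stands in for real vector search; knowledge entries are hypothetical.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
    "Passwords must be at least 12 characters.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (toy vector search)."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # in the lab, pass this prompt to the model instead

print(answer("How long do refunds take?"))
```

The "only this context" instruction is the grounding step: it is what makes retrieval reduce hallucination on proprietary facts.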
Week 8: Models, Routing, and Architecture
Model selection involves trade-offs across capability, cost, latency, and privacy. Students learn to make informed routing decisions and architect for model portability.
Core concepts: Model comparison, cost-latency trade-offs, model routing, multi-model architectures, vendor lock-in mitigation.
Lab activity: Model comparison. Students compare multiple models on their specific use case, measuring quality, cost, and latency to inform architecture decisions.
Discussion prompts: Should you use the most capable model available? How do you design for model switching if your vendor changes pricing?
Model selection checklist:
Quality: Does it reliably do your task?
Cost: Per-query cost within budget?
Latency: Meets your response time requirements?
Privacy: Data handling meets your requirements?
Week 9: Evals and Observability
Comprehensive eval suites enable confident iteration. Students learn to build eval suites that catch regressions, measure improvement, and provide early warning of quality drift.
Core concepts: Eval suite design, automated evals, sampling strategies, observability patterns, alerting and dashboards.
Lab activity: Eval suite build. Students build a comprehensive eval suite for their product that covers happy paths, edge cases, and known failure modes.
Discussion prompts: How many evals does a production AI product need? When should you prioritize recall vs. precision in your evals?
A production eval suite layers four kinds of checks:
Happy path evals: Does the main user journey work?
Edge case evals: Does it handle unusual inputs?
Known failure mode evals: Has the bug we fixed stayed fixed?
Regression evals: Has quality changed since last release?
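One way to organize these four layers is a registry that tags each case with its layer, so a dashboard can report pass rates per layer. The structure below is a sketch; the individual checks and the `demo_feature` under test are illustrative placeholders.

```python
# Eval suite organized by layer; dashboards can report pass rates per layer.
# The individual checks here are illustrative placeholders.

EVAL_SUITE = {
    "happy_path": [("summarize a short ticket", lambda out: len(out) > 0)],
    "edge_case": [("empty input", lambda out: out == "")],
    "known_failure": [("fixed bug stays fixed", lambda out: "error" not in out)],
    "regression": [("baseline quality holds", lambda out: len(out.split()) <= 50)],
}

def run_suite(feature) -> dict[str, float]:
    """Return the pass rate per layer for the given feature function."""
    results = {}
    for layer, cases in EVAL_SUITE.items():
        passed = sum(1 for name, check in cases if check(feature(name)))
        results[layer] = passed / len(cases)
    return results

def demo_feature(prompt: str) -> str:
    # Hypothetical feature: echoes non-empty prompts, silent on "empty input".
    return "" if prompt == "empty input" else f"ok: {prompt}"

print(run_suite(demo_feature))
```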
Week 10: Governance, Security, and Trust
AI products require governance frameworks that address data privacy, model security, bias detection, and regulatory compliance. Students learn to build governance into their products from the start.
Core concepts: AI governance frameworks, data retention and privacy, bias detection and mitigation, regulatory compliance (GDPR, AI Act), security hardening.
Lab activity: Risk assessment. Students conduct a comprehensive risk assessment of their product, identifying governance measures needed before launch.
Discussion prompts: What governance measures should every AI product implement? How do you balance user privacy with personalization?
Every AI product needs: data retention limits, bias testing protocol, incident response plan, human oversight mechanism, and transparency documentation.
Week 11: Launch, Metrics, and Post-Launch Learning
Launching AI products requires different metrics and monitoring than traditional software. Students learn to define launch criteria, set up monitoring, and establish post-launch learning loops.
Core concepts: Launch criteria, feature flags, shadow mode deployment, A/B testing for AI, behavioral analytics, incident response.
Lab activity: Launch simulation. Students create a launch plan with staged rollout, monitoring dashboard, and incident response procedures.
Discussion prompts: How do you know when an AI product is "ready" to launch? What metrics matter more than accuracy for AI products?
Beyond accuracy: Task completion rate, time-to-value, trust indicators, override frequency, user sentiment
Watch: Quality drift over time, failure mode emergence, demographic performance gaps
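Quality drift detection can be sketched as a rolling eval pass rate with an alert floor, as below. The window size and threshold are illustrative choices for the lab, not recommendations; production systems would also slice this by demographic segment to catch performance gaps.

```python
# Sketch of post-launch drift monitoring: track a rolling eval pass rate and
# alert when it drops below an agreed floor. Window and floor are illustrative.

from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, floor: float = 0.9):
        self.results = deque(maxlen=window)  # 1 = eval passed, 0 = failed
        self.floor = floor

    def record(self, passed: bool) -> None:
        self.results.append(1 if passed else 0)

    def alert(self) -> bool:
        """True when the rolling pass rate has drifted below the floor."""
        if not self.results:
            return False
        return sum(self.results) / len(self.results) < self.floor

monitor = DriftMonitor(window=10, floor=0.9)
for outcome in [True] * 8 + [False] * 2:  # pass rate 0.8 in the window
    monitor.record(outcome)
print(monitor.alert())
```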
Week 12: Capstone Demos and Postmortems
The final week showcases student work and facilitates reflection on the product development journey. Students present functional products and conduct postmortem analysis.
Core concepts: Demo preparation, storytelling for technical products, retrospective techniques, knowledge transfer.
Lab activity: Final presentations. Students present 15-minute demos of their products, followed by peer Q&A and instructor evaluation.
Discussion prompts: What would you do differently if you started this project today? What surprised you most about building with AI?
Suggested demo format (15-minute presentation plus Q&A):
5 min: Problem and approach (why this, why AI)
7 min: Live demo of working product
3 min: What you learned and what is next
5 min: Q&A from peers and instructor
Slide Recommendations
Each week includes suggested slide decks available in the instructor resources:
Week 1: 20 slides on the AI paradigm shift
Week 2: 25 slides on AI capabilities and limitations
Week 3: 22 slides on AI-native discovery
Week 4: 28 slides on trust design patterns
Week 5: 18 slides on eval-first requirements
Week 6: 20 slides on prototyping techniques
Week 7: 30 slides on RAG and memory
Week 8: 24 slides on model selection
Week 9: 26 slides on observability
Week 10: 22 slides on governance
Week 11: 20 slides on launch planning
Week 12: 10 slides for capstone presentations