Part III: Vibe-Coding and AI-Native Prototyping
Section 11.3

Spec-Prototype-Critique-Revise Loops

Iteration is where learning happens. The spec-prototype-critique-revise loop transforms uncertainty into clarity through rapid cycling.

The SPCR Loop Structure

The SPCR loop is a disciplined approach to iterative prototyping. Each iteration produces artifacts that inform the next iteration. The loop continues until the prototype answers the questions it was designed to answer.

The SPCR Loop

The SPCR loop structures iterative prototyping through four distinct phases. In the Spec phase, you write a specification for the next increment that states what should be accomplished without prescribing how. In the Prototype phase, you use vibe coding to generate an implementation from the specification. In the Critique phase, you evaluate the prototype against the success criteria established during framing. In the Revise phase, you update the spec based on critique findings, then repeat the cycle until the prototype answers its guiding questions.
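The four phases can be sketched as a simple driver loop. This is an illustrative sketch, not code from the chapter: the write_spec, generate, evaluate, revise, and questions_answered callables are hypothetical placeholders for whatever tooling and judgment you bring to each phase.

```python
from dataclasses import dataclass, field

@dataclass
class Iteration:
    spec: str                      # what this increment should accomplish
    prototype: str = ""            # generated implementation (or a reference to it)
    findings: list[str] = field(default_factory=list)  # critique notes

def spcr_loop(write_spec, generate, evaluate, revise,
              questions_answered, max_iterations=10):
    """Run Spec -> Prototype -> Critique -> Revise until the guiding questions are answered."""
    history = []
    spec = write_spec(history)                 # Spec: state what, not how
    for _ in range(max_iterations):
        it = Iteration(spec=spec)
        it.prototype = generate(it.spec)       # Prototype: vibe-code from the spec
        it.findings = evaluate(it.prototype)   # Critique: judge against success criteria
        history.append(it)
        if questions_answered(history):        # stop when the prototype has answered its questions
            break
        spec = revise(history)                 # Revise: update the spec for the next cycle
    return history
```

The max_iterations cap is a safety valve, not a target; in practice the kill criteria established during framing decide when the loop stops.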

The Spec Phase

The spec phase defines what the next iteration should accomplish. Good specs are focused, achievable within the iteration timebox, and produce evaluable results.

Spec Writing Principles

One concept per spec. Trying to accomplish too much in one iteration dilutes focus and makes evaluation difficult.

State what, not how. The spec should describe the outcome, not the implementation. "Users can see exception reason classification with confidence scores" is a good spec. "Call the GPT-4 API with prompt X and display results in component Y" is a bad spec.

Include acceptance criteria. How will you know if the spec is met? Define measurable criteria before generation.

QuickShip: Spec Example

Spec: Display email exception queue with auto-classification

What: Show list of unprocessed emails with GPT-4 classification results displayed inline. Each email shows predicted exception type and confidence score.

Acceptance criteria: Unprocessed emails appear in the queue within 30 seconds of receipt. Classification results appear within 5 seconds of email display. The confidence score is shown as a percentage. The user can override the classification with one click. The queue refreshes automatically without a page reload.
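A spec like this can be captured as structured data so the critique phase can check each criterion explicitly rather than by impression. This is a minimal sketch of our own design; the criteria text mirrors the QuickShip example, but the structure and names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    description: str
    met: bool = False  # filled in during the critique phase

spec = {
    "what": "Show unprocessed emails with GPT-4 classification results inline",
    "criteria": [
        AcceptanceCriterion("Unprocessed emails appear in queue within 30 seconds of receipt"),
        AcceptanceCriterion("Classification results appear within 5 seconds of email display"),
        AcceptanceCriterion("Confidence score shown as a percentage"),
        AcceptanceCriterion("User can override classification with one click"),
        AcceptanceCriterion("Queue refreshes automatically without page reload"),
    ],
}

def spec_met(spec):
    """The spec is met only when every acceptance criterion passes."""
    return all(c.met for c in spec["criteria"])
```

Keeping the criteria as data makes the evaluation binary per criterion, which is exactly what "define measurable criteria before generation" asks for.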

The Prototype Phase

The prototype phase uses vibe coding to generate implementation from the spec. The goal is not perfect code but working implementation that can be evaluated against acceptance criteria.

Generation Approach

Feed the spec to the AI along with relevant context. Request an implementation that meets the acceptance criteria. If the first attempt does not meet the criteria, do not immediately revise it; generate multiple approaches to see different implementation strategies.
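Generating several candidate implementations from the same spec might look like the following sketch. Here ask_model is a hypothetical stand-in for whatever chat-completion call your tooling provides, and the strategy hints are example nudges, not prescribed prompts.

```python
def build_prompt(spec: str, context: str, strategy_hint: str) -> str:
    """Combine the spec with relevant context and a strategy nudge."""
    return (
        f"Context:\n{context}\n\n"
        f"Spec (what, not how):\n{spec}\n\n"
        f"Implement this so it meets the acceptance criteria. "
        f"Approach to try: {strategy_hint}"
    )

def generate_candidates(ask_model, spec, context, strategies):
    """Request one implementation per strategy instead of revising the first attempt."""
    return {hint: ask_model(build_prompt(spec, context, hint)) for hint in strategies}
```

Comparing candidates side by side often reveals which implementation strategy the critique phase should favor.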

Timeboxing

Timebox prototype generation. If you have allocated two hours for the prototype phase and you are one hour in with no working code, switch tactics. Generate a simpler implementation, ask more specific questions, or pivot to directive mode.

The Critique Phase

The critique phase evaluates the prototype against the spec and success criteria. This is where learning happens if you are honest about the results.

Critique Framework

Evaluate each prototype on four dimensions. Completeness asks whether the prototype does what the spec requires. Correctness asks whether it does it right, meaning the implementation is sound. Usability asks whether users can accomplish their goals with the interface provided. Feasibility asks whether it can be built into production, considering architecture, security, and maintainability.
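The four dimensions can be recorded as a small rubric so each critique is comparable with the last. This is a sketch under our own naming; the 1-5 scale is an assumption, not something the chapter prescribes.

```python
from dataclasses import dataclass

@dataclass
class Critique:
    completeness: int  # does it do what the spec requires? (1-5)
    correctness: int   # is the implementation sound? (1-5)
    usability: int     # can users accomplish their goals? (1-5)
    feasibility: int   # could this become production code? (1-5)
    notes: str = ""

    def weakest_dimension(self) -> str:
        """Point the next iteration at the lowest-scoring dimension."""
        scores = {
            "completeness": self.completeness,
            "correctness": self.correctness,
            "usability": self.usability,
            "feasibility": self.feasibility,
        }
        return min(scores, key=scores.get)
```

Scoring forces the honesty the critique phase depends on: a dimension you cannot score is a dimension you have not actually evaluated.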

Documenting Critique

Write down critique findings: what worked, what did not, and what remains uncertain. This documentation informs the revise phase and future iterations.

The Five-Question Critique

After each iteration, ask five diagnostic questions to guide your critique. First, what did this prototype accomplish well, identifying strengths to build upon? Second, what did it fail to accomplish, identifying gaps between intention and outcome? Third, what surprised me either positively or negatively, surfacing unexpected results that warrant attention? Fourth, what do I not yet understand, identifying areas requiring further investigation? Fifth, what should I try differently in the next iteration, translating lessons into actionable changes?
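One way to keep critiques consistent across iterations is to capture the five answers in a fixed record. A minimal sketch; the function and field names are of our choosing.

```python
FIVE_QUESTIONS = [
    "What did this prototype accomplish well?",
    "What did it fail to accomplish?",
    "What surprised me, positively or negatively?",
    "What do I not yet understand?",
    "What should I try differently in the next iteration?",
]

def critique_record(answers: list[str]) -> dict:
    """Pair each diagnostic question with its answer; refuse incomplete critiques."""
    if len(answers) != len(FIVE_QUESTIONS):
        raise ValueError("Answer all five questions before moving to revise.")
    return dict(zip(FIVE_QUESTIONS, answers))
```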

The Revise Phase

The revise phase updates the overall approach based on critique findings. It is not just refining the current prototype but potentially rethinking the direction if critique reveals that the current approach is flawed.

Revise Decisions

After critique, you face four possible decisions. Continue means the next iteration extends or refines the current approach, a sign that the direction is sound. Redirect means keeping what's valuable while fundamentally changing direction, useful when part of the approach works but the overall path needs adjustment. Simplify means the current approach is too complex and should be stripped to its essentials before retrying. Terminate means the direction is not viable and you should pivot to a different approach; this is not failure but intelligent resource allocation.
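The four decisions can be modeled as an enum with a simple rule of thumb for choosing among them. The decision order below (kill criteria first, then complexity, then soundness) is an illustrative assumption, not guidance from the chapter.

```python
from enum import Enum

class ReviseDecision(Enum):
    CONTINUE = "extend or refine the current approach"
    REDIRECT = "keep what's valuable, change direction"
    SIMPLIFY = "strip to essentials before retrying"
    TERMINATE = "pivot to a different approach"

def decide(meets_kill_criteria: bool, approach_sound: bool, too_complex: bool) -> ReviseDecision:
    """Illustrative decision rule: check kill criteria first, then complexity, then soundness."""
    if meets_kill_criteria:
        return ReviseDecision.TERMINATE
    if too_complex:
        return ReviseDecision.SIMPLIFY
    if approach_sound:
        return ReviseDecision.CONTINUE
    return ReviseDecision.REDIRECT
```

Checking kill criteria before anything else keeps the terminate decision objective, as the framing chapter intends.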

When to Terminate

Termination is not failure. Terminating a direction that is not working frees resources for directions that might work. Kill criteria established during framing help make termination decisions objective rather than emotional.

Loop Velocity

The value of the SPCR loop is velocity. Each cycle should be short enough to maintain momentum but long enough to produce evaluable results. Most prototyping iterations should complete in one to three days.

Iteration Velocity Guidelines

Iteration velocity varies by prototype class. Feasibility prototypes benefit from 1-2 day iterations with 1-3 typical iterations, since the goal is quickly determining technical viability. Desirability prototypes typically run 2-3 day iterations with 3-5 iterations to properly evaluate user response. Viability prototypes require 1-2 week iterations with 2-4 iterations to validate economic and operational models. Implementation prototypes span 2-4 weeks with only 1-2 iterations since the focus shifts to production-quality output.
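For sprint planning, the guidelines above can be kept as a small lookup table. The numbers come straight from the text; the dictionary structure is ours.

```python
# (iteration length, typical iteration count) per prototype class, per the guidelines above
VELOCITY = {
    "feasibility":    {"iteration_length": "1-2 days",  "typical_iterations": (1, 3)},
    "desirability":   {"iteration_length": "2-3 days",  "typical_iterations": (3, 5)},
    "viability":      {"iteration_length": "1-2 weeks", "typical_iterations": (2, 4)},
    "implementation": {"iteration_length": "2-4 weeks", "typical_iterations": (1, 2)},
}
```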

QuickShip SPCR Example

QuickShip's exception handler prototyping demonstrates the SPCR loop in practice:

Iteration 1: The spec called for a basic email queue display with manual classification, and the team vibe-coded a queue view with a dropdown for classification. The critique found it functional but slow, with users unlikely to use it daily. The revise decision was to automate classification and improve performance.

Iteration 2: The spec called for auto-classification with GPT-4 and confidence display. The prototype added GPT-4 integration with confidence scores. The critique found accuracy at 78%, not yet good enough, with confidence calibration seeming off. The revise decision was to improve prompt engineering before adding features.

Iteration 3: The spec called for an improved classification prompt with threshold for low-confidence alerts. The prototype implemented a new prompt with examples and confidence thresholds. The critique found accuracy improved to 89% and low-confidence alerts useful. The revise decision was to continue polishing and add a user feedback mechanism.
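The low-confidence alert from iteration 3 can be sketched as a thresholded routing rule. The 0.7 cutoff and the function name are hypothetical; QuickShip's actual threshold is not given in the text.

```python
def route_classification(exception_type: str, confidence: float, threshold: float = 0.7) -> dict:
    """Auto-apply high-confidence classifications; flag the rest for human review."""
    if confidence >= threshold:
        return {"action": "auto_classify", "type": exception_type, "confidence": confidence}
    return {"action": "needs_review", "type": exception_type, "confidence": confidence}
```

Routing low-confidence cases to a person is also the natural hook for the user feedback mechanism the revise decision calls for.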

Key Takeaways

The SPCR loop structures iteration through Spec, Prototype, Critique, and Revise phases that repeat until the prototype answers its guiding questions. Specs should state what not how, with clear acceptance criteria that define success before generation begins. Critique evaluates completeness, correctness, usability, and feasibility to provide comprehensive assessment. Revise decisions include continue, redirect, simplify, or terminate, each appropriate in different situations. Iteration velocity varies by prototype class, so feasibility iterations are short while implementation iterations are longer. Termination is not failure; it frees resources for viable directions when the evidence suggests a different path is needed.

Exercise: Running an SPCR Cycle

Apply one SPCR cycle to your current project by working through each phase. First, write a spec for the next increment that states what you want to accomplish without prescribing how. Second, prototype it using vibe coding to generate working implementation. Third, critique using the five-question framework to honestly assess what worked and what did not. Fourth, make a revise decision and document your reasoning so future iterations benefit from this learning.

What's Next

In Section 11.4, we examine Working with AI-Generated UI, Backend, and Data Mocks, exploring patterns for generating and integrating AI-created components.