Part I: Why AI Changes Product Creation
Chapter 3

The Human-AI Product Stack

3.1 Visual Model of the Human-AI Stack

Objective: Understand the layered architecture that describes how human judgment and AI capabilities combine in successful AI products.

"The best AI products do not replace human judgment. They create a system where human and AI strengths complement each other."

The Human-AI Collaboration Handbook

The Human-AI Product Stack is a framework for understanding how AI capabilities and human judgment interact in successful AI products. Rather than viewing AI as an autonomous agent or a simple tool, the stack perspective helps product teams think about the appropriate roles for human and machine at each level of product functionality.

The Five Layers of the Stack

The Human-AI Product Stack

Layer 5: Human Judgment (Strategy and Meaning)

Humans define goals, evaluate outcomes, and determine what matters. AI cannot replace human values or strategic direction.

Layer 4: Human Decision (Selection Among Options)

Humans choose between AI-generated options, approve AI recommendations, and make final calls on high-stakes decisions.

Layer 3: AI Decision (Autonomous AI Action)

AI makes decisions within defined parameters, taking actions that do not require human approval for every instance.

Layer 2: AI Recommendation (AI Suggests)

AI generates suggestions, options, or analysis that humans review and decide whether to act upon.

Layer 1: AI Automation (Background Processing)

AI handles routine tasks invisibly, without presenting options or requiring human input.
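The five layers above can be sketched as a small Python encoding. This is an illustrative sketch only; the enum and helper names are assumptions, not part of the framework itself:

```python
from enum import IntEnum

class StackLayer(IntEnum):
    """The five layers of the Human-AI Product Stack (hypothetical encoding)."""
    AI_AUTOMATION = 1      # background processing, invisible to users
    AI_RECOMMENDATION = 2  # AI suggests; humans decide whether to act
    AI_DECISION = 3        # AI acts autonomously within set parameters
    HUMAN_DECISION = 4     # humans select among AI-generated options
    HUMAN_JUDGMENT = 5     # humans define goals and evaluation criteria

def final_decision_maker(layer: StackLayer) -> str:
    """Who makes the final call at each layer, per the definitions above:
    the AI acts without per-instance human approval only at Layers 1 and 3."""
    if layer in (StackLayer.AI_AUTOMATION, StackLayer.AI_DECISION):
        return "ai"
    return "human"
```

Note that the stack is not a simple autonomy gradient: Layer 3 is more autonomous than Layer 2, while Layers 4 and 5 put the human back in charge.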

Understanding the Layers

Layer 1: AI Automation (Background Processing)

At the lowest layer, AI operates invisibly, handling tasks that do not require human attention. Users may not even know AI is involved. Examples include spam filtering in email systems, auto-correction in text input, fraud detection that flags transactions, and content recommendation in feeds.

Moving up the stack, we encounter layers that increasingly involve human-AI interaction.

Layer 2: AI Recommendation

AI presents options or suggestions that humans can accept or reject. The key characteristic is that AI proposes but humans decide. Examples include reply suggestions in email, code completion suggestions in IDEs, product recommendations during shopping, and route suggestions in navigation apps.

Layer 3: AI Decision

AI makes decisions autonomously within defined boundaries. Humans set the parameters and monitor outcomes, but do not review every individual decision. Examples include adaptive pricing in real-time, automated content moderation, self-driving features in vehicles, and automated inventory reordering.

At higher stakes, the balance shifts back toward human judgment.

Layer 4: Human Decision

AI generates options or analysis, but humans make the actual decision. This layer is appropriate when decisions have high stakes or irreversible consequences, when legal or regulatory requirements mandate human judgment, when trust is not yet established for AI to decide autonomously, or when decisions require contextual understanding that AI cannot access.
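The Layer 4 criteria above can be expressed as a simple routing check. The function and flag names below are a hypothetical sketch under the chapter's criteria, not a prescribed API:

```python
def choose_decision_layer(high_stakes: bool,
                          irreversible: bool,
                          regulated: bool,
                          ai_trust_established: bool) -> int:
    """Route a single decision to Layer 4 (human decides among AI options)
    or Layer 3 (AI decides autonomously). Any one trigger is enough to
    keep a human in the loop."""
    if high_stakes or irreversible or regulated or not ai_trust_established:
        return 4
    return 3
```

For example, a decision subject to regulatory review (regulated=True) lands at Layer 4 even when the AI's accuracy is well established.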

At the top of the stack, human judgment provides strategic direction that AI cannot replicate.

Layer 5: Human Judgment

Humans define strategic direction, values, and evaluation criteria. AI cannot determine what goals are important or assess whether outcomes are meaningful. This layer sets the context for all other layers.

Running Product: QuickShip Logistics

QuickShip's routing system demonstrates the layered stack.

Layer 1 (Automation): background tracking of package locations and routine address verification.

Layer 2 (Recommendation): AI-suggested optimal routes and delivery time estimates offered to drivers.

Layer 3 (AI Decision): the AI selects a carrier for each package within pricing parameters set by humans.

Layer 4 (Human Decision): customer service reps approve exception handling for damaged packages.

Layer 5 (Human Judgment): business leaders define service-level priorities and customer promises.

Real products move across these layers as trust is established and capabilities evolve.

Moving Between Layers

Products can move between layers as trust is established and reliability is proven.

Layer Progression Principles

Start conservative: new AI features typically begin at Layer 2 (Recommendation) or Layer 3 (Decision) with a narrow scope.

Earn trust through reliability: move to higher autonomy only after the AI demonstrates consistent accuracy.

Allow user control: let users choose their preferred layer based on their trust and context.

Consider stakes and reversibility: higher stakes and less reversible actions warrant higher layers, with more human involvement.
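The "earn trust through reliability" principle can be sketched as a promotion gate. The threshold and sample count below are placeholder values for illustration, not recommendations:

```python
def can_increase_autonomy(outcomes: list[bool],
                          accuracy_threshold: float = 0.98,
                          min_samples: int = 500) -> bool:
    """Gate a move to higher autonomy (e.g., Layer 2 -> Layer 3) on a
    demonstrated track record: enough observed decisions, and a high
    enough fraction of them judged correct."""
    if len(outcomes) < min_samples:
        return False  # not enough evidence yet; stay conservative
    accuracy = sum(outcomes) / len(outcomes)
    return accuracy >= accuracy_threshold
```

In practice, the threshold would be set per feature, tightening as stakes rise and reversibility falls.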

Eval-First in Practice

Before deciding which layer your AI should operate at, define how you will measure whether a layer assignment is appropriate. In Human-AI stack design, this means establishing eval criteria for each layer transition. A micro-eval for layer assignment: given a sample of 50 diverse user tasks, what percentage requires human judgment rather than routine processing? Without this eval-first analysis, products default to either unsafe autonomy or excessive human involvement.
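The micro-eval described above might look like the following in code; the label names are assumptions for illustration:

```python
def layer_assignment_eval(task_labels: list[str]) -> float:
    """Given tasks hand-labeled 'human_judgment' or 'routine', return the
    share that requires human judgment -- the micro-eval for deciding
    where a feature should sit on the stack."""
    if not task_labels:
        raise ValueError("need at least one labeled task")
    need_human = sum(1 for label in task_labels if label == "human_judgment")
    return need_human / len(task_labels)
```

Running this over the 50-task sample gives a rough signal: a high share suggests starting at Layer 2 or Layer 4, while a low share suggests Layer 1 or Layer 3 may be safe.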

What's Next?

Next, we explore AI as Amplifier, Not Replacement, examining how AI best augments human capabilities rather than attempting to replace human judgment entirely.