Part II: Discovery and Design
Section 7.4

Rapid Concept Generation

The hardest part of ideation is not generating ideas. It is generating the right ideas and recognizing them when they appear. AI can accelerate the generation phase dramatically, producing hundreds of concepts in minutes. But AI-generated ideas need human judgment to filter, combine, and select. The teams that win with AI ideation are those that know how to prompt effectively, filter ruthlessly, and recognize the ideas worth developing.

Ideation Irony

AI can generate 100 startup ideas in 30 seconds. The problem is that 99 of them are obvious: "AI-powered X for Y." The one interesting idea requires human judgment to find, and that's the one that matters.

Using AI for Ideation

LLMs are trained on vast amounts of human knowledge about problems, solutions, technologies, and products. This knowledge makes them capable ideation partners that can suggest concepts across domains, combine ideas in novel ways, and push beyond the obvious solutions that humans default to.

The Ideation Prompt Framework

Effective AI ideation requires structured prompts that give the model the right context and constraints. The best ideation prompts include: the problem space, success criteria, constraints, and the type of ideas you want.

Ideation Prompt Template
CONTEXT:
We are building [product type] for [target users] who struggle with [problem].

CURRENT STATE:
Today they solve this by [existing approach].
Our existing product does [current capability].

GOAL:
Generate concepts that [desired outcome].
Focus on ideas that [specific focus criteria].

CONSTRAINTS:
- Must work with [technical constraint]
- Should avoid [known anti-patterns]
- Consider [relevant context]

OUTPUT FORMAT:
For each concept:
1. Concept name and one-line description
2. How it addresses the problem (the "why")
3. Key challenges to implementation
4. Estimated complexity (Low/Medium/High)
5. Why this concept is distinctive (not obvious)

Generate 15-20 concepts spanning:
- Incremental improvements to current approach
- Significant reimagining of the solution
- Wild ideas that might seem impossible but have a kernel of value

This structure produces more useful ideation output than open-ended prompts.
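The template above can also be rendered programmatically so it is reused consistently across problem spaces. The sketch below is illustrative, not a prescribed tool: the field names and example values are hypothetical, and teams should adapt the template text to their own context.

```python
from string import Template

# Hypothetical sketch: render the ideation prompt template from
# structured inputs. All field names and values are illustrative.
IDEATION_TEMPLATE = Template("""\
CONTEXT:
We are building $product_type for $target_users who struggle with $problem.

GOAL:
Generate concepts that $desired_outcome.

CONSTRAINTS:
$constraints

OUTPUT FORMAT:
For each concept: name, how it addresses the problem, key challenges,
estimated complexity (Low/Medium/High), and why it is distinctive.

Generate 15-20 concepts spanning incremental improvements,
significant reimaginings, and wild ideas with a kernel of value.
""")

def build_ideation_prompt(product_type, target_users, problem,
                          desired_outcome, constraints):
    """Fill the template; constraints is a list of plain strings."""
    return IDEATION_TEMPLATE.substitute(
        product_type=product_type,
        target_users=target_users,
        problem=problem,
        desired_outcome=desired_outcome,
        constraints="\n".join(f"- {c}" for c in constraints),
    )

prompt = build_ideation_prompt(
    product_type="a shipping platform",
    target_users="e-commerce sellers",
    problem="unpredictable carrier costs",
    desired_outcome="reduce per-shipment cost",
    constraints=["Must work with existing carrier APIs"],
)
print(prompt)
```

Keeping the template in one place makes it easy to run the same structured prompt across multiple problem framings and compare the output.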

Domain-Aware vs. Cross-Domain Ideation

AI ideation can operate in two modes: domain-aware (staying within established category patterns) and cross-domain (applying patterns from unrelated fields). Both modes produce value, and alternating between them generates a fuller solution space.

Two Ideation Modes

Domain-Aware Ideation explores the solution space within your industry or category, useful for differentiation within existing patterns. It asks what leading competitors do and what you can do better, what industry-specific capabilities you could combine in new ways, and what adjacent problems you could solve for your current users.

Cross-Domain Ideation applies patterns from unrelated fields, useful for breakthrough concepts that redefine categories. It asks how an unrelated industry solves a similar problem, what technology from a different field could apply here, and what consumers expect from products in a different category.

Expanding Solution Spaces

The quality of your final solution depends on the breadth of your solution space. If you evaluate ideas from a narrow solution space, you are optimizing within constraints you set yourself. AI can dramatically expand the solution space by suggesting directions you would not have considered.

The Assumption Challenge

Every product team works within assumptions about what is possible, what users will accept, and what the market wants. These assumptions constrain the solution space. AI ideation can challenge these assumptions by generating ideas that violate the constraints you have taken for granted.

EduGen: Challenging Assumptions About Learning Format

EduGen started with an assumption: vocational learners want video-based courses. This assumption constrained their solution space to variations on video content.

AI-assisted assumption challenging generated three breakthrough concepts:

Challenge 1: "What if learning did not require dedicated time?" This led to microlearning triggered by job context, enabling learning while doing rather than in separate study sessions.

Challenge 2: "What if content adapted without algorithms?" This led to peer-adaptive learning, where learners teach each other rather than relying solely on automated systems.

Challenge 3: "What if certification tracked skills, not courses?" This led to skill graph-based credentialing that recognizes demonstrated competencies.

The peer-adaptive learning concept became one of EduGen's most distinctive features, despite initial skepticism from the team. It emerged from an assumption challenge that the team had never posed to themselves.

Lateral Thinking Prompts

Lateral thinking generates ideas by approaching problems from unexpected angles. AI can execute lateral thinking prompts that humans might find uncomfortable or strange.

Lateral Thinking Prompt Examples

Use these prompts to expand beyond conventional solution spaces:

- What would the opposite approach look like? Forces consideration of solutions that invert your current direction.
- How would a competitor solve this problem? Leverages competitive perspective.
- What if cost was not a constraint? Removes economic assumptions.
- What if users had no technology? Challenges technology-dependent thinking.
- What would a 10x improvement look like, versus a 10 percent improvement? Pushes beyond incremental thinking.
- What would you build if you had no existing codebase? Eliminates path dependency.
- What do users pretend to want, versus what do they actually want? Exposes stated versus real needs.

Combinatorial Innovation

Innovation often comes from combining existing concepts in new ways. AI is particularly good at combinatorial thinking because it can draw on knowledge across domains and identify non-obvious combinations.

The Combination Matrix

A combination matrix maps existing concepts against each other to identify unexplored cells. AI can generate the matrix and identify which combinations are novel.

Concept Combination Framework
STEP 1: Identify Core Dimensions
   List the key dimensions of your problem space:
   - User needs (from JTBD analysis)
   - Technical approaches (AI techniques, UX patterns)
   - Delivery mechanisms (mobile, web, voice, AR)
   - Content types (text, video, interactive, gamified)
   - Business models (subscription, transaction, freemium)

STEP 2: Generate the Matrix
   For each combination of dimensions, ask:
   - Has this combination been tried?
   - If yes, how can we do it better?
   - If no, what makes it possible now?

STEP 3: Evaluate Promising Combinations
   For combinations that seem novel:
   - What is the core value proposition?
   - What technical barriers exist?
   - How quickly can we test the concept?

STEP 4: Select for Development
   Prioritize combinations that are:
   - Novel (not just incremental)
   - Feasible (technology exists or is developable)
   - Valuable (addresses real user need)
   - Ownable (we can execute better than others)

Combinatorial innovation systematically explores the solution space for non-obvious combinations.
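Step 2 of the framework, enumerating every cell of the matrix and flagging the untried ones, is mechanical enough to sketch in code. The dimensions and the "already tried" set below are hypothetical examples, not a canonical taxonomy.

```python
from itertools import product

# Illustrative combination matrix: each dimension lists a few example
# values; the Cartesian product enumerates every cell of the matrix.
dimensions = {
    "need": ["learn on the job", "prove skills"],
    "delivery": ["mobile", "voice"],
    "model": ["subscription", "freemium"],
}

# Hypothetical set of combinations the market has already tried.
already_tried = {
    ("learn on the job", "mobile", "subscription"),
}

def novel_combinations(dimensions, tried):
    """Yield each combination of dimension values not yet explored."""
    for combo in product(*dimensions.values()):
        if combo not in tried:
            yield dict(zip(dimensions.keys(), combo))

for combo in novel_combinations(dimensions, already_tried):
    print(combo)
```

Even this toy matrix yields seven unexplored cells from three small dimensions; real matrices grow quickly, which is exactly why Steps 3 and 4 of the framework exist to evaluate and prioritize.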

The Adjacent Possible

Stuart Kauffman coined the term "adjacent possible," later popularized by Steven Johnson, to describe the space of ideas that are one step away from what currently exists. AI can help identify the adjacent possible by showing you which combinations are one conceptual step from existing successful products.

RetailMind: Finding the Adjacent Possible

RetailMind wanted to expand from their shopping assistant into a broader in-store AI offering. They used AI to map the adjacent possible from their current position.

Current position: AI shopping assistant on tablets

Adjacent expansion options identified:

One step away: an AI assistant for store employees (internal tool); inventory prediction based on shopping patterns.

Two steps away: personalized store layouts based on customer segments; predictive staffing based on anticipated traffic.

Three steps away: fully autonomous store operations, the farthest frontier of the adjacent possible from their current position.

The team chose to pursue inventory prediction first (high adjacent possible value, reasonable technical risk) while designing the employee assistant with future internal integration in mind. This balanced near-term opportunity with long-term platform potential.
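Mapping the adjacent possible amounts to measuring graph distance from your current position. The sketch below models concept adjacency as a small graph mirroring the RetailMind example; the edges are hypothetical illustrations, not a claim about how these capabilities actually depend on one another.

```python
from collections import deque

# Hypothetical adjacency graph: an edge means one concept is a single
# conceptual step from another. Edges are invented for illustration.
ADJACENCY = {
    "shopping assistant": ["employee assistant", "inventory prediction"],
    "employee assistant": ["predictive staffing"],
    "inventory prediction": ["personalized layouts"],
    "predictive staffing": ["autonomous operations"],
    "personalized layouts": ["autonomous operations"],
    "autonomous operations": [],
}

def steps_from(start, graph):
    """Breadth-first search: map each concept to its step count from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

for concept, steps in sorted(steps_from("shopping assistant", ADJACENCY).items(),
                             key=lambda kv: kv[1]):
    print(steps, concept)
```

Sorting by distance reproduces the one-step, two-step, three-step tiers from the case study, which makes the trade-off between near-term opportunity and frontier bets easy to see at a glance.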

Filtering and Prioritizing Ideas

Generating hundreds of ideas is easy. Filtering to the ones worth developing is hard. The filtering process requires judgment about feasibility, value, and strategic fit that AI cannot fully replicate. But AI can assist by providing structured evaluation criteria and helping to apply them consistently.

The Three-Layer Filter

Idea filtering works best as a three-layer process that progressively narrows the field:

The Three-Layer Filtering Process

Layer 1: Elimination

Remove ideas that are technically impossible with current technology, legally or ethically problematic, completely outside our strategic scope, or dependent on assumptions we know to be false.

Layer 2: Scoring

Score remaining ideas on four criteria, each rated 1-10: Value (how much does it help users?), Feasibility (can we build it?), Differentiation (does it set us apart?), and Strategic fit (does it serve our direction?).

Layer 3: Judgment

Discuss top-scored ideas as a team, asking whether we believe in this direction, what would have to be true for this to succeed, what we are betting will not be true, and whether we can learn faster than we can build.

Eval-First in Practice

Before committing to any filtered concept, define how you will measure concept quality and selection accuracy. A micro-eval for concept generation tracks: selection rate versus eventual success (did high-scored ideas actually succeed?), false negative rate (did we reject ideas that would have worked?), and time-to-validation (how quickly did good concepts prove themselves). EduGen's eval-first insight: they tracked their Layer 3 judgment calls and found that human "belief" scores correlated 2x better with eventual success than Layer 2 numerical scores. They updated their process to weight human conviction more heavily.

The WTP vs. WTA Balance

Every concept requires trade-offs. Willingness to pay (WTP) versus willingness to adopt (WTA) is a useful lens for evaluating concepts. Great concepts have high user value (high WTA) and create sustainable business value (high WTP).

The High-WTA / Low-WTP Trap

AI can generate concepts that users love but that nobody will pay for. This happens when the concept requires ongoing human involvement that is too expensive to scale, when it addresses a problem users have but do not prioritize paying to solve, or when it requires infrastructure or integration that is too costly for the target market. Always evaluate concepts for both user value and business model viability, not just user value alone.

QuickShip: Filtering Concepts to One Winner

QuickShip generated 22 concept directions through AI-assisted ideation. They filtered down to one through the three-layer process. Layer 1 eliminated 11 concepts that were technically infeasible, off-strategy, or dependent on false assumptions. Layer 2 scored the 11 remaining concepts on value, feasibility, differentiation, and strategic fit. The top scores were the carrier recommendation engine at 8.2, predictive delivery estimates at 7.8, and returns automation at 7.1.

Layer 3 judgment discussion:

The team debated carrier recommendation versus predictive estimates. Both had high scores. The deciding factor was learning velocity: they could test carrier recommendation with a simple rules-based prototype in one week. Predictive estimates required months of data infrastructure work before any test was possible. They chose carrier recommendation and validated the concept with a prototype in 10 days.
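The "simple rules-based prototype" the text describes could be as small as a static rate table and a couple of rules. The sketch below is a hypothetical illustration of that idea; the carriers, rates, and rules are invented, not QuickShip's actual logic.

```python
# Hypothetical rate table: carrier -> (base cost, cost per kg, max weight kg).
RATE_TABLE = {
    "EconomyPost": (4.00, 0.50, 20),
    "FastFreight": (9.00, 0.30, 100),
    "SameDayCo": (20.00, 1.00, 10),
}

def recommend_carrier(weight_kg, urgent=False):
    """Pick the cheapest carrier that can take the parcel.

    Urgent shipments are routed to the same-day carrier when the
    parcel fits, otherwise to the fast carrier.
    """
    if urgent:
        return "SameDayCo" if weight_kg <= 10 else "FastFreight"
    options = [
        (base + per_kg * weight_kg, name)
        for name, (base, per_kg, max_w) in RATE_TABLE.items()
        if weight_kg <= max_w
    ]
    cost, name = min(options)
    return name

print(recommend_carrier(5))    # light, non-urgent parcel
print(recommend_carrier(50))   # heavy parcel, small carriers excluded
```

A prototype at this level of fidelity is enough to put recommendations in front of users and measure whether they follow them, which is the learning-velocity advantage the team was betting on.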

The Concept Brief

Each concept that survives filtering deserves a written brief that documents the judgment that went into selection. The brief becomes a reference point for the team as they develop the concept and a tool for aligning stakeholders.

Concept Brief Template
CONCEPT BRIEF: [Name]

Problem Addressed:
- What user job does this address?
- How does it connect to discovered unmet need?

Concept Summary:
- One paragraph describing the concept
- The core user experience in one sentence

Why AI:
- Why is AI necessary here?
- What can AI do that humans or rules cannot?

Strategic Rationale:
- Why this concept over others?
- How does it fit our platform and direction?

Success Metrics:
- How will we know if this concept succeeds?
- What quantitative targets define success?

Open Questions:
- What do we not know that we need to learn?
- What assumptions must prove true for this to work?

Time to Prototype:
- How quickly can we test the core hypothesis?
- What is the minimum viable test?

Every concept that proceeds to development should have a written brief documenting the judgment behind the decision.

Key Takeaways

- AI ideation accelerates the generation phase, but it requires structured prompts and human filtering judgment to separate valuable concepts from noise.
- Domain-aware and cross-domain ideation serve different purposes: the former enables differentiation within existing patterns; the latter enables breakthrough concepts that redefine categories.
- AI can challenge the assumptions that constrain the solution space, revealing the adjacent possible and expanding the range of solutions considered.
- Combinatorial innovation systematically explores novel combinations of existing concepts to surface non-obvious opportunities.
- Idea filtering works best as a three-layer process combining elimination, scoring, and human judgment, rather than relying on any single approach.
- The high-WTA/low-WTP trap catches teams that optimize for user love without validating business model viability, producing beloved products that nobody will pay for.

Exercise: Running an AI Ideation Session

Practice AI-assisted concept generation for a problem you are working on:

1. Write a structured ideation prompt for your problem space that provides the context, goals, and constraints needed for effective AI ideation.
2. Generate 15-20 concepts using an LLM with your structured prompt.
3. Apply the three-layer filter to narrow to 3-5 promising concepts: eliminate impossible ideas, score the rest, and apply human judgment.
4. Write concept briefs for your top 3, documenting the problem addressed, concept summary, why AI is necessary, strategic rationale, success metrics, open questions, and time to prototype.
5. Identify the fastest way to prototype test each concept to validate assumptions quickly.
6. Reflect on where AI added the most value in the process and where human judgment remained essential.

What's Next

In Section 7.5, we explore Research Validity Traps, examining when LLMs hallucinate market data, how confirmation bias infects AI-assisted research, what validation is required, and how to maintain human oversight in discovery.