Part I: Why AI Changes
Chapter 4, Section 4.4

Control Loops and Feedback System Design

Every AI product is a control system. The AI observes some aspect of the world, takes action based on its observations, the world changes, and the AI observes again. Understanding AI products through the lens of control theory reveals why some AI deployments succeed and others oscillate wildly or drift into failure modes that no one anticipated.

Oscillation Example

Recommendation algorithms once learned that people who searched for "diet plans" might also be interested in "weight loss pills." This created a feedback loop: search diet, see pills, click pills, see more diet ads. Somewhere, a teenager's harmless "how to lose belly fat" search turned into an infinite wellness content loop.

AI Products as Control Systems

Control theory, developed originally for mechanical and electrical systems, provides powerful frameworks for understanding AI products. A control system has sensors (ways of observing the world), actuators (ways of affecting the world), feedback (information about the results of actions), and a controller (logic that decides actions based on observations and goals).

AI products fit this model naturally. An AI code assistant senses code context, acts by suggesting completions, and receives feedback through user acceptance or rejection. An AI content moderation system senses user content, acts by flagging or removing, and receives feedback through appeals and downstream metrics. An AI recommendation engine senses user behavior, acts by showing content, and receives feedback through engagement signals.
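The sensor/actuator/feedback/controller cycle can be sketched in a few lines. Everything below is illustrative rather than any real assistant's implementation: the class name, the acceptance-rate goal, and the proportional correction are assumptions used to make the four roles concrete.

```python
from dataclasses import dataclass, field

@dataclass
class ControlLoop:
    """Toy sketch of sensor, actuator, feedback, and controller for an
    AI code assistant (hypothetical names and numbers throughout)."""
    goal: float                        # target acceptance rate, e.g. 0.7
    confidence_threshold: float = 0.5  # how confident the model must be to act
    history: list = field(default_factory=list)

    def sense(self, accepted: bool) -> None:
        # Sensor + feedback: observe whether the user accepted the last suggestion.
        self.history.append(1.0 if accepted else 0.0)

    def control(self) -> None:
        # Controller: if acceptance falls below the goal, become more selective.
        if not self.history:
            return
        rate = sum(self.history) / len(self.history)
        self.confidence_threshold += 0.1 * (self.goal - rate)

    def act(self, model_confidence: float) -> bool:
        # Actuator: only surface suggestions above the current threshold.
        return model_confidence >= self.confidence_threshold
```

After a run of rejections, the controller raises the threshold so the assistant interrupts less often; a run of acceptances lowers it again.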

The Control Loop Hierarchy

AI products typically operate within multiple nested control loops. The immediate loop occurs when the AI produces an output, the user responds, and the AI adapts. The session loop builds context and refines behavior over a user session. The product loop enables learning and improvement (or degradation) across all users. The business loop affects business outcomes, which in turn shape priorities for AI development.

Failure can occur at any level. A system that works well in the immediate loop may fail at the session loop if context management is poor. A system that works at all internal loops may fail at the business loop if it optimizes for the wrong objectives.
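One way to make the hierarchy concrete is as a table of cadences. The names and timescales below are illustrative, not a standard taxonomy; the helper simply encodes the observation that a failure at one level can be invisible at the levels beneath it.

```python
# Each loop closes at a different timescale; the signals that reveal a
# failure at one level do not exist at the levels below it.
LOOPS = {
    "immediate": {"closes_in": "seconds",  "signal": "accept/reject of one output"},
    "session":   {"closes_in": "minutes",  "signal": "context built over a session"},
    "product":   {"closes_in": "weeks",    "signal": "aggregate metrics across users"},
    "business":  {"closes_in": "quarters", "signal": "revenue and retention"},
}

def levels_that_may_look_healthy(failing_level: str) -> list[str]:
    """Return the faster loops that can all pass while this one fails."""
    order = list(LOOPS)
    return order[: order.index(failing_level)]
```

For example, a product-loop failure (metrics degrading across the user base) is compatible with every individual interaction and session looking fine.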

Feedback Loop Design Principles

Well-designed feedback loops are essential for AI product reliability and improvement. Understanding control theory helps you design loops that converge toward goals rather than oscillating or diverging.

Here are key principles for designing effective feedback loops:

The Feedback Quality Framework

Feedback Must Be Timely

Feedback that comes too late is useless for learning. A spam filter that tells you a message was spam three weeks after you received it provides no useful signal. Design feedback loops that close quickly relative to the rate of environment change.

Feedback Must Be Honest

Biased feedback creates biased systems. If users systematically under-report certain types of problems (perhaps because they have given up or learned to live with them), the AI will not learn about those failure modes.

Feedback Must Be Sufficient

Feedback should provide enough information for the system to distinguish between good and bad outcomes. Binary thumbs up/down is often insufficient; understanding why an output was rejected is more valuable.

Feedback Must Be Sustainable

Asking for too much feedback exhausts users and degrades signal quality. Balance feedback needs against user burden. Sometimes automated proxies for user satisfaction are more sustainable than explicit feedback.
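Three of the four properties can be enforced mechanically at collection time. The sketch below is a hypothetical collector, not a real library; the sampling rate and staleness window are illustrative, and honesty is the one property code alone cannot guarantee.

```python
import random
import time

class FeedbackCollector:
    """Toy feedback collector (hypothetical API): sample users rather than
    prompting everyone (sustainable), discard stale signals (timely), and
    require a reason alongside the rating (sufficient)."""

    def __init__(self, prompt_rate: float = 0.1, max_age_s: float = 3600.0):
        self.prompt_rate = prompt_rate  # prompt ~10% of the time by default
        self.max_age_s = max_age_s      # ignore feedback on outputs older than this
        self.records: list[dict] = []

    def should_prompt(self) -> bool:
        # Sustainable: ask only a sampled fraction of users.
        return random.random() < self.prompt_rate

    def record(self, output_id: str, rating: int, reason: str, ts: float) -> bool:
        # Sufficient: a bare thumbs up/down without a reason is rejected.
        if not reason:
            return False
        # Timely: feedback about an output too old to act on is discarded.
        if time.time() - ts > self.max_age_s:
            return False
        self.records.append({"output_id": output_id, "rating": rating, "reason": reason})
        return True
```

Honesty has to be addressed upstream, for instance by checking whether the users who respond are representative of the users who do not.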

Principle: Interfaces Are Control Systems

The interface is where control loops are instantiated. Every design choice affects how observations are gathered, how actions are taken, and how feedback is provided. Interface design is control system design.

Stability and Oscillation

Control theory teaches us about stability: a system is stable if it converges toward a goal, and unstable if it oscillates or diverges. AI products can exhibit both behaviors.
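The convergence-versus-oscillation distinction shows up even in a one-line corrective loop. The update rule below is the standard discrete proportional controller, state += gain * (goal - state): the error shrinks by a factor of (1 - gain) each step, so gains between 0 and 2 converge while gains above 2 oscillate with growing amplitude. The specific numbers are illustrative.

```python
def run_loop(gain: float, goal: float = 1.0, steps: int = 50) -> list[float]:
    """Discrete proportional control. Each step the error is multiplied
    by (1 - gain): |1 - gain| < 1 converges, |1 - gain| > 1 diverges
    by overshooting the goal in alternating directions."""
    state, trace = 0.0, []
    for _ in range(steps):
        state += gain * (goal - state)  # correct toward the goal
        trace.append(state)
    return trace
```

The same structure, with only the gain changed, produces either smooth convergence or runaway oscillation: a system that reacts too aggressively to its own error destabilizes itself.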

Understanding oscillation failure modes helps you design systems that maintain stable behavior over time.

Oscillation Failure Modes

AI systems can fall into several characteristic loops. User-AI drift occurs when users gradually change behavior in response to AI suggestions, which changes what the AI suggests, which further changes user behavior. Filter bubbles form when recommendations narrow user preferences, which narrows recommendations further, creating echo chambers. Gaming cycles emerge when users learn to game AI systems for favorable outputs, the AI adapts to the gaming, and users refine their tactics.
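The filter-bubble mode is a positive feedback loop, and its core dynamic fits in a few lines. This is a deliberately crude toy model with an invented narrowing factor, not a model of any real recommender: each round, the user can only click what was shown, and the next slate is drawn from what was clicked.

```python
def simulate_filter_bubble(steps: int = 20, narrowing: float = 0.85) -> list[float]:
    """Toy positive-feedback loop: content diversity decays geometrically
    because each slate is built from clicks on the previous, narrower slate."""
    diversity = 1.0
    trace = []
    for _ in range(steps):
        clicked_within = diversity              # clicks come only from what was shown
        diversity = narrowing * clicked_within  # next slate narrows to match clicks
        trace.append(diversity)
    return trace
```

With no outside source of variety, diversity collapses toward zero; this is why deliberate exploration (showing content the loop would never select) is a standard countermeasure.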

Worked Example: AI Content Moderation Oscillation

Consider an AI content moderation system that learns from user reports. Initially, it removes clearly violating content. Bad actors learn to make violations slightly different from what the AI has learned; the AI adapts to these new patterns, and bad actors adapt again. The result is an oscillation: the AI improves at detecting obvious violations, bad actors shift to subtler ones, the AI has less training signal for those subtler violations, its performance degrades on them, and the cycle repeats at a higher level of sophistication. Solutions include human review of edge cases, diverse training signals, and deliberately introducing variety in training data.
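The degradation mechanism in this cycle can be made concrete with a toy simulation. All constants are invented for illustration; the one structural assumption, taken from the example above, is that the filter improves only in proportion to the violations it actually catches, while adversaries grow subtler every round.

```python
def moderation_arms_race(rounds: int = 10) -> list[float]:
    """Toy adversarial cycle: training signal is gated on caught violations,
    so once content becomes subtle enough to evade the filter, the filter
    stops improving while adversaries keep adapting."""
    filter_skill, subtlety = 0.5, 0.0
    catch_rates = []
    for _ in range(rounds):
        catch = max(0.0, filter_skill - subtlety)  # subtler content evades detection
        catch_rates.append(catch)
        filter_skill += 0.3 * catch  # learning only from what was caught
        subtlety += 0.2              # bad actors adapt every round
    return catch_rates
```

The catch rate falls to zero and stays there: the filter's training signal dries up exactly when it is needed most. This is the dynamic that human review of edge cases and deliberately varied training data are meant to break.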