Part V

Evaluation, Reliability, and Governance

Making AI products work reliably in production

Part Overview

Part V covers the practices that make AI products dependable in production. You will learn to build evaluation pipelines (evals), add observability, enforce guardrails, manage cost and latency, and meet compliance requirements.

Interlock with Previous Part

What this part inherits from Part IV:

What this part changes retroactively:

Artifacts that now need updating:

Chapters in This Part

LLM-as-Judge, eval pipelines, eval-driven development.

Tracing, debugging AI failures, failure mode analysis.

Guardrails, circuit breakers, graceful degradation.

Cost optimization, latency management, unit economics.

NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001, the EU AI Act, bias detection.
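The eval-driven workflow the first chapter covers can be previewed in miniature. The sketch below is illustrative only: all names are hypothetical, and the LLM judge is stubbed with a keyword check so the example runs offline; a real judge would send the question, answer, and a scoring rubric to a model.

```python
# Minimal eval-harness sketch (illustrative; names are hypothetical).
# A "judge" scores each (question, answer) pair; the harness averages
# the scores into a single eval metric.

def judge(question: str, answer: str) -> float:
    """Stub LLM-as-judge: 1.0 if the answer contains the expected
    keyword, else 0.0. A real judge would call an LLM with a rubric."""
    expected = {"What is 2+2?": "4", "Capital of France?": "Paris"}
    return 1.0 if expected.get(question, "") in answer else 0.0

def run_eval(cases: list[tuple[str, str]]) -> float:
    """Run the judge over all cases and return the mean score."""
    scores = [judge(q, a) for q, a in cases]
    return sum(scores) / len(scores)

cases = [
    ("What is 2+2?", "The answer is 4."),
    ("Capital of France?", "It is Lyon."),   # wrong on purpose
]
print(run_eval(cases))  # 0.5
```

The key design point, developed fully in the chapter, is that the harness and the judge are separated: the judge can be swapped from a stub to an LLM call without touching the pipeline that aggregates and tracks scores.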

Bridge Notes

Earlier artifacts updated by this part:

Later chapters this part prepares for: