Constraints make us better

On David Epstein’s Inside the Box, and why the most useful AI, delivery and governance work of the next two years will be done inside deliberate, well-shaped boxes. Constraints make us better.

Read or skip?

Read. If you’re running anything to do with AI adoption, platform engineering, governance or enterprise delivery, Epstein’s book gives you better language for an argument you’re already losing in meetings — that more freedom is not the same as more capability, and that the teams shipping the best work right now are the ones building the tightest boxes.

The Mendeleev problem

In 1869, Dmitri Mendeleev was not trying to discover a fundamental law of chemistry. He was trying to write an introductory textbook, and the publisher had given him a fixed page count. The constraint forced him to find a space-saving arrangement of the elements. That arrangement turned out to be the periodic table.

This is the move Epstein makes again and again, and it’s the move worth taking seriously. The story we tell ourselves about innovation — blue-sky thinking, removing barriers, more freedom — is, in his telling, mostly wrong. The breakthroughs came from people working inside boxes that were either accidentally well-shaped (Mendeleev’s page count, Keith Jarrett’s broken piano) or deliberately well-shaped (Pixar’s Three Pitches Rule, Dr. Seuss’s 225-word vocabulary, Monet refusing to use black).

It’s an old idea. Stravinsky said it. Orson Welles said it. What Epstein adds is a structured account of which constraints work, when, and why — and that’s where it gets interesting for anyone running technology delivery in 2026.

The General Magic problem (which is now the AI problem)

Chapter one of Inside the Box is about General Magic, the early-1990s startup that essentially designed the smartphone a decade before Apple. It had the engineers. It had the vision. It had Apple, Sony, Motorola and AT&T as partners. It had hundreds of millions of dollars. It had no deadline, no customer constraint, and no externally imposed shape.

It died of featuritis. Endless capability, no coherent product, no shipping discipline. Brilliant people, runaway scope, and a roadmap that only ever expanded. Anyone who has watched an AI team in 2025–26 should feel the hairs on the back of their neck stand up reading this chapter.

The default state of a generously funded technical team without constraints is not innovation. It is featuritis.

Replace “1992 Silicon Valley” with “2026 enterprise AI programme” and the failure mode is identical. Foundation model access? Yes. Compute budget? Effectively unlimited. Stakeholder ask? “Show us what’s possible.” Six months later: 14 demos, three internal pilots, zero production deployments, and a deck explaining why the next phase needs more money.

The teams shipping useful AI — the ones I’ve actually seen produce repeatable results in regulated environments — are not the ones with the most freedom. They are the ones who picked one workflow, defined the failure modes they would not accept, fixed the latency and cost ceiling, and refused to expand scope until the first thing worked. That is Pixar Planning, applied to LLMs.
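That discipline — one workflow, named failure modes, hard latency and cost ceilings — is concrete enough to write down as an artefact rather than a slide. A minimal sketch of what such a "box" might look like as code; the workflow name, ceilings, and failure labels are all hypothetical illustrations, not anything from the book:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkflowConstraints:
    """The box: one workflow, explicit ceilings, explicit unacceptable failures."""
    workflow: str
    max_latency_ms: int
    max_cost_usd_per_call: float
    unacceptable_failures: tuple[str, ...]

    def within_box(self, latency_ms: float, cost_usd: float,
                   observed_failures: set[str]) -> bool:
        """An output only counts as shippable if it stays inside every ceiling."""
        return (latency_ms <= self.max_latency_ms
                and cost_usd <= self.max_cost_usd_per_call
                and not observed_failures & set(self.unacceptable_failures))

# One workflow, not fourteen demos (all values illustrative)
box = WorkflowConstraints(
    workflow="invoice-triage",
    max_latency_ms=2000,
    max_cost_usd_per_call=0.05,
    unacceptable_failures=("hallucinated_amount", "wrong_currency"),
)
```

The point is less the code than the shape: scope expansion now requires editing a frozen object everyone can see, rather than quietly adding a slide to the roadmap.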

Paired constraints, applied to delivery

Epstein leans heavily on the work of psychologist Patricia Stokes, who studied creative breakthroughs and noticed a pattern. The artists, scientists and inventors who broke through tended to use paired constraints: one rule that precludes their familiar approach, and one rule that promotes a specific new one. Monet precluded black paint and promoted broken brushwork. Woolf precluded conventional plot and promoted interior monologue.

This is a useful structure for delivery and platform teams trying to break out of patterns that aren’t working. Two examples I’ve watched land in real organisations:

  • Architecture review that won’t scale. Preclude: no synchronous review meetings. Promote: every change ships behind a feature flag with a written rollback plan. The bottleneck moves from a committee to a document, and the document is faster.
  • AI procurement theatre. Preclude: no proofs-of-concept without a named production owner. Promote: every pilot must define the metric and threshold at which it gets killed. Suddenly the pipeline of zombie pilots stops growing.

Notice what these have in common. They don’t add freedom. They take it away in a specific direction, and add it back in another. That’s the move. Most “innovation programmes” add freedom in every direction at once and are then surprised when nothing ships.
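The second paired constraint above — every pilot must define the metric and threshold at which it gets killed — can be made mechanical, which is exactly what stops the zombie-pilot pipeline. A sketch under assumed, illustrative names (the pilots, owners and metrics here are invented):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pilot:
    name: str
    owner: str                    # preclude: no pilot without a named production owner
    metric: str                   # promote: the one number that decides survival
    kill_below: float             # threshold agreed before the pilot starts
    observed: Optional[float] = None  # None = not yet measured

def review(pilots: list[Pilot]) -> list[str]:
    """Return the names of pilots that must be killed this cycle."""
    return [p.name for p in pilots
            if p.observed is not None and p.observed < p.kill_below]

pipeline = [
    Pilot("doc-summary", "jane", "reviewer_acceptance", kill_below=0.8, observed=0.55),
    Pilot("support-triage", "sam", "deflection_rate", kill_below=0.3),  # not yet measured
]
```

Because the threshold is written down before any results exist, nobody can retrofit a success story around whatever number the pilot happened to produce.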

Goldratt is back, and he’s about AI now

Eli Goldratt’s Theory of Constraints is one of the older ideas in the book and one of the most quietly relevant. Every system has a single bottleneck. Optimising anywhere else is theatre. The job of leadership is to find the drum and widen it.

In 2026, the bottleneck of most AI initiatives is not model capability. It hasn’t been for a year. The bottleneck is one of: evaluation infrastructure, data access governance, the ability of a human reviewer to look at output fast enough, or — most often — the willingness of a single risk function to sign something off. Throwing more compute, more models, or more engineers at the problem doesn’t move the needle, because none of those is the drum.

If your AI roadmap looks like a list of capabilities rather than a list of bottlenecks being widened, it isn’t a roadmap. It’s a wishlist.
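Finding the drum is usually less analysis than arithmetic: measure how long work actually sits in each stage and take the maximum. A toy sketch with invented numbers — the stage names and durations are hypothetical, not data from any real programme:

```python
# Median days a work item spends in each stage of an (invented) AI initiative
flow = {
    "ideation": 2.0,
    "build": 5.0,
    "evaluation": 3.0,
    "risk_signoff": 21.0,   # the drum
    "deploy": 1.0,
}

# The single slowest stage is the bottleneck; everything else is decoration
bottleneck = max(flow, key=flow.get)
```

In this toy flow, doubling build speed changes almost nothing; halving sign-off time changes everything — which is Goldratt's whole point.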

Universal Design and the AI guardrails problem

Chapter seven is, on the face of it, about US Army body armour. The Army kept adding plates until soldiers couldn’t move — Epstein calls it the Christmas tree effect. When they finally redesigned the vest around the constraints of female soldiers, they accidentally produced a lighter, modular vest that was better for everyone, including the largest men in the platoon.

This is exactly the argument for designing AI systems around their hardest users — the regulated, the audited, the high-stakes — rather than the easiest. If your governance, evaluation and rollback story works for a clinical workflow or a financial advice workflow, it works for the marketing chatbot. The reverse is not true, and most enterprises are still building the reverse.

It’s also the argument for treating governance as a design constraint, not a tax. Teams that view compliance as friction try to remove it and end up with the General Magic problem. Teams that view compliance as the shape of the box tend to ship things that survive contact with the auditor.

Where the book is weakest

Honest limitations, because uncritical praise is its own kind of failure mode.

  • Survivorship bias. We hear from constrained projects that worked. The graveyard of starved, deadlined, under-resourced projects that simply died is much larger and goes uncounted.
  • Constraints assume competence. A skilled team plus tight constraints produces breakthroughs. An unskilled team plus tight constraints produces a missed deadline. The book is much better on the first case than the second.
  • Light on platform engineering. Most examples are products, art, and individual creators. The book is thin on what constraints look like in a 200-engineer platform org with five years of legacy.

So what, on Monday morning

Three things worth trying this week, drawn from the book and tested in real delivery contexts:

  • Write down the hypothesis before you start. Not the goal — the hypothesis. What do you expect to see, and at what threshold do you stop? Epstein calls the alternative HARKing — Hypothesising After Results are Known — and once you see it, you see it everywhere.
  • Find your drum. Spend an hour mapping the actual flow of any initiative you care about. Not the org chart. The flow. The single slowest step is your bottleneck. Anything you’re doing that isn’t widening it is decoration.
  • Use a paired constraint to break a stuck pattern. Preclude one familiar move; promote one unfamiliar one. Make it specific enough that you can tell at the end of the week whether you held the line.

Constraints aren’t the opposite of capability. They’re the shape of it. The teams that internalise that, in this AI cycle, will ship. The ones still waiting for the constraints to be removed will spend another year not shipping, and call it strategy.
