The Complexity Tax: Why We Build Models We'll Never Deploy

Originally published at Nomis Solutions

A neural network achieves an AUC of 0.82. A logistic regression on the same data gets 0.72. The gap between them is what we call the complexity tax: the accuracy you sacrifice when you choose interpretability over raw predictive power. This article explains why we routinely build models we know will never reach production, and how doing so leads to better conversations with the C-suite about what should actually be deployed.
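The tax is easy to measure on any dataset: fit a simple and a complex model, score both, and take the difference in AUC. The sketch below illustrates the idea on synthetic data with a deliberately nonlinear target (an XOR-style interaction that a linear model cannot capture); the data, the choice of gradient boosting as the "complex" model, and the specific numbers are illustrative assumptions, not the bank models discussed in the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic data with a nonlinear (XOR-like) decision boundary:
# the label depends on the sign of the product of two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a plain logistic regression.
simple = LogisticRegression().fit(X_train, y_train)
# Complex model: gradient boosting, standing in for the neural network.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

auc_simple = roc_auc_score(y_test, simple.predict_proba(X_test)[:, 1])
auc_complex = roc_auc_score(y_test, complex_model.predict_proba(X_test)[:, 1])

# The complexity tax: accuracy forgone by deploying the simple model.
tax = auc_complex - auc_simple
print(f"simple AUC={auc_simple:.2f}  complex AUC={auc_complex:.2f}  tax={tax:.2f}")
```

On data like this the gap is large because the signal is purely an interaction effect; on datasets where the relationships are mostly linear, the tax shrinks toward zero, which is exactly the question the article's workflow is designed to answer before anything ships.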

Key Topics:

  • Why we build 300-400 model variants knowing most will be discarded
  • The five-stage workflow from complex exploration to interpretable production models
  • How a 20-minute conversation with the CEO, CFO, and heads of marketing and retail changed how a bank thought about its analytics investment
  • The evolving regulatory tolerance for model complexity
  • When the complexity tax is worth paying — and when it isn’t