Blackbox

It works. It works terrifyingly well. But it is mute.

Ironically, we call this device the "black box" (it’s actually bright orange). It is the ultimate witness. It swallows a storm of inputs—airspeed, altitude, button presses, screams—and produces a perfectly linear story of cause and effect.

You can’t depose a neural network. It has no intent. It has no memory. It is a mathematical hallucination.

We are now in a position where we must trust the oracle, but we are forbidden from looking behind the curtain. Historically, Enlightenment thinkers believed that explanation preceded trust. We believed the sun would rise because Newton explained gravity. We believed a surgeon was competent because we saw their diploma.

To survive this, we need a new discipline: Explainable AI (XAI). Instead of opening the black box (which is effectively impossible for deep networks), we build second models that act as interpreters. We ask the black box to highlight the pixels it was looking at. We force it to provide a "reason" after the fact, even if that reason is just a simulation.
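The "second model as interpreter" idea can be sketched in a few lines. The following is a toy local-surrogate explanation in the spirit of LIME: perturb one input, query the opaque model, and fit a simple linear model to the answers. Everything here—the `black_box` function, the sample sizes, the numbers—is invented for illustration, not any real system's API.

```python
import numpy as np

# A stand-in opaque model: we may query it, but not inspect it.
# (It secretly keys only on feature 0; we pretend not to know that.)
def black_box(X):
    return (X[:, 0] > 0.5).astype(float)

def local_surrogate(x, n_samples=500, scale=0.1, seed=0):
    """Fit a linear 'interpreter' to the black box near one input x."""
    rng = np.random.default_rng(seed)
    # Perturb the input and ask the black box how it labels each variant.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = black_box(X)
    # Least-squares fit: the weights approximate local feature importance.
    A = np.hstack([X, np.ones((n_samples, 1))])  # append intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w[:-1]  # per-feature weights, intercept dropped

x = np.array([0.5, 0.5, 0.5])
weights = local_surrogate(x)
# The surrogate's largest weight is the post-hoc "reason" for the decision.
print(int(np.argmax(np.abs(weights))))  # → 0
```

The surrogate never opens the box; it only narrates its behavior in a small neighborhood—which is exactly why such a "reason" can be a simulation rather than the truth.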

The new black box is the fire. And it is smiling, waiting for us to ask it a question we desperately wish we hadn't.

In 2016, ProPublica investigated an algorithm called COMPAS, used in US courts to predict recidivism. The black box returned a "risk score." ProPublica found it was twice as likely to falsely label Black defendants as future criminals as it was to mislabel white defendants. The company that made the algorithm denied the bias. Because the box was black, both sides could claim the math supported them.
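The disparity ProPublica measured is a false positive rate: among people who did not reoffend, how often did the score flag them as high risk anyway? A minimal sketch of that arithmetic, using small hypothetical cohorts (the numbers below are invented to make the "twice as likely" pattern visible, not ProPublica's data):

```python
def false_positive_rate(flagged_high_risk, reoffended):
    # FPR = people flagged who did NOT reoffend / all who did not reoffend
    fp = sum(f and not r for f, r in zip(flagged_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)
    return fp / negatives

# Hypothetical cohorts: identical outcomes, different flag rates.
group_a_flags = [1, 1, 0, 0, 1, 0, 0, 0]
group_a_reoff = [1, 0, 0, 0, 0, 0, 0, 0]
group_b_flags = [1, 1, 0, 0, 0, 0, 0, 0]
group_b_reoff = [1, 0, 0, 0, 0, 0, 0, 0]

fpr_a = false_positive_rate(group_a_flags, group_a_reoff)  # 2/7
fpr_b = false_positive_rate(group_b_flags, group_b_reoff)  # 1/7
print(fpr_a / fpr_b)  # → 2.0: group A is falsely flagged twice as often
```

Note that both groups here reoffend at the same rate—which is why the vendor could truthfully claim its scores were "equally accurate" overall while ProPublica truthfully reported unequal false positives. The black box let both readings stand.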