“CSI: Algorithmic Justice” will be really boring but maybe important

Hearings are being held by the Senate Permanent Subcommittee on Investigations, led by Senator Carl Levin (D., Mich.), to explore whether computer-driven trading and “conflicts of interest” are “eroding investors’ confidence” in the stock market. (sarcastic comment goes here)

The pull quote in today’s WSJ piece by Scott Patterson (“Venues That Pay Get Orders: Broker”) caught my attention: under the photo of Flash Boys-famous Brad Katsuyama on C1, Senator Levin is quoted:

“We’ve got to rid our market of conflicts of interest [COI] to the extent that it’s humanly possible.”

The piece ends with Katsuyama:

“Disclosure and transparency will help people make the right decision” about how they trade, he said. “Right now a lot of it is opaque.”

But this isn’t about humans or people. High-Frequency-Trading-fueled booms and busts are the logical (if, to a human, occasionally irrational) results of algorithms executing their instructions with ruthless precision at speeds faster than ‘humanly possible’. Software’s conquest of Wall Street has reduced reliance on slow, error-prone, occasionally-moral human cognition to drive profits: the algorithms (self-correcting, self-optimizing) are out there, working for the “haves”, dimly visible only when they cause a trillion dollars to disappear. Logically, of course.
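To make the “self-optimizing” point concrete, here’s a toy sketch (not any real trading system, and deliberately simplistic): a momentum rule that nudges its own trigger threshold after every trade depending on whether the trade paid off. Run it for a few thousand ticks and the parameter actually executing in production no longer matches anything a human wrote in rev 1.

```python
import random

random.seed(42)  # reproducible toy market

class MomentumBot:
    """Toy trading rule: buy when the price jumps more than `threshold`.

    After each trade it adjusts its own threshold toward whatever has
    been paying off -- a crude stand-in for the self-optimizing loops
    described above. All names and numbers here are illustrative.
    """

    def __init__(self, threshold=1.0, step=0.1):
        self.threshold = threshold
        self.step = step
        self.pnl = 0.0

    def on_tick(self, prev_price, price, next_price):
        jump = price - prev_price
        if jump > self.threshold:           # rule fires: buy one unit
            profit = next_price - price     # hold for exactly one tick
            self.pnl += profit
            # "Self-optimization": loosen the trigger after a winning
            # trade, tighten it after a losing one.
            self.threshold += -self.step if profit > 0 else self.step

# Simulate a random-walk price series and let the bot trade it.
bot = MomentumBot()
prices = [100.0]
for _ in range(5000):
    prices.append(prices[-1] + random.gauss(0, 1))

for i in range(1, len(prices) - 1):
    bot.on_tick(prices[i - 1], prices[i], prices[i + 1])

print(f"threshold drifted from 1.0 to {bot.threshold:.2f}")
```

The punchline for the accountability question: nobody typed the final threshold into the source code. It emerged from the feedback loop, which is exactly what makes “bust the creators” harder after a few generations of this.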

Google’s algorithms bring the merger of satellite images and topographic maps to their logical conclusion in Clement Valla’s “Postcards from Google Earth” series

Does the Permanent Subcommittee hope to hold code accountable for its actions? Perhaps you could bust the creators on the first rev of the software, but assuming a few generations of self-optimization, will the code’s functional autonomy insulate its creators from culpability? Holy legal fees, Batman.

PS: Doesn’t the human-driven financial industry have a history of growing rich exploiting holes in systems? If algorithms are simply faster to those holes, but operate within the “rules”, can you blame them? And who (or what) will go after them? And what will we do if we “catch” them? Next up, a really boring “CSI: Algorithmic Justice”?

Engineering morality – and human values – into code is the debate behind autonomous cars deciding whose life matters more, thermostats spying on you, and financial algorithms causing trillions to evaporate. This ain’t going away, as we rely on software and algorithms – known and unknown – in more and more of our lives.

But here’s the thought experiment: Will an algorithm ever be prosecuted? What will the statute of limitations say about code that self-optimizes into an entirely new form nanoseconds after a ‘crime’? And what will the sentence be?