At LawPanel we have an interest in AI: both in what it can usefully do now in the applications we build, and in what it might enable in legal services that could change the ‘how’ and the ‘by whom, or what.’
There have been a couple of interesting articles in recent months on the ‘black box’ nature of AI. The NYT opined on it (https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html), followed by The Economist saying that before AI is widely adopted it must explain itself. Like some self-replicating form, these articles then spawned others. So here I give life to another, but with a somewhat contrarian view.
These articles, and the others I’ve seen, take a curate’s egg view of AI. Yes, it’s all very nice, this AI coming over here and taking the jobs that are mundane and repetitive, but the moment it begins to make decisions that really matter, such as those with social impact, it’s a different story. That’s when the bien pensant (who would normally make such decisions; no defensive reaction there!) demand the AI explain itself. And stand up straight with its socks pulled up whilst doing so.
AI is a black box of machine learning, which is simply automated statistics (though with not much about it that is simple): there is no underlying reason, rationale or algorithm. Much the same, I would say, is true of how we humans reason, except that AI is much better at being consistent. People, on the other hand, are masters of self-deceiving story-telling. What is in fact post-hoc rationalisation we (self-deceivingly) tell ourselves is a sort of internal Socratic discourse, in which clarity of thought and motive wins out. It is really just the thinking person’s self-deception.
For much as we would like to think of ourselves as coolly detached, rational analysts, carefully dissecting and weighing the evidence before coming to an entirely cerebral conclusion, all the evidence is that we are heaving beds of unconscious and not-so-unconscious biases. Our emotions and irrationalities charge about like a crazed bull elephant, whilst our rational self is a flea-sized mahout perched up top, making a pretence of control but really just along for the ride. We pick the evidence that supports our pre-existing view and dismiss that which runs contrary to it. In politics much of this is driven by tribalism and group identity.
Indeed, our self-deception goes further. We are such good storytellers to ourselves that, having made our initial impulsive selection, we then backfill, with all calculations skewed to give that answer. Post-hoc rationalisation is one of man’s finest achievements, on display in all areas of life, both high and low, professional and domestic.
Indeed it takes a certain hubris, intellectual vanity, or under-appreciation of the biases and inconsistencies that riddle human decision-making at all levels to think outcomes come second to the virtue contained in the decision-making. An hour or two as a guinea pig in a few behavioural economics experiments should be required activity. Or the reading of Thinking, Fast and Slow by Nobel laureate Daniel Kahneman, drawing on his work with Amos Tversky, from whom I now realise I’ve probably taken most of the ideas in this piece, having initially thought they were mine.
So rather than requiring AI to explain itself, maybe we should first stop kidding ourselves about our own decision-making processes. AI should then be allowed to do its thing, as what matters is not intentions but outcomes. And with AI it is possible to run endless simulations, making small incremental changes to each variable singly, then split test the outcomes and check that they are in line with what the great and the good declare they should be.
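That one-variable-at-a-time split testing can be sketched in a few lines. This is purely illustrative: the `decision_model`, its inputs (`income`, `debt_ratio`) and its threshold are hypothetical stand-ins for whatever system is being audited, not anything LawPanel has built.

```python
import random

def decision_model(income, debt_ratio, noise=0.0):
    # Hypothetical scoring model standing in for an opaque AI decision system.
    score = 0.6 * income - 0.4 * debt_ratio + noise
    return score > 0.2  # approve only if the score clears a threshold

def split_test(variable, deltas, trials=10_000, seed=42):
    # Vary ONE input variable at a time (holding the rest at a baseline)
    # and measure how the approval rate responds across repeated simulations.
    rng = random.Random(seed)
    base = {"income": 0.5, "debt_ratio": 0.3}
    results = {}
    for delta in deltas:
        inputs = dict(base)
        inputs[variable] += delta
        approvals = sum(
            decision_model(**inputs, noise=rng.gauss(0, 0.05))
            for _ in range(trials)
        )
        results[delta] = approvals / trials
    return results

rates = split_test("debt_ratio", deltas=[0.0, 0.1, 0.2])
# We can check the outcomes match what we declare they should be:
# approval rates should not rise as debt burden rises.
assert rates[0.0] >= rates[0.1] >= rates[0.2]
```

The point is that we need never open the box: we interrogate the model by its outcomes alone, exactly as the paragraph above suggests.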
But it could be that such views are themselves the result of cognitive fallacies, which could be more easily removed or updated than AI could be made to explain itself. After all, what matters is not whether the intentions behind an action are good, but the outcomes. Post-hoc rationalisation and self-justification are of much less interest to the injured party than not being injured in the first place. If AI can produce, first in simulations and then in the real world, quantitatively better outcomes, with those that are poor no worse than those of alternative (human) operators, why the need for the rationalisation?