AI, people, and the superficiality of rationality


At LawPanel we have an interest in AI: both in what it can usefully do now in the applications we build, and in what it might enable in the future.

 

There have been a couple of interesting articles in recent months on the ‘black box’ nature of AI. The New York Times opined that the black box of AI is ‘nothing to fear’, followed by The Economist saying that before AI is widely adopted it must explain itself. Like some self-replicating form, these articles then spawned others. So here I give life to another, with a somewhat contrarian view.

 

These and the other articles I’ve seen take a curate’s egg view of AI. Yes, it’s all very nice, this AI coming over here and taking the jobs that are mundane and repetitive, but the moment it begins to make decisions that really matter, such as those with social impact, it’s a different story. That’s when the bien pensant demand the AI explain itself. And stand up straight with its socks pulled up whilst doing so.

 

AI being a black box of machine learning, which is essentially automated statistics, there is no underlying reason or rationale. Much, I would say, as is the case with how we humans think, except that AI is much better at being consistent. People, on the other hand, are masters of self-deceiving storytelling; we have our own black box of prejudice, instinct, and bias. Instead of admitting that, however, we tell ourselves that our conclusions are arrived at by a sort of internal Socratic dialogue, where clarity of thought and motive wins out.

 

As much as we would like to think of ourselves as coolly detached, rational analysts, carefully dissecting and weighing the evidence before coming to an entirely cerebral conclusion, all the evidence is that we are heaving beds of unconscious and not-so-unconscious biases. Our emotions and instincts charge about like a crazed bull elephant, whilst our rational self is a flea-sized mahout who, perched up top, makes a pretence of control but is really just along for the ride. We pick the evidence that supports our pre-existing view, and dismiss that which is contrary to it. In politics, much of this is driven by tribalism and group identity.

 

Indeed our self-deception goes further. We’re such good storytellers to ourselves that, having made our initial impulsive selection, we then backfill with all calculations skewed to give that answer. Post-hoc rationalisation is one of man’s finest achievements, and it’s on display in all areas of life, both professional and domestic.

 

Indeed it takes a certain hubris, intellectual vanity, or under-appreciation of the biases and inconsistencies that riddle human decision-making at all levels to think outcomes come second to the virtue contained in the decision-making. An hour or two as a guinea pig in a few behavioural economics experiments should be required activity. Or the reading of Thinking, Fast and Slow by Nobel laureate Daniel Kahneman, drawing on his long collaboration with Amos Tversky, from whom I now realise I’ve probably taken most of the ideas in this piece, having initially thought they were mine.

 

So rather than AI being required to explain itself, maybe we should first stop kidding ourselves about our own decision-making process. AI should then be allowed to do its thing, as what matters are not intentions, but outcomes. And with AI it is possible to run endless simulations, making small incremental changes to each variable in turn, then split-test the outcomes and check that these are in line with what the great and the good declare they should be. A sketch of the idea follows.
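
As a rough illustration, here is a minimal sketch in Python of that one-variable-at-a-time simulation: perturb each input in turn, re-run the decisions over a simulated population, and compare the outcome rate against a baseline. The decision_model, variable names, and threshold here are hypothetical placeholders for whatever black box is under test, not any real system.

```python
import random

def decision_model(applicant):
    # Hypothetical stand-in for the black-box model under test.
    score = 0.4 * applicant["income"] + 0.6 * applicant["history"]
    return score > 0.5  # approve / reject

def simulate(population, tweak=None, delta=0.0):
    # Run the model over the whole population, optionally shifting
    # one variable by `delta` while holding the others fixed.
    approved = 0
    for person in population:
        p = dict(person)
        if tweak:
            p[tweak] += delta
        approved += decision_model(p)
    return approved / len(population)

random.seed(0)
population = [{"income": random.random(), "history": random.random()}
              for _ in range(10_000)]

baseline = simulate(population)
for variable in ("income", "history"):
    shifted = simulate(population, tweak=variable, delta=0.05)
    print(f"{variable}: baseline {baseline:.3f} -> shifted {shifted:.3f}")
```

Running each variable singly like this gives a crude sensitivity test: if a small shift in one input swings the approval rate far more than the great and the good would sanction, that outcome can be flagged and corrected without the model ever explaining itself.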

 

But it could be that the demand for AI to explain itself is itself the result of cognitive fallacies, ones more easily removed or updated than AI is made explicable. After all, what matters is not whether the intentions behind an action are good, but the outcomes. Post-hoc rationalisation or self-justification is of much less interest to the injured party than not being injured in the first place. If AI can produce, first in simulations and then in the real world, quantitatively better outcomes, with those that are poor no worse than under alternative (human) operators, why the need for the rationalisation?
