This is an extract from an editorial written by our CEO Mark Kingsley-Williams for The Trademark Lawyer – you can read the full article here.
Hardly a day seems to pass without a new report prophesying AI-delivered doom for all types of occupations. Journalists foretell (some with ill-concealed glee) how their friends who went into law and medicine will face a technological tsunami of the sort that has already crashed over their own profession thanks to the internet.
After all, studies show algorithms are better than medics at finding tumours in radiography images. Not only do they find tumours that medics miss; unlike medics, the algorithms do not give diametrically opposite opinions when shown the exact same image on different occasions. Jeremy Bentham, the 18th-century jurist and philosopher, was right when he observed: 'The rarest of all human qualities is consistency.' If you want consistency, choose the AI.
And yet throughout the last 300 years of rapid technological change there have been many predictions of new technologies having massive societal impact that simply did not come to pass.
There was the promise of flying cars. Small-scale nuclear fusion was going to power everything from the home to domestic transport. Home windmills were said to be the future of sustainable living. Domestic chores and care for the sick would soon be outsourced to obedient, friendly and tireless robots.
But, as PayPal founder Peter Thiel observed, instead of flying cars we got Twitter and 140 characters. Well, 280 now.
Predictions for AI will also, I predict, fall well short in many domains. Legal advice and attorney work will be among them, and for good reasons.
The closest recent parallel to AI and its impact is the spreadsheet. Spreadsheets replaced data-entry clerks, but not accountants and analysts; and with more data being captured and processed, new jobs were created in data analytics and forecasting. Likewise, after ATMs (cash machines) were introduced, the number of bank staff increased as more branches opened and more financial products were sold.
It would be wrong to conclude automatically that, just because lower-skilled jobs were lost before while more knowledgeable ones remained, this is the only possible outcome from AI. Instead we need to get beyond broad, superficial generalisations about AI and understand its strengths but also its manifest limitations.
What we have now might best be described as narrow AI. Within tight domains with minimal variance it can exceed human performance. In natural language processing, Google’s systems are as good as humans at listening to speech and transcribing it, which allows the 7 billion videos on YouTube to be subtitled. It also allows Everlaw’s systems to help lawyers prepare for trial by analysing text and other media to summarise documents and identify which may be the most relevant (after case-specific training).
But narrow AI, which can assist with a specific task, is wholly separate from something called general AI. A narrow AI can assist with one task but cannot then be turned to another. Attorneys have the sort of intelligence general AI aspires to: they take knowledge from one domain and transfer it to another, unrelated domain. There is currently no research agenda for general AI, let alone the super AI feared by Elon Musk, which would be orders of magnitude smarter than professionals.
At the moment AI has none of the real-world knowledge attorneys have, which for mere mortals is estimated at 10-100 terabytes. Attorneys have a great deal of domain and general knowledge laid on top of general cognitive functions. As well as knowing much more about much more, attorneys (mostly) also know what they don’t know. This keeps them within their domain, or, when they cannot stay within it, lets them obfuscate in a convincing or charming way. AI can do neither.
Could this change? Those who say AI will plough through the ranks of attorneys like a combine harvester tend not to have direct experience of developing AI applications. Instead, they extrapolate what they see as recent rapid progress into the future at a similar rate of change.
This is incorrect for many reasons.
Machine learning dominates AI and is where most research activity has been. But it only works on repetitive tasks in very tightly defined domains. At LawPanel we have painful experience of this from developing AILA, the world’s first chatbot trademark assistant. With a chat interface responding to natural-language questions, AILA can answer questions on such matters as the differences between, and benefits of, word marks and figurative marks; distinctiveness and descriptiveness; the costs of filing at different registries; and the classes needed for different goods and services. She will even run an initial clearance search for users, giving the answer ‘Probably available’ or ‘Probably not available’, before going on to take instructions online or capture contact details.
We use Microsoft’s natural-language-processing framework, and like all AI built on machine learning we went through a long process of training on sets of questions and answers. But what these frameworks lacked was the capability to give AILA a sense of context: if someone asked her about the cost of filing in the UK, and then asked for a search, AILA would still have to ask which registry to search.
An AI with the memory of a goldfish is neither much use nor a threat to humankind’s dominant place on earth. Thomas Brattli, LawPanel’s CTO, eventually came up with one of his nifty workarounds and built a context-holding framework. So AILA no longer embarrasses herself by asking for information she was given seconds earlier.
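The idea behind context holding can be sketched in a few lines. This is a minimal, hypothetical illustration, not LawPanel's implementation: entities mentioned earlier in the conversation are stored in named slots and reused instead of being asked for again.

```python
# A minimal sketch of "context holding" for a chatbot (illustrative only --
# not LawPanel's actual code). Entities from earlier turns are remembered
# in slots and reused instead of being re-asked.

class Context:
    def __init__(self):
        self.slots = {}                          # e.g. {"registry": "UK"}

    def update(self, **entities):
        # Keep only entities the current utterance actually mentioned.
        self.slots.update({k: v for k, v in entities.items() if v is not None})

    def get(self, slot):
        return self.slots.get(slot)

ctx = Context()
ctx.update(registry="UK")        # user: "What does filing in the UK cost?"

# Later turn -- user: "Run a search." No registry named this time,
# so fall back to the stored context instead of asking again:
registry = ctx.get("registry") or "ASK_USER"
print(registry)
```

The design choice is simply that missing entities fall back to the conversation's stored slots, and only when the context is also empty does the bot have to ask the user.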
The advocates of rapid AI progress point to recent apparent improvements. But these have come more from ever more abundant processing power than from fundamentally new techniques. Machine learning can be very effective at ploughing through 10,000 mortgage applications based on the same form with tick-box or multiple-choice answers. But that is an incredibly tight and controlled domain. Give the same technology different forms, even in the same subject domain of mortgage applications, and it will struggle or fail completely.
To put this into a broader context, the Electronic Frontier Foundation (EFF.org) runs a periodic scorecard of AI tasks. At the moment the best AI is unable to do any of the following with anything like the reliability we would expect of a seven-year-old:
Look at a picture and answer arbitrary questions about it
Read Wikipedia to answer questions
Answer simple science test questions
Some questions, such as ‘is the umbrella the right way up?’ in a picture, can be answered correctly by any four-year-old. No AI with general training has ever succeeded.
And there is no prospect of significant improvement. Most machine learning relies on a technique called backpropagation. Its roots can be traced back to the 1960s and 1970s, with the main foundations for its use today laid in the 1980s.
One of those who began to get useful results then was Geoffrey Hinton, subsequently one of the main advocates of backpropagation in machine learning during its wilderness years. But he now says it is probably a dead end, and that what is needed is a reboot and a fresh start: ‘My view is throw it all away and start again.’ The big concern amongst AI researchers and innovators is that neural networks don’t seem to learn like we do. ‘I don’t think it’s how the brain works. We clearly don’t need all the labeled data,’ observes Hinton.
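Backpropagation itself is simple enough to sketch. The toy below, a single sigmoid neuron learning the logical AND function, is my own illustrative example, not anything from the article: a forward pass computes a prediction, the error is propagated back as a gradient, and the weights are nudged downhill.

```python
import math
import random

# Backpropagation in miniature: one sigmoid neuron learning logical AND.
# Illustrative only -- real networks stack many such units in layers and
# propagate gradients back through each of them.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
lr = 0.5                                          # learning rate

for _ in range(10000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)        # forward pass
        grad = (y - target) * y * (1 - y)         # backward pass: dLoss/dz
        w1 -= lr * grad * x1                      # gradient step on each weight
        w2 -= lr * grad * x2
        b -= lr * grad

print([round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data])
```

Note Hinton's complaint is visible even here: the neuron needs the correct labels supplied for every example, many thousands of times over, which is nothing like how people learn.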
So extrapolating from recent progress into the future could well be flawed. The problem is what computer scientists call an optimisation problem with a local optimum: progress can be made, but the best that can be achieved within the locality falls well short of what is required. Instead, what’s needed is exploration of many other AI techniques, some perhaps not yet conceived, in order to find the global optimum.
Having seen the success of DeepMind’s AlphaGo project, where AI beat the world’s best player of Go, some believe reinforcement learning is the most promising route. Instead of learning rules and strategies from seeing how games were played by others (as many chess programs do), AlphaGo learnt from trial and error: it played games, kept what worked and rejected what didn’t. Intriguingly, it came up with new and unconventional tactics that bewildered the top players.
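The trial-and-error idea can be shown at toy scale. The "bandit" agent below, a standard textbook illustration and vastly simpler than AlphaGo, is told nothing about which of three options pays best; it simply tries them, tracks what worked and gravitates towards it:

```python
import random

# Trial-and-error learning in miniature: an epsilon-greedy "bandit" agent.
# It is never told the hidden payoff rates; it learns them purely by playing.
# A toy illustration of the reinforcement idea, not AlphaGo's algorithm.

random.seed(1)
payoffs = [0.2, 0.5, 0.8]        # hidden win probabilities for three options
values = [0.0, 0.0, 0.0]         # the agent's estimates, learned from play
counts = [0, 0, 0]

for t in range(5000):
    if random.random() < 0.1:                 # explore 10% of the time
        arm = random.randrange(3)
    else:                                     # otherwise exploit the best estimate
        arm = values.index(max(values))
    reward = 1 if random.random() < payoffs[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # running mean update

print(values.index(max(values)))  # the option the agent has learned to prefer
```

But note what made this work: a fixed set of moves, an unambiguous score after every trial, and thousands of cheap repetitions. As the next paragraph argues, most domains, legal advice included, offer none of those.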
But in many domains there is no step-by-step process and scoring, no overarching strategic objective to be reached. And even though Go has 300 times as many moves as chess (and at 10^120 there are more possible chess games than the 10^80 atoms in the observable universe!), it is still a comparatively tight domain with a simple rule set and clear, unambiguous scoring. Also, any optimisation within the field of Go is going to be global rather than local: improvements will almost always be cumulative, and layering AI techniques one on another will lead to gains equal to the sum of the parts. This is not expected to be the case in general AI.
So what will AI do in Trademark law?
We expect to see AI-powered assistants in the next few years assisting with research and routine information delivery in tightly defined areas of trademark law. The key is the multiplier effect, where having a number of such AI assistants will increase attorney productivity, with less of the repetitive, robotic work falling to the attorney. So firms may need a few fewer paralegals, just as previously they found they needed fewer typists.
Formulation of trademark advice will, though, remain the preserve of attorneys for at least a generation. With no consensus on the areas of research that might lead to general AI, let alone an intensive research effort to get there, twenty or thirty years is probably the minimum.
So can attorneys sleep easy? Yes and no. The most imminent threat to them is not AI. But we believe they will need to adapt and change, as existing technologies and changing client expectations from the ‘always available’ economy change the legal landscape.
But that’s an even bigger topic, for another time.