AI – Limitless Possibilities? Or are Legal Limits Necessary?

Games and Grand Masters

Artificial intelligence (AI) has long been identified as a goal of the software industry. We know Microsoft, IBM, Google, and Apple are investigating AI, but Facebook, Amazon, and a raft of other well-known technology companies have active projects as well.


With Google's AlphaGo system defeating the world Go champion, it joins a list of computers known to have beaten world champions and grand masters at a whole raft of games. Yet there is a big difference between winning board games and being ready to communicate with the world at large; indeed, it could be argued that the failure of Microsoft's Tay chatbot demonstrates just how far the machine has to go before it can communicate on an equal footing with people. AlphaGo's success may be a great achievement, and it came well ahead of schedule, with pundits having predicted that it would take at least another ten years for a computer to succeed at this particular game.

This illustrates the power of the “Three Digital Accelerators” – the exponential advance of processing power (as described by Moore’s law), increasing bandwidth, and the growing demand for storage as part of the learning process. The AlphaGo computer was able both to teach and to challenge itself to play better, yet stay within the defined rules of the game.
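
As a rough, back-of-the-envelope illustration of the exponential trend Moore's law describes, the short Python sketch below projects transistor counts doubling roughly every two years. The 1971 starting point and the two-year doubling period are ballpark assumptions used purely for illustration, not figures taken from this article.

# Rough illustration of Moore's law: transistor counts doubling about every two years.
# The 1971 starting point (Intel 4004, ~2,300 transistors) and the two-year doubling
# period are common ballpark figures, used here only to show the shape of the curve.

def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count assuming a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * (2 ** doublings)

if __name__ == "__main__":
    for year in (1971, 1991, 2011, 2016):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")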

Learn from Challenges

IBM’s Watson is a cognitive computer built to learn from the challenges it faces, though arguably all it took to defeat Jeopardy!’s two greatest champions was access to a superior general-knowledge database. Yet IBM has used Watson on other projects that require it to identify innovative solutions; it has grown its natural-language capabilities, is known to identify and evaluate hypotheses, and arguably demonstrates natural learning capabilities. Watson sits at the centre of many of IBM’s current projects, and one of the greatest challenges has been storing the information needed for the millions of requests the machine receives each day across these projects; IBM also intends to expand its image-processing capabilities. These are being put to use for the benefit of Big Blue’s corporate clients, yet many of the applications are arguably powerful Business Intelligence (BI) solutions, where humans still make the final decisions.
KPMG’s recent announcement that it is forming an alliance with IBM’s Watson unit to develop high-tech auditing tools supports this thesis. Fields like auditing and accounting are about developing advanced Business Intelligence capabilities, and the alliance is meant to transform KPMG’s audit and tax practices and give it a huge advantage over other audit firms. In reality, though, it is simply a case of using smart analysis to assess the capability of any business: a smarter BI tool that learns from past errors, including the end results of past tax audits. Arguably, having an AI point out the most promising forecasts for future business activity is taking information that was already available through BI systems and presenting it in a different way; it is nothing more than smart use of BI technology.
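
As a toy sketch of what “a smarter BI tool that learns from past errors” could look like in practice, the Python snippet below fits a simple model to the outcomes of past tax audits and scores a new filing by how much it resembles the ones that previously went wrong. The field names, the sample data, and the choice of logistic regression are my own illustrative assumptions, not details of the KPMG and IBM alliance; and, as argued above, the score is advice for a human auditor, not a decision.

# Toy sketch: "learning from past errors" as a simple classifier over historical
# tax-audit outcomes. Field names, sample data, and the model choice are
# illustrative assumptions only, not details of any real Watson or KPMG system.
from sklearn.linear_model import LogisticRegression

# Each row: [reported_revenue, claimed_deductions, prior_adjustments]
past_filings = [
    [1_000_000,  50_000, 0],
    [  750_000, 200_000, 2],
    [1_200_000,  40_000, 0],
    [  500_000, 180_000, 3],
]
# 1 = the past audit found material errors, 0 = it did not.
audit_found_errors = [0, 1, 0, 1]

model = LogisticRegression().fit(past_filings, audit_found_errors)

new_filing = [[900_000, 150_000, 1]]
risk = model.predict_proba(new_filing)[0][1]
print(f"Estimated audit-risk score: {risk:.2f}")  # a human auditor still decides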

Societal Laws

In the 1940s Isaac Asimov was one of many science-fiction writers developing stories about robots: machines that looked and acted like humans, the only difference being that if you tapped one somewhere on its body you would hear the resonating echo of metallic vibrations. Asimov stood out from his fellow writers by defining the three laws of robotics that governed the machines in his stories.
Professor Richard Susskind – author, speaker, and adviser to major professional firms and to national governments on how computers will affect the legal profession – says lawyers must “consider the various ways advanced computing could change their profession in coming years”. He is not considering the impact of AI on society at large, just on the legal profession, although he does question whether your next lawyer could be a machine. The reality is that web services like WebMD may provide us with medical advice, but you should still consult a doctor for real treatment; the same is true for law. Certain things can be done by machine, such as filing documents, but the role of the professional remains crucial in many situations, such as arguing a court case.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm” ~ Isaac Asimov, the first law of Robotics – 1942.
Asimov shared a vision with Elon Musk, founder of SpaceX and Tesla Motors: that there must be rules to control how machines function in society. They may envision different machines, but the effect is the same – we need new laws to protect the role humans play in our society and to ensure that all artificial intelligence systems obey those rules. In later books Asimov penned an additional law, known as the zeroth law because it preceded all others:
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm”
It is the potential for smart machines to harm human development that concerns some of the industry’s thinkers.
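
As a sketch of what “ensuring a system obeys such rules” might mean in software terms, the toy Python guard below checks every proposed action against a list of hard constraints before it is allowed to proceed. The action model and the rule wording are illustrative assumptions only; specifying and verifying real safety constraints is vastly harder than this.

# Toy sketch of hard rules enforced before an automated system acts.
# The Action model and the rule wording are illustrative assumptions only;
# real safety constraints are far harder to specify and verify.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risks_harm_to_person: bool = False
    risks_harm_to_humanity: bool = False

def permitted(action: Action) -> bool:
    """Return True only if the action violates none of the hard rules."""
    rules = [
        lambda a: not a.risks_harm_to_humanity,  # "zeroth law" style constraint
        lambda a: not a.risks_harm_to_person,    # "first law" style constraint
    ]
    return all(rule(action) for rule in rules)

if __name__ == "__main__":
    print(permitted(Action("file a routine report")))              # True
    print(permitted(Action("override a safety interlock", True)))  # False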

The Problem of Automation

According to MIT Technology Review, “automated vehicles and planes are being designed to drive and fly more safely than human operators ever can,” concluding that we will be safer using these vehicles. But this ex-programmer knows all too well that things do go wrong, even with the most complex of programs. There have been several air crashes caused by faulty flight systems; yes, we have learned from these disasters, and alterations to those programs should ensure each of those faults never occurs again. It is worth remembering, though, that people have an amazing will to survive and have at times pulled off the impossible, whereas all the machine has is a set of logic to follow – no intuition – and no matter how good the programming, that logic can come to an end.
Ulrike Barthelmess and Ulrich Furbach of the University of Koblenz, Germany argue that we humans suffer a deep-rooted fear of machines because of robotic horror stories and the development of a techno-phobia in society, rather like that of the Luddites of 19th-century England who wanted to destroy the machines of the industrial revolution. Yet today we have less to fear from robots, as they perform specific programmed tasks (which they do very well, day in, day out – for example, lifting car parts into place on a production line) and are nothing like the humanoid machines envisioned by the sci-fi writers of Asimov’s era.
Artificial intelligence is arguably somewhat different, and other rules apply, because such a machine is expected to interact with people the way another human being does, applying higher thinking and logic to problems, breaking them down, and reaching an answer. Simply because a machine has rationalised a solution to a problem does not mean we are duty-bound to accept it; it could be one of many proposed approaches. The AI should arguably be just another expert offering advice, and in truth expert opinion is frequently rejected. That said, human society deserves legal and programmatic protection to ensure its continued existence before the dawn of the first true AI.
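
One way to read “the AI is just another expert offering advice” in design terms is a human-in-the-loop pattern, sketched below in Python with made-up names: the machine produces a recommendation and a rationale, but nothing happens until a person explicitly accepts or rejects it.

# Minimal human-in-the-loop sketch: the machine recommends, a person decides.
# The Recommendation structure and the review step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    proposal: str
    rationale: str

def machine_recommendation() -> Recommendation:
    # Stand-in for whatever analysis the AI system performs.
    return Recommendation(
        proposal="Settle the dispute out of court",
        rationale="Similar past cases were cheaper to settle than to litigate.",
    )

def human_review(rec: Recommendation, accept: bool) -> str:
    # The expert's advice can be, and frequently is, rejected.
    if accept:
        return f"Approved by a person: {rec.proposal}"
    return f"Rejected by a person; advice noted: {rec.rationale}"

if __name__ == "__main__":
    rec = machine_recommendation()
    print(human_review(rec, accept=False))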

Other Writing by Peter Giblett:

Peter Giblett writes in a number of places, including Wikinut, Blasting News, and has his own writing blog called GobbledeGoox.

