AI and ethics
Posted on 31st May 2018
Have you heard of the Turing Test? No, I hadn’t either, but it came up in conversation recently. Named after the eminent English computer scientist Alan Turing, it asks a simple question: “Can machines think?” and, following from this, “can a machine exhibit intelligent behaviour that is indistinguishable from that of a human being?”
Interestingly, in the last few weeks it appears that the answer to this last question is now seemingly “yes.” Google “Turing Test” and you’ll get a couple of references to articles on the theme “Did Google’s Duplex just pass the Turing Test?” This has been extensively reported: essentially it was a robot/computer making an appointment with a hair stylist, but by all accounts it was stunningly difficult to tell which party was the human and which robotic.
Now that’s pretty cool and certainly a step on from Alexa. A world where I can ask an AI assistant to order my social life would certainly make life a bit easier (although if I had a PA it would not be good news for him or her). However, it only takes a few minutes of (human) thinking to realise the potential pitfalls. The ethical dimension to AI is vast. A recent article in The Times made the valid point that every discipline has had to confront the ethical challenges resulting from scientific advancement. The irony that the inventor of dynamite gave his name to the most prestigious international peace prize should not be lost upon us.
We will see the impact of the GDPR legislation over the coming months, but underpinning it is a desire to give control back to the individual. The general consensus is that data must be controlled by humans, not the other way around. To that end the House of Lords report on AI in the UK makes salutary reading. It proposes five principles for an AI code of ethics:

1. AI should be developed for the common good of humanity.
2. AI should operate on principles of intelligibility and fairness.
3. AI should not be used to diminish the data rights of individuals, families or communities.
4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.
5. The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.
Those last two principles are obviously aimed at (hopefully) preventing technological unemployment and preventing computers from doing us harm. They were foreshadowed by Asimov’s Three Laws of Robotics, but can you honestly say you think they are going to be realised in the years to come?
Spending on cognitive systems is expected to grow 54% in 2018 compared to 2017. Much of this will be in China, a country where, as we have written about in a previous blog, citizens’ behaviour is already being monitored by smart technology. Other estimates suggest that by 2020 (only two years away), AI will have created around 2.3 million new jobs across the world. That sounds great until you read that it may also eliminate 1.8 million jobs at the same time.
Nobel invented dynamite: physics produced the nuclear bombs that swiftly ended the Second World War but now have the potential to destroy our species; human biology produced eugenics; medicine produced thalidomide; and now AI will produce – what? If we get the ethics wrong then it doesn’t matter whether your virtual assistant has got you an appointment at the hairdresser’s…
Nikola Kelly, MD, Be-IT