Technology turns bad – cyber, AI and the future of war

Posted on 16th October 2019

A few weeks ago, the Security Editor of Computer Weekly published an article on the subject of AI being used for malevolent purposes, specifically cyber attacks and warfare.

Within a few days of that article being published, the attack on the Saudi oil facilities crippled, for a short period, that country’s principal industry, leading to an immediate increase in the price of oil that will be felt across the world.

Leaving to one side the rational, indeed normal, human instinct that makes us question how anyone could want to destroy life and property and cause economic and political damage to other countries (because some humans are evil may be the obvious, if not very PC, answer), the well-documented moves to replace human soldiers with robots and to allow war machines to develop through AI are, in many respects, worrying, if only because a robot has no human feelings, nor any empathy with sentient beings.  Forget the scene in which James Bond keeps the baddies talking while he finds a means of escape: he’d be dead in five seconds and the films would come to an end.

The Computer Weekly article suggests that the Cybermen (or indeed Cyberwomen in these enlightened times – although personally I don’t really care what gender of thing is trying to kill me), “operating autonomously from human oversight”, are “basically inevitable”.  Trend Micro’s vice-president of security research, Rik Ferguson, says that “it would be foolish to think that malicious actors are not also using it (AI/ML) to exploit illicitly acquired personal or corporate data”, while Robert McArdle, director of Trend Micro’s Forward-Looking Threat Research team, is quoted as saying that responding to these new attacks will be a challenge, because cyber security and law enforcement agencies are both handicapped to some extent by having to “move at the speed of the law”: “if you want to see an exercise in how the as-a-service model works, look to cyber criminals.  They’ve got trust models, escrow, everything.  And they move faster than the defenders.  The defenders are handcuffed and the people who should be wearing the handcuffs have no problem.”

Finally, to return to the strike on Saudi Arabia’s oil facilities, the way in which it was planned and conducted is light years away from how this kind of action was carried out back in the Second World War.  Detailed reconnaissance, followed by analysis of the data produced, led to an attack that was, in its own terms, risk-free for the perpetrators.  Yes, the worst-case response could have been for the US and Saudi Arabia to declare war on Iran, but those behind the attack calculated that the politics were unlikely to swing behind such a radical step – and they have been proved correct.  But if you really want to depress yourself, read this article, published 100 years after the end of the First World War, which explains how warfare is changing, including a description of AI being applied to make military drones work both individually and collectively in a swarm, exhibiting what the US Department of Defense calls “advanced swarm behaviours such as collective decision-making, adaptive formation flying, and self-healing.”  These things are indeed “basically inevitable” – because we, both the good guys and the bad guys, are already well down the road to making them work.

Freddie Kydd, Be-IT

Posted in News, Opinion
