Why Artificial Intelligence is the answer to the greatest threat of 2017: cyber-hacking

Protecting ourselves raises an interesting dilemma. What level of monitoring and activity reporting are you prepared to put up with to enable more accurate or earlier collaborative identification of malice? 

John Clark
Monday 09 January 2017 11:00 GMT
Cyber hacking has caused problems for various companies and customers (Jay Wennington / Unsplash)

Our lives are now heavily mediated by digital technology: music streaming, social media, e-banking and so on. We are increasingly, and often continuously, online, open to engagement with a myriad of services and simultaneously open to cyberattack.

2016 saw further high-profile, financially driven security incidents, such as those affecting Tesco Bank and TalkTalk, together with one of the highest-profile attacks ever – the apparent compromise of the Democratic Party's information systems, with potential influence on the US presidential election. We now need to defend against the lone-wolf hacker, organised crime and terrorism, and nation states with well-funded, advanced capabilities.

The 2016 cyber message is clear – we have a big problem, it’s going to get worse, and we need help.

Artificial Intelligence (AI) is a promising source of such help. It comprises the theory and techniques that enable intelligent processing of information. It underpins much of modern robotics and other smart systems (driverless cars, for example), but its dominant application area today is the analysis of large and complex data repositories, usually referred to as Big Data analytics.

The intelligence of AI is often interpreted as mirroring human capabilities, but the scale of data potentially relevant for security purposes typically places analysis well beyond human capabilities. Internet traffic, for example, is predicted by networking giant Cisco to reach several zettabytes (a zettabyte is a billion trillion bytes) by 2019. AI is needed to make sense of data at (and well below) these scales, and cyber defence has little option but to make significant use of it.

An ATM transaction in Sydney at 10am followed by one in London 15 minutes later, or a rapid series of contactless payments at previously unvisited outlets, might legitimately arouse a human security expert's suspicion. Big Data analytics now provides an array of AI techniques that characterise such abuse with increasing sophistication. Like humans, they typically judge suspiciousness based on what they have seen before. Like humans, they can confuse what they see as "odd" with criminal or malicious behaviour, and when you have millions of customers there is plenty of scope for odd behaviour. Dealing with this will remain a major challenge, since blocking activity erroneously causes irritation all round and eats up human resources to resolve matters.
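A toy sketch of the first of those rules, the "impossible travel" check, might look like the following. The data structures and the 900km/h speed threshold are illustrative assumptions, not details from any real fraud system; in practice, as noted above, such characterisations are increasingly learned from historical data rather than hand-coded.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Transaction:
    timestamp: float  # seconds since epoch
    lat: float        # latitude in degrees
    lon: float        # longitude in degrees

def haversine_km(a: Transaction, b: Transaction) -> float:
    """Great-circle distance in km between two transaction locations."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km

MAX_PLAUSIBLE_SPEED_KMH = 900  # roughly airliner speed; an assumed threshold

def impossible_travel(prev: Transaction, curr: Transaction) -> bool:
    """Flag consecutive card transactions the holder could not plausibly make."""
    hours = (curr.timestamp - prev.timestamp) / 3600
    if hours <= 0:
        return True  # out-of-order or duplicate timestamps are themselves suspect
    return haversine_km(prev, curr) / hours > MAX_PLAUSIBLE_SPEED_KMH

# The Sydney/London example from the article: roughly 17,000 km in 15 minutes.
sydney = Transaction(timestamp=0.0, lat=-33.87, lon=151.21)
london = Transaction(timestamp=15 * 60.0, lat=51.51, lon=-0.13)
print(impossible_travel(sydney, london))  # True
```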

Cyber attacks will continue to increase in sophistication. For example, so-called metamorphic viruses change their form as they spread. Traditional malware detection software works by searching for specific and recognisable elements of code (digital PhotoFits, if you like), but if malware constantly redesigns itself as it spreads, this simply doesn't work. In such cases detection must rely on what the malware actually does rather than what it looks like, and AI will be brought to bear to characterise that behaviour rapidly.
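To make the contrast concrete, here is a deliberately simplified sketch of signature-based versus behaviour-based detection. The byte patterns and behaviour labels are invented for illustration; real products are far more sophisticated, and the behavioural decision rule would itself be learned by AI rather than hard-coded.

```python
# Known-bad code fragments (the "digital PhotoFits"); invented values.
SIGNATURES = [b"\xde\xad\xbe\xef", b"EVIL_PAYLOAD"]

def signature_detect(binary: bytes) -> bool:
    """Traditional detection: scan for recognisable fragments of known malware.
    A metamorphic virus that rewrites its own code as it spreads defeats this."""
    return any(sig in binary for sig in SIGNATURES)

# Actions observed while running a sample in an instrumented sandbox.
SUSPICIOUS_ACTIONS = {"rewrites_own_code", "disables_security_tools", "mass_encrypts_files"}

def behaviour_detect(observed_actions: set) -> bool:
    """Behavioural detection: judge the sample by what it does, not what it
    looks like. Here, two or more suspicious actions trigger an alert."""
    return len(observed_actions & SUSPICIOUS_ACTIONS) >= 2

# A rewritten variant evades the signature scan but not the behavioural check.
variant = b"\x90\x90 freshly mutated bytes \x90\x90"
print(signature_detect(variant))                        # False
print(behaviour_detect({"rewrites_own_code",
                        "disables_security_tools"}))    # True
```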

AI will also help us to track down who is responsible for attacks: automated investigative algorithms will identify what further information is needed to draw conclusions and then ask for it, following their AI-enhanced noses and making best use of limited resources.
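As a loose illustration of that idea, the following hypothetical sketch greedily requests whichever piece of evidence would eliminate the most remaining suspects, within a fixed query budget. The suspect names and evidence sources are invented for the example; real attribution is far messier.

```python
def investigate(suspects: set, queries: dict, budget: int) -> set:
    """Greedy investigation loop: repeatedly ask for the evidence expected to
    exonerate the most remaining suspects, stopping when the budget is spent
    or a single suspect remains. `queries` maps an evidence source to the set
    of suspects that evidence would rule out."""
    remaining = set(suspects)
    pending = dict(queries)  # copy so the caller's dict is not mutated
    for _ in range(budget):
        if len(remaining) <= 1 or not pending:
            break
        # Identify the most informative request, then "ask for it".
        best = max(pending, key=lambda q: len(pending[q] & remaining))
        remaining -= pending.pop(best)
    return remaining

suspects = {"crime group", "lone wolf", "nation state"}
queries = {
    "trace command-and-control servers": {"lone wolf"},
    "compare toolkit fingerprints": {"crime group", "lone wolf"},
}
print(investigate(suspects, queries, budget=2))  # {'nation state'}
```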

Protecting ourselves raises an interesting dilemma. What level of monitoring and activity reporting are you prepared to put up with to enable more accurate or earlier collaborative identification of malice? The privacy-versus-security issue is not new (witness the furious debate sparked by Edward Snowden's disclosures), but it does not apply only to state monitoring. We will see increasing efforts to square the circle here, providing more effective security whilst supporting privacy.

We will see AI emerge as a major and powerful tool both in the detection and investigation of malice and in the construction of systems resilient to attack. But what's sauce for the goose is sauce for the gander: cyber-hackers can use AI too, and so the cyber arms race will continue. We will need to deal with that as well.

Prof John Clark is a computer scientist and the recently appointed Chair of Computer and Information Systems at the University of Sheffield
