
How AI is revolutionising the landscape of cybersecurity

Fact-checked by Amy Reeves

Every year, there are around 800,000 cyberattacks – equating to almost 2,200 attacks per day. In other words, a cyberattack occurs every 39 seconds.

That’s a lot of work for human fraud teams to deal with.

Imagine a football goal that’s 100 metres wide, facing a legion of the world’s best strikers and guarded by a single goalkeeper. You’d expect quite a few goals to go in.

However, artificial intelligence (AI) is already changing the cybersecurity game for businesses and individuals. Picture thousands of robotic goalkeepers, all equipped with an intimate, categorical knowledge of each striker’s strengths, weaknesses and shooting feet – goalkeepers with a perfectly calibrated understanding of every ball’s flight and trajectory and capable of learning after each ball kicked, shot saved and goal scored.

That’s AI in cybersecurity – so it’s no surprise that over half of all business leaders are already harnessing AI to safeguard them from the countless cyber threats out there. But how?

Below, we look at AI’s burgeoning role in the contemporary cybersecurity landscape. We’ll explore:

  • What AI in cybersecurity is and how it works
  • How AI predicts and acts on threats and the role of automation
  • How AI is reshaping the way businesses detect phishing attacks and authenticate their customers
  • How fraudsters can repurpose AI’s benevolent qualities for illegal purposes 
  • The broader challenges and considerations of AI-enabled cybersecurity
What is AI in cybersecurity?

AI in cybersecurity refers to the way in which AI-driven technology (including advanced machine learning algorithms and computational models) can help defend against cyberthreats.

In cybersecurity, AI can comb through huge datasets – ones far bigger than any team of humans, no matter how fast, intelligent or diligent, could handle – to root out suspicious activity online. Backed by AI, cybersecurity systems can detect anomalies in behaviour, respond to incidents in real time and spot malicious activity before it has a chance to wreak financial or reputational havoc on your business or personal life.

For all that AI offers an approach to cybersecurity that’s faster, more accurate and more scalable than humans can manage (not to mention the fact that it doesn’t need to eat, sleep, take coffee breaks or go on holiday), it does share one distinctly human trait: the ability to learn.

That’s because an AI-powered security system learns from every piece of data you feed into it and every threat it faces. Like a weathered, battle-hardened general, AI cyber-defence algorithms have seen it all – making them more adaptable, more capable and better equipped to deal with dangerous new cyber threats as they emerge.

The rising importance of AI in cybersecurity

Valued at $22.4 billion (£18.4 billion) in 2023, the AI in cybersecurity market is booming.

By 2028, experts predict it will have almost tripled to an eye-watering $60.6 billion (£49.7 billion). The data also suggests that while almost all organisations already apply, or want to apply, AI in a cybersecurity context, fewer than a third (28 per cent) do so extensively – suggesting a gap between the need for AI and its actual adoption.

So, why is AI in cybersecurity so important? Well, because of the diverse and ever-expanding litany of cyber threats that businesses and individuals face in 2023. These include:

  • Distributed Denial of Service (DDoS) attacks: When attackers overwhelm a server with a flood of manufactured, illegitimate traffic to disrupt normal service
  • Cross-site scripting attacks: When cybercriminals insert malicious scripts (often JavaScript code) into webpages, which steal data from visitors or redirect them to other, illegitimate locations such as phishing websites
  • Structured Query Language (SQL) injection attacks: When hackers inject malicious SQL code into a database query to gain access to a website’s most sensitive information
  • Password or brute force attacks: When an attacker tries a range of different passwords to guess their way into an online account or website backend
  • Phishing attacks: When hackers trick individuals into handing over confidential information
  • Malware attacks: When cybercriminals use malware – insidious programs such as viruses, worms, trojans, spyware and ransomware – to corrupt a device or network

With an estimated 2,200 cyberattacks occurring every day – and 2.8 billion malware attacks taking place in the first half of 2022 alone – hacking is big business. Human analysts (no matter how skilled or dedicated) struggle to keep pace with fraudsters, particularly when those fraudsters are armed with AI-equipped fraud-perpetrating toolkits.

With that in mind, the most dangerous threats need the best technology, and AI is it. So, let’s explore AI’s top cybersecurity capabilities and how they keep you safe online.

Predictive threat analysis

When the movie Minority Report came out in 2002, it painted the picture of a dystopian future in which crimes could be foreseen and stopped before they even had a chance to take place.

It’s a testament to how far AI has come in the past 21 years that the ability to predict and eliminate threats – before they unfold – is no longer pure sci-fi but a stark reality.

Predictive threat analysis is an application of AI that involves sifting through vast amounts of data to identify the subtle trends, correlations and anomalies within. Once trained to understand the processes and patterns of normal activity, AI algorithms can learn what the precursors of abnormal activity look like and use this knowledge to anticipate it.

Let’s take payment fraud prevention for an ecommerce business as an example. In a machine learning-enabled fraud detection approach, an AI algorithm would crunch the ecommerce business’s entire transaction history, which includes both legitimate transactions and those flagged as suspected or confirmed fraud.

In doing so, the algorithm can learn which items fraudsters target most, which devices they’re targeting the online business from and which countries are overrepresented when it comes to the origins of fraudulent traffic.

Armed with this contextual knowledge of known attack vectors and historical fraud tactics and techniques, the machine learning algorithms can flag any transactions that meet a risk threshold – for example, a $10,000 purchase from a high-risk country – for manual human review. Performing these steps can prevent unauthorised transactions before they go through.
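
For the technically curious, here’s a minimal sketch of how such a risk-threshold check might look in code, using Python and scikit-learn. The transaction features, training examples and threshold are all illustrative assumptions rather than values from any real fraud system:

```python
# A minimal sketch of risk-threshold fraud flagging, assuming historical
# transactions labelled legitimate (0) or fraudulent (1). The feature names
# (amount, country risk, device score) are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [amount, country_risk, device_score]
X_train = np.array([
    [25.0, 0.1, 0.9],    # small purchase, low-risk country, trusted device
    [40.0, 0.2, 0.8],
    [9500.0, 0.9, 0.2],  # large purchase, high-risk country, unknown device
    [12000.0, 0.8, 0.1],
])
y_train = np.array([0, 0, 1, 1])  # 0 = legitimate, 1 = confirmed fraud

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

RISK_THRESHOLD = 0.7  # transactions scoring above this go to manual review

def review_transaction(features):
    """Return the fraud probability and whether to escalate for human review."""
    risk = model.predict_proba([features])[0][1]  # probability of fraud (class 1)
    return risk, risk >= RISK_THRESHOLD

risk, escalate = review_transaction([10000.0, 0.85, 0.15])
print(f"Fraud risk: {risk:.2f} -> {'manual review' if escalate else 'approve'}")
```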

Similarly, an AI algorithm might look at subtle changes in user behaviour, network traffic or system configurations to nip a data breach in the bud or, by monitoring sudden spikes in data access or unusual login times, flag an account that’s been compromised.
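
A common way to catch those deviations is unsupervised anomaly detection, which flags anything that doesn’t resemble a learned baseline of normal behaviour. This sketch uses scikit-learn’s IsolationForest on hypothetical login features (hour of day and megabytes of data accessed):

```python
# Sketch: unsupervised anomaly detection over login behaviour, assuming
# features of [login_hour, megabytes_accessed]. All values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# A user's typical behaviour: daytime logins, modest data access
normal_logins = np.array([[9, 120], [10, 95], [11, 150], [14, 80], [16, 110]])

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

# A 3am login pulling 5 GB looks nothing like the learned baseline
suspicious = np.array([[3, 5000]])
print(detector.predict(suspicious))  # -1 = anomaly, 1 = normal
```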

Predictive threat analysis represents a proactive response to cyber threat detection and prevention. It’s quite different from the more traditional, human-based approaches to fraud prevention, which – despite being effective against known threats – are ultimately reactive.

How predictive threat analysis compares with traditional threat responses:

  • Speed: Predictive threat analysis operates in real time, allowing you to respond immediately to emerging cyber threats; traditional responses rely on manual human analysis and periodic security checks, which often don’t keep pace with the evolving threat landscape
  • Adaptability: Predictive threat analysis adapts to new threat vectors by continually learning from new data; traditional approaches struggle to cope with zero-day attacks and emerging cybercriminal techniques
  • Accuracy: Predictive threat analysis draws on large data sets, which helps it minimise false positives (where benign activities are mistaken for threats); traditional approaches lack the analytical breadth and depth of AI-enabled solutions and tend to raise more false alarms

Automated response and actions

No matter how good your cybersecurity setup is, attacks are unavoidable. This means you need to be able to detect and prevent threats before they arise – or deal with them quickly and efficiently when they do.

Here, time is of the essence. Unfortunately, a timely response to fraud isn’t something most modern businesses have the best record with.

In 2022, it took organisations an average of three days after a cyberattack to discover it had even happened. Recent data from 2023 is even more scathing, with IBM suggesting the average time to identify a breach, depending on how it was identified, was:

  • 233 days (when disclosed by the attacker)
  • 203 days (when reported by a benign third party)
  • 182 days (when discovered by the organisation’s teams and tools)

That’s if it was even discovered. According to IBM, only a third (33 per cent) of data breaches were uncovered by the surveyed organisations’ internal security tools and teams.

Fortunately, detection speed is something AI can – and will continue to – improve. After detecting malicious activity in real time (not days later), AI algorithms can trigger immediate threat responses.

These automated response systems are AI’s way of making split-second decisions – much as a human under fire would – to mitigate the threat. And unlike flesh-and-blood analysts, AI doesn’t tire or lose focus, removing a major source of human error from those decisions.

AI cybersecurity tools can minimise fraud’s disruption to the rest of the network by taking targeted, almost surgical action. This could include blocking a suspicious IP address, quarantining a compromised device from the rest of the network or disabling a user account to neutralise the threat while allowing normal operations to continue uninterrupted.
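
In practice, this kind of targeted response is often expressed as a “playbook” that maps each alert type to a single, surgical action. The sketch below is a simplified illustration in Python; the alert format and response functions are invented stand-ins, not any vendor’s actual API:

```python
# Sketch of an automated, targeted response playbook. The alert format and
# the response functions are illustrative stand-ins, not a real vendor API.

def block_ip(ip):
    print(f"Firewall rule added: drop traffic from {ip}")

def quarantine_device(device_id):
    print(f"Device {device_id} moved to an isolated network segment")

def disable_account(user):
    print(f"Account {user} disabled pending investigation")

# Map detected threat types to surgical, least-disruptive actions
PLAYBOOK = {
    "malicious_ip": lambda alert: block_ip(alert["source_ip"]),
    "compromised_device": lambda alert: quarantine_device(alert["device_id"]),
    "account_takeover": lambda alert: disable_account(alert["user"]),
}

def respond(alert):
    action = PLAYBOOK.get(alert["type"])
    if action:
        action(alert)  # act immediately, touching only the affected asset
    else:
        print(f"Unknown alert type {alert['type']!r}: escalating to a human analyst")

respond({"type": "compromised_device", "device_id": "laptop-042"})
```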

Notable uses for AI in cybersecurity

Failing to detect and neutralise cyber threats comes at a big cost for businesses.

Financially, the global average cost of a data breach is an enormous $4.45 million. Reputationally, data breaches represent bitter black marks against a business’s brand power, trust, credibility and, ultimately, its bottom line. Ask Yahoo!, which suffered major data breaches in 2013 and 2014 – the former compromising around three billion accounts – prompting a litany of lawsuits and brand damage the company never fully recovered from.

What can be done? Below, we look at two of AI’s critical applications in cybersecurity – phishing detection and secure user authentication – to find out.

Since phishing is implicated in 16 per cent of data breaches, according to IBM, we’ll start there.

Phishing detection

Phishing is a form of cyberattack in which a fraudster tricks a person into giving up sensitive information – often posing as a legitimate entity, such as a bank or company.

Phishers “spoof” these businesses to send text messages and emails to their targets, creating a false sense of urgency or fear by telling them that their information has been compromised and their access to their online bank or social media accounts is at risk. The phishing scheme’s “bait” could also be a package that couldn’t be delivered and will be lost permanently if the user doesn’t log in and pay a customs release fee.

Some phishers go a step further and call their victims – a form of social engineering known as voice phishing, or “vishing” – using a complex array of psychological techniques to manipulate and pressure them into handing over their most sensitive data. That could be credit or debit card details, personal information or the usernames and passwords to their online accounts.

2023 data from the Home Office found that phishing was the most reported cybercrime in the UK, identified by 79 per cent of businesses and 83 per cent of charities that experienced an attack in the last year. The latest phishing statistics also indicate that there were 4.7 million phishing attacks in 2022 alone, so it’s a threat that all individuals and businesses need to remain aware of.

Fortunately, phishing is also a threat that AI-powered algorithms are already rising to meet through a branch of AI called natural language processing (NLP).

NLP focuses on the interaction between humans and computers through natural language. The goal? To read, interpret and make sense of human language in a way that creates value – like phishing detection.

AI-powered NLP algorithms can dissect the written content of emails and discern the linguistic patterns and context within. Suspicious requests? Grammatical inconsistencies? Spelling errors? Urgent, hyperbolic or excessively persuasive language? NLP algorithms comb through them and automatically filter out any emails bearing these telltale signs of phishing before they can get anywhere near your cursor.
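
Under the hood, such a filter is often a text classifier. Here’s a minimal sketch using a TF-IDF bag-of-words model with logistic regression in Python; the handful of training emails are invented for illustration, whereas a production filter would learn from millions of labelled messages:

```python
# Sketch: a text classifier for phishing detection using TF-IDF features.
# The training emails below are invented examples; a real filter would be
# trained on a large labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "URGENT: your account will be suspended, verify your password now",
    "Your parcel is held at customs, pay the release fee immediately",
    "Agenda attached for Thursday's project meeting",
    "Thanks for your order, your receipt is attached",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(emails, labels)

incoming = ["Act now! Unusual login detected, confirm your bank details"]
print(classifier.predict_proba(incoming)[0][1])  # probability the email is phishing
```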

AI algorithms are also adept at picking up other clues from potential phishing emails by scanning attachments for malware signatures and scrutinising the destination of any embedded links. But it’s not only the words, documents or other elements of an email AI looks at – it’s the underlying patterns of an email account holder and their contacts.

By tracking sender behaviour over time, AI algorithms can stay alert to any sudden changes, such as a trusted contact sending an unusual attachment. Given that the most effective phishing attempts occur when the fraudster imitates one of the victim’s known contacts, this level of AI-powered functionality is fundamental.

Once AI cybersecurity tools identify a phishing email, they swiftly quarantine it before initiating a series of automated responses, including warning the target, disabling the malicious link or – in an organisational context – informing the IT team for further investigation.

Secure user authentication

Every day, we authenticate our identity in some way.

Whether it’s entering our password to log in to our email accounts or using facial recognition to verify a smartphone payment, user authentication processes are vital.

More traditional methods, such as passwords and PINs, are becoming increasingly vulnerable to hackers. Passwords are a common target of brute force attacks, where a hacker tries a range of different passwords over and over until they eventually guess correctly.

Enter AI, which is already changing the way we verify our identities in 2023. In fact, you’re probably already benefiting from AI-driven authentication.

A handful of the ways AI is shaping the future of user authentication include:

  • Facial recognition: AI-based systems analyse your unique facial features, mapping them with thousands of dots via infrared technology. With machine learning, these systems can identify and store even the most minute of facial details, making it an extremely reliable authentication method for everything from smartphones to airport security
  • Fingerprint recognition: Despite being associated with older smartphones, AI-powered fingerprint technology is becoming more intelligent. Sensitive to even the most subtle of fingerprint patterns, these systems ensure a high degree of accuracy
  • Voice recognition: AI can analyse your unique vocal patterns, including pitch, tone and volume, to create a “voice print” specific to you
  • Behavioural biometrics: By analysing user behaviour patterns, such as mouse movements, typing speed and interaction with devices, AI can build a profile of a user. Then, when deviations from these established patterns occur, the algorithms can quickly flag them for further investigation
  • Contextual authentication: AI-powered cybersecurity systems look at contextual information when authenticating a user. For example, if a user typically logs in from Wolverhampton but suddenly tries to access their account from Goa, the system will require extra verification steps. So, even if your login credentials are compromised, AI systems will only log you in when the context aligns with your typical behaviour
  • Risk-based authentication: By compiling (and continually analysing) data, machine learning algorithms can create risk scores – assigning a value to an online action based on its perceived danger. Once a transaction or interaction’s risk score surpasses a certain threshold, you’ll automatically be required to provide more information (a simple sketch of this idea follows this list)
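
To make the risk-scoring idea concrete, here’s a simple Python sketch that combines contextual signals into a single score with a “step-up” threshold. The signals, weights and threshold are illustrative assumptions only:

```python
# Sketch: risk-based authentication. The signals, weights and step-up
# threshold are illustrative assumptions, not values from a real system.

KNOWN_LOCATIONS = {"Wolverhampton"}
KNOWN_DEVICES = {"rob-laptop"}

def risk_score(login):
    """Combine contextual signals into a rough 0-1 risk score."""
    score = 0.0
    if login["location"] not in KNOWN_LOCATIONS:
        score += 0.5   # unfamiliar location: the strongest signal here
    if login["device"] not in KNOWN_DEVICES:
        score += 0.3   # unknown device
    if not 7 <= login["hour"] <= 23:
        score += 0.2   # login at an unusual time of day
    return score

STEP_UP_THRESHOLD = 0.5  # above this, demand extra verification

login = {"location": "Goa", "device": "unknown-phone", "hour": 3}
score = risk_score(login)
if score >= STEP_UP_THRESHOLD:
    print(f"Risk {score:.1f}: request a one-time code before granting access")
else:
    print(f"Risk {score:.1f}: allow login")
```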

Challenges and considerations

Like any transformative technology, AI – for all its striking, exciting potential – comes with risks and factors that any business or individual using it for cybersecurity must consider.

Reliance on data

As shown, the key strength of AI in cybersecurity is its ability to learn incrementally by analysing big data. However, that continual reliance on extensive, high-quality datasets is also AI’s Achilles’ heel.

This means that your AI-driven approach to cybersecurity will only be as effective as the data you’re able to feed it. Your datasets not only need to be relevant and high quality but also diverse to cover various attack scenarios and patterns and give your algorithms the best chance of preventing, detecting and flagging threats while minimising false declines.

Your datasets also need to be accurate. Plugging incorrect or incomplete datasets into AI cybersecurity models will result in flawed predictions and skewed outcomes.

Similarly, you’ll need to ensure the training data is as free of bias as possible. Bias is an inherently human disposition. Despite algorithms obviously not being human, they can, and do, inherit our biases and historical and social inequities. When these taint a dataset, they will be reflected in the AI’s filtering and decision-making processes.

How might AI bias look in a cybersecurity context? Well, an AI cybersecurity tool trained by US programmers, with algorithms created by Americans and fed with US-centric datasets, will most likely be set up with a focus on the US’s biggest rivals: China and Russia, for example, or other states with hostilities towards the US.

However, the data suggests that while China is responsible for the most cyberattacks (18.8 per cent of the global total), the US runs a very close second at 17 per cent. This could lead the American algorithm, preoccupied as it is with external, international threats, to overlook the domestic dangers lurking within its borders.

Evolution of cyber threats

Another challenge of AI? Like many of the real (and fictional) world’s most powerful forces, it can be harnessed for both good and evil.

Just as AI cybersecurity tools are evolving, so are AI-propelled threat vectors, leading to ever-increasing, ever-evolving methods of cyberattack and manipulation. These include:

  • Deepfakes: AI-generated images, video or audio, which simulate individuals doing or saying things they didn’t, can enable social engineering attacks. A cybercriminal could create content impersonating a high-profile figure within an organisation and use it to manipulate its employees into revealing sensitive corporate information
  • Password cracking: Remember the brute force attacks we discussed earlier, where hackers try a vast range of different password combinations to gain unauthorised entry into a user account? Well, this process can be automated with AI. Even worse, machine learning algorithms can learn from breached data by using patterns from successfully cracked passwords to guess new ones more effectively and at scale
  • Data poisoning: Cyberattackers go straight to the source – the data. When a cybercriminal poisons data, they manipulate the datasets feeding an AI algorithm, leading to skewed or incorrect results. A poisoned dataset could, in the case of cybersecurity AI, cause algorithms to misclassify malicious activities, creating fatigue-inducing false positives while allowing actual threats to fly under the radar (a toy demonstration follows this list)
  • Model inversion attacks: This is when attackers use AI to reverse engineer machine learning models by extracting sensitive information from innocuous outputs
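
To see why data poisoning matters, consider this toy demonstration: flipping a fraction of a classifier’s training labels typically degrades its accuracy on clean test data. Everything here is synthetic:

```python
# Toy demonstration of data poisoning: flipping a fraction of training labels
# typically degrades a classifier's accuracy. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
X_train, y_train = X[:800], y[:800].copy()
X_test, y_test = X[800:], y[800:]

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Poison 20 per cent of the training labels by flipping them
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=160, replace=False)
y_train[idx] = 1 - y_train[idx]

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```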

By reappropriating the benevolent qualities of AI for malicious means, cybercriminals can take advantage of one of AI’s greatest draws – automation.

Some ways that hackers can repurpose AI’s automating qualities to do harm include:

  • Automated phishing attacks: Skilled cybercriminals can feed public and social media data into algorithms and use these to craft personalised and more convincing phishing messages. AI can also be used to analyse previous successful phishing messages to understand what works and create more realistic phishing communications based on historical data
  • Automated social engineering attacks: AI-powered chatbots are already a feature of many online stores, and when used for good, they can assist customers with information, point them to the correct part of a website or connect them to a live support agent. However, when used for bad, chatbots can trick users into divulging confidential information or clicking on malicious links
  • Automated DDoS attacks: AI can automate and optimise DDoS attacks, increasing their effectiveness and making them harder to mitigate

In summary: Implementing AI in cybersecurity

To say AI might change the cybersecurity landscape is off the mark. To say AI is already transforming cybersecurity is more accurate. However, to say that AI has already drastically and irrevocably changed cybersecurity – and will continue to do so for lifetimes and generations to come – is the statement closest to the truth.

This means the question for you isn’t whether you should integrate AI-led approaches into your private or professional cybersecurity setup but when. And that “when” is now.

But how? Some of the strategies you can use to get started include:

  • Learning more about AI-fuelled cyber threats and attacks and educating yourself (and, if applicable, your staff) on how to combat them
  • Implementing AI-based security solutions, such as AI-enhanced firewalls, antivirus, phishing detection software and AI-driven behavioural analysis tools
  • Utilising AI-powered biometric authentication methods – including fingerprint, voice and facial recognition – in combination with PINs or passwords
  • Implementing AI to prioritise and automate software updates and patch vulnerabilities
  • Collaborating with reputable AI security consultants and vendors to tailor a cybersecurity solution to your specific needs and ensure it’s configured to the unique threats you or your business are most likely to come up against

Want to learn more about cybersecurity’s state of play?

Explore our top 10 ways to ensure your anonymity and privacy on the internet, and find out how the best Virtual Private Networks can form part of a package of solutions that can help protect you from online threats.

Frequently asked questions about AI in cybersecurity

How does AI improve the speed of threat detection?

AI boosts the speed of cyber threat detection by doing the work of hundreds or thousands of human analysts – without tiring or taking time off. AI algorithms comb vast data sets, spot patterns and flag anomalies in real time. AI not only identifies threats but predicts them and, provided you keep it fed with recent, relevant data, it will only become smarter with experience.

How does AI improve the accuracy of fraud detection?

Because AI cybersecurity tools operate with such an impressive degree of accuracy, they all but eliminate human error – reducing the chance of legitimate transactions being flagged as fraud and blocked, and speeding up the time-intensive process of untangling those mistakes when they do happen. AI is also capable of automating responses to fraud, enabling fraud teams to act faster than if they were relying on manual threat detection processes alone.

Are there risks associated with AI in cybersecurity?

Yes, absolutely – AI in cybersecurity certainly comes with risks.

Adversarial attacks can trick AI systems with inputs specially crafted to make them misbehave, leading to security vulnerabilities. Other factors to consider include biases that AI systems can inherit from their training data and the lack of transparency into the exact makeup of the algorithms.

Add to this the privacy issues that AI’s greedy data requirements raise – as well as the fact that an over-reliance on artificial intelligence might come at the expense of the human kind – and it’s clear AI poses challenges. Vigilance, constant monitoring and a blend of human insight with the artificial will all be key to maximising AI’s value and mitigating its more worrying implications.

How is AI improving user authentication?

By enabling and drawing insights from facial, voice and fingerprint recognition technology, AI is making user authentication smarter and safer.

One facet of AI-enabled authentication, behavioural biometrics, flags a user’s mouse movements and typing speed to learn patterns. Another school of AI, called contextual authentication, focuses on building up a bank of information about how and where a user typically logs in. When suspicious conditions are met – for instance, a user logging in from an uncommon location or unknown device – the AI algorithms will assign it a high-risk score, indicating potential fraud. From here, further authentication will usually be requested.

Will AI replace human cybersecurity roles?

While AI is a powerful ally in the bid to keep fraudsters and hackers at bay, it won’t replace human roles – at least, not entirely.

That’s because humans offer, well, humanity. We offer creativity, critical thinking and ethical judgement – all aspects unique to the human condition that machines are unable to bring to their work. When it comes to solving complex problems, developing strategies and understanding the broader context of security issues, human expertise will remain indispensable in the cybersecurity space. We should view AI not as our replacement but as a tool enabling us to do our jobs faster and more effectively.

Rob Binns

Writer

Rob is an experienced writer and editor who has covered a wide range of topics, including renewable energy and appliances, home security and business software. He has written for Eco Experts, Home Business, Expert Market, Payments Journal and Yahoo! Finance.

Rob has a passion for smart home technology, online privacy, the environment and renewables, which led him to the Independent Advisor, where he writes about related topics including cyber security, VPNs and solar power.

Amy Reeves

Editor

Amy is a seasoned writer and editor with a special interest in home design, sustainable technology and green building methods.

She has interviewed hundreds of self-builders, extenders and renovators about their journeys towards individual, well-considered homes, as well as architects and industry experts during her five years working as Assistant Editor at Homebuilding & Renovating, part of Future plc.

Amy’s work covers topics ranging from home, interior and garden design to DIY step-by-steps, planning permission and build costs, and has been published in Period Living, Real Homes, 25 Beautiful Homes and Homes and Gardens.

Now an Editor at the Independent Advisor, Amy manages homes-related content for the site, including solar panels, combi boilers, and windows.

Her passion for saving tired and inefficient homes also extends to her own life; Amy completed a renovation of a mid-century house in 2022 and is about to embark on an energy-efficient overhaul of an 1800s cottage in Somerset.