By Mark Nasila

In early 2020, cybersecurity experts warned that cyberattacks were becoming more sophisticated, with cybercriminals exploiting artificial intelligence to launch increasingly complex attacks, and the healthcare sector's sensitive data becoming a growing target. Then, to complicate things, the Covid-19 pandemic arrived, prompting an exponential increase in remote work.

Remote work, combined with the attractiveness of healthcare data for cybercriminals, resulted in the healthcare sector becoming the most targeted industry in 2020, with 66% of companies experiencing some sort of attack, a 22% increase from the previous year, according to Check Point Research.

Why would remote work worsen the situation? Because the sudden shift to working from home meant companies were often unable to ensure that information security practices for remote workers were as robust as those enforced on site.

South Africa’s Life Healthcare Group, which manages nearly 70 health facilities, was hit by a prolonged and complex cyberattack. Meanwhile, Trend Micro recorded millions of threat detections in Africa from January 2020 to February 2021, including 679 million malicious e-mails, 8.2 million malicious files and 14.3 million malicious websites. According to the Ponemon Institute, a data breach in South Africa costs an average of R36.5-million, with additional costs felt for years afterwards, and the country ranked seventh out of 15 countries polled for the highest costs of data breaches.

AI-powered cyberattacks

For as long as businesses have been online, cyberattacks have posed a threat. But over the past five years, their sophistication and accessibility have increased dramatically. New technologies like AI have made it easier for cybercriminals to scale up their attacks and make them more effective. Consequently, businesses have had to deploy their own AI-powered defence mechanisms to combat these ever-evolving threats.

The rise of AI-powered attacks was inevitable, because new technology is never limited to those with good intentions. In the same way a business can benefit from AI, malicious actors have harnessed it and exploited the parts of it best suited to compromising targets.

So, what might these attacks look like?

Many of the most effective cyberattacks work because they don’t look like cyberattacks at all. For instance, attackers craft materials that mimic a company’s internal e-mails or other documents; when these are aimed at high-level executives or other specific targets, it’s known as “spear phishing”. Meanwhile, AI-powered malware can use machine learning to hunt for cracks in systems without making its presence known. It could, for instance, assess network traffic and then mimic it so that its activities don’t raise any red flags. Similarly, AI systems can be trained to constantly probe systems for weaknesses that humans can then exploit.

Numerous instances of AI-backed attacks have already taken place. In 2018, freelancer platform TaskRabbit saw 3.75 million user accounts compromised and their financial information stolen. The attackers used a coordinated network of compromised “zombie” computers, controlled via AI, to launch a distributed denial-of-service attack on TaskRabbit’s servers.

One of the worst and most infamous cyberattacks took place in mid-2017. The WannaCry ransomware attack affected more than 200 000 computers in 150 countries and spread in a matter of days. Once a computer was infected, its files would be encrypted, and the only way to unlock them was to pay the attackers a ransom for a decryption key.

In 2019, an executive at a UK energy provider was tricked into transferring more than US$240 000 to hackers after they used AI to fake the voice of his superior. And social media is being used to spread disinformation and misinformation via “deepfakes”: videos featuring photorealistic digital likenesses of famous figures saying things they’ve never actually said.

A report from Interpol last year revealed that the top five cyberthreats facing South Africa are online scams, digital extortion, business e-mail compromise, ransomware and botnets. In a recent incident, the department of justice & constitutional development saw its IT systems compromised and encrypted. The effects were significant: courts struggled to operate, and payments to maintenance beneficiaries were delayed.

Fighting AI with AI

Combating these attacks means using AI, too, because that’s the only way to match the complexity, speed and effectiveness of the attacks themselves. But it also requires machine learning. While ML is often conflated with AI, it’s distinct in that it allows systems to learn from data rather than simply being programmed to respond to it. That ability to evolve makes ML capable of inference and prediction.

This allows a form of Darwinism to be built into security systems. “Evolutionary computation” takes the key ideas of biology (inheritance, random variation and selection) and applies them to candidate solutions. Solutions are initialised, selected and allowed to mutate, before being selected again. The weakest are killed off, and the most successful become the basis for future defences.
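
To make that concrete, here is a minimal sketch of the evolutionary loop in Python. Everything in it is an illustrative assumption rather than a real defence: candidate “rules” are just two traffic thresholds, the traffic is simulated, and the fitness weighting is arbitrary.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Simulated traffic samples: (requests per minute, payload size in kB).
# Normal usage is modest; attack traffic is high-volume. All numbers are invented.
NORMAL = [(random.gauss(60, 15), random.gauss(4, 1)) for _ in range(200)]
ATTACKS = [(random.gauss(400, 80), random.gauss(30, 8)) for _ in range(200)]

def fitness(rule):
    """Score a candidate rule: reward blocked attacks, penalise false alarms."""
    req_limit, kb_limit = rule

    def blocks(sample):
        return sample[0] > req_limit or sample[1] > kb_limit

    caught = sum(blocks(s) for s in ATTACKS)
    false_alarms = sum(blocks(s) for s in NORMAL)
    return caught - 2 * false_alarms

def mutate(rule, scale=0.1):
    """Random variation: nudge each threshold by up to +/-10%."""
    return tuple(max(1.0, g * random.uniform(1 - scale, 1 + scale)) for g in rule)

# Initialise a random population of rules, then repeatedly select and mutate.
population = [(random.uniform(50, 500), random.uniform(2, 50)) for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # the weakest are killed off
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(f"best rule: {best[0]:.0f} req/min, {best[1]:.1f} kB (fitness {fitness(best)})")
```

Each generation keeps the ten fittest rules and breeds mutated copies from them, mirroring the initialise, select and mutate cycle described above.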

While AI has many other use cases in business, it’s in security that it’s seeing the most uptake. A TD Ameritrade study found the AI cybersecurity industry is likely to grow at a compound annual rate of 23.3%, from $8-billion in 2019 to a staggering $38-billion by 2026. And an MIT Technology Review survey found that most respondents (96%) have begun to guard against AI attacks, many by enabling AI defence mechanisms.

Examples of AI in cybersecurity

There are various ways leading cybersecurity organisations are incorporating AI capabilities like ML and computer vision into their cyber defences. First, there’s social engineering and spam detection. Deep-learning models can help defend against new forms of attack that are only vaguely defined. Google, for instance, uses them to defend against e-mails that rely on images to trick users, or that are sent from new domains.
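
Google’s production systems are large deep-learning models, which can’t be reproduced here; as a stand-in, the sketch below shows the same idea, classifying message text as phishing or legitimate, using a classical scikit-learn pipeline. The four training e-mails and their labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of invented training messages (1 = phishing/spam, 0 = legitimate).
emails = [
    "Urgent: verify your account password immediately or it will be suspended",
    "Invoice attached, please wire payment today to the new supplier account",
    "Minutes from Tuesday's project meeting are attached for review",
    "Lunch order for the team offsite, reply with your choice by noon",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word unigrams and bigrams, fed to a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Your mailbox is full, confirm your password here to avoid suspension"]
print(model.predict_proba(suspect))  # [probability legitimate, probability phishing]
```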

Then there’s anomaly detection, where machine learning spots deviations from baseline patterns learned from far more examples than a human analyst could review. For example, by continuously monitoring network traffic, an ML model can detect irregular patterns in e-mail sending frequency that might point to a compromised account being used for an outbound attack.
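
A minimal sketch of that idea, assuming scikit-learn’s isolation forest and simulated per-account hourly send counts (the single feature and the contamination rate are assumptions, not recommendations):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated feature: e-mails sent per hour, per account. Most accounts send
# 5-30/hour; a few compromised ones blast out hundreds. Numbers are invented.
normal = rng.normal(loc=15, scale=5, size=(500, 1))
compromised = rng.normal(loc=300, scale=30, size=(3, 1))
hourly_send_counts = np.vstack([normal, compromised])

# The contamination rate (expected share of anomalies) is an assumption.
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(hourly_send_counts)  # -1 = anomaly, 1 = normal

print("accounts flagged for review:", np.where(flags == -1)[0])
```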

ML can also defend against “DNS tunnelling”, where data is smuggled into or out of IT systems inside DNS queries, and can provide advanced malware detection by drawing inferences from previous malware attacks. Further, ML can help reduce alert fatigue: the risk of a security team being overwhelmed by incident alerts, especially false positives. Security teams retain ultimate control but are freed up to focus on higher-level tasks.
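
As a toy illustration of the DNS tunnelling case: tunnelled data is typically encoded into long, high-entropy subdomain labels, so even simple features carry much of the signal. The sketch below uses hand-picked thresholds rather than a trained model; the example domains and cut-offs are invented.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; encoded payloads score high."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunnelling(query: str, max_label_len: int = 30,
                          max_entropy: float = 4.0) -> bool:
    label = query.split(".")[0]  # the leftmost label usually carries the payload
    return len(label) > max_label_len or shannon_entropy(label) > max_entropy

queries = [
    "mail.example.com",
    "a9f3kq0zx7c2m8vt5lw1r6yd4hb0njp3e8su2go7.tunnel.example.net",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunnelling(q) else "ok")
```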

Finally, there’s ML’s ability to identify zero-day exploits. ML can be used to recognise and patch zero-day vulnerabilities, including those introduced by updates or changes to systems. At a more granular level, natural language processing can be used to check source code for errors or malicious content, while “generative adversarial networks” can help pinpoint complex vulnerabilities.
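
As a toy stand-in for the source code scanning idea (a real system would use learned NLP models rather than a fixed list), the sketch below walks Python’s syntax tree looking for a small, assumed blocklist of risky calls:

```python
import ast

RISKY_CALLS = {"eval", "exec", "compile", "system", "popen"}  # illustrative blocklist

def scan_source(source: str) -> list[str]:
    """Flag calls to names on the blocklist, with the line they appear on."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", None)
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name!r}")
    return findings

sample = "import os\nuser_cmd = input()\nos.system(user_cmd)\n"
print(scan_source(sample))  # -> ["line 3: call to 'system'"]
```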

Preparing for AI-powered attacks

AI can be used to enhance threat identification, whether through bot-identification systems that proactively monitor communications and social media channels, or systems that scan for malware attacks. These need to be kept updated so they’re able to contend with the latest developments in attack vectors. It’s also critical to have an integrated approach to cyberthreats to remain abreast of these developments.

This approach is enabled by AI-powered observation and analysis that can deliver consistent, continuous, real-time risk predictions, provide risk-based vulnerability management, and allow for proactive control and recovery when breaches occur. This makes cybersecurity teams more effective, efficient and responsive when problems arise, and better positions them to fend off future attacks, even ones of a sort that hasn’t been tackled before.

Finally, it’s essential to avoid complacency. A business that constantly assumes it’s under attack will always be thinking about ways to thwart would-be attackers.

Dr Mark Nasila is chief analytics officer in FNB’s chief risk office
