InfoGuard Cyber Security and Cyber Defence Blog

Fighting cyberattacks – AI both as a shield and a weapon

Written by Jolanda Muff | 23 Jul 2021

Cyber attacks are becoming ever more sophisticated. The latest waves of attacks are outpacing human defenders, and cyber criminals are increasingly relying on artificial intelligence (AI). AI is being used both as an offensive weapon and as a means of defending against cyber attacks. This is a headache for cyber security experts, but it also offers them an opportunity to strengthen their own defences and to identify attackers more effectively. In this blog post, you can read about the threats and risks your company is exposed to, and how artificial intelligence can help you detect and defend against them.

AI in Cyber Security

Cyber security tools based on artificial intelligence (AI) have emerged to deal with the huge flood of alerts and to detect attacks within it. These tools help security teams reduce the risk of security breaches and make their security response more efficient and effective.

AI is the term applied to technology that can understand, learn and act on the basis of acquired and deduced information. The more data AI systems analyse, the more they learn from experience and the more powerful and autonomous they become (machine learning). Over time, the technology uses past insights to identify new kinds of attack. Drawing on behavioural histories, AI can profile users, assets and networks, and then detect and respond to deviations from the established norms.
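
To make this idea of profiling and deviation detection more tangible, here is a minimal sketch using scikit-learn's IsolationForest. The features (login hour, upload volume, hosts contacted) and the simulated data are purely illustrative assumptions, not a description of any particular product:

```python
# Minimal sketch of behaviour-based anomaly detection, not a production system.
# Assumption: each row summarises one user session with hand-picked features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behaviour history: office-hours logins, modest uploads.
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # login hour (roughly 08:00-12:00)
    rng.normal(50, 15, 500),  # MB uploaded
    rng.normal(5, 2, 500),    # distinct hosts contacted
])

# Learn a profile of "normal" from the behavioural history.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# New sessions: one typical, one that deviates strongly from the norm.
new_sessions = np.array([
    [9, 55, 4],     # ordinary working-hours session
    [3, 900, 60],   # 03:00 login, huge upload, many hosts contacted
])
print(model.predict(new_sessions))  # 1 = fits the profile, -1 = anomaly
```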

Machine learning (ML) and deep learning are already part of current AI technology. They can quickly analyse millions of events and identify many types of threats, from malware that exploits zero-day vulnerabilities to high-risk behaviour that could lead to a phishing attack or the download of malicious code.

AI acting both as an assistant and a protective shield

AI systems are particularly adept at recognising and comparing patterns, rapidly filtering large amounts of data and extracting the key points. Using this pattern recognition, it is possible to easily and, above all, quickly uncover hidden channels that are used to siphon off data. We have compiled a few examples for you:

  • Recognising and reacting to cyber attacks: AI assists security teams with the often very time-consuming process of detecting, investigating and combating security incidents. Triggered alarms are automatically assessed and prioritised, which minimises false alarms. In addition, AI-based Network and Endpoint Detection & Response (NDR/EDR) systems help teams investigate and combat threats by making complex corporate networks easier to understand and by delivering alerts together with contextual information. The next stage is Extended Detection and Response (XDR), which abstracts, correlates and consolidates information from the various data sources to create an illuminating global picture of an IT infrastructure's threat landscape.

  • Identifying spam mail: Conventional filtering methods that identify and classify spam e-mails using statistical models, blacklists or database solutions have reached their limits. AI solutions can help recognise and learn the complex patterns and structures of spam e-mails (see the sketch after this list).

  • Malware recognition: Conventional malware detection is mostly based on checking file and programme signatures. When a new form of malware appears, AI compares it with previous forms in its database and decides whether it should be blocked automatically. In future, AI could develop to the point where it detects ransomware, for example, before any data is encrypted.
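
As a simple illustration of the spam example above, here is a minimal sketch of a learned spam classifier using scikit-learn. The tiny hand-written training set is an illustrative assumption; a real filter would be trained on a large labelled corpus:

```python
# Minimal sketch of ML-based spam classification, in contrast to static blacklists.
# Assumption: a toy labelled set of e-mail texts; real systems train on far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice for the last consulting engagement is attached",
    "Meeting moved to 14:00, see updated agenda",
    "You have WON a free prize, click here to claim now!!!",
    "Urgent: verify your account password at this link immediately",
]
labels = ["ham", "ham", "spam", "spam"]

# The pipeline learns word patterns instead of matching fixed blacklist entries.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(emails, labels)

print(classifier.predict(["Claim your free prize now, click the link"]))
print(classifier.predict(["Agenda for tomorrow's project meeting attached"]))
```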

What will AI look like in the future? Artificial intelligence could learn which programmes malicious code opens, which files it overwrites or deletes, which data it uploads or downloads, and so on. AI algorithms could then proactively look for such traces on users' computers. They may also soon be able to establish attackers' identities: programmers leave individual footprints in their code, for instance in the style of the comments they append to their programme lines. Learning algorithms can extract these traces and thus attribute the code to an author.
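
As a rough illustration of how such authorship traces could be learned, here is a minimal stylometry sketch using scikit-learn. The toy snippets, the author labels and the choice of character n-grams are illustrative assumptions only; serious attribution work needs large, clean corpora and much richer features:

```python
# Minimal stylometry sketch: attributing code snippets to authors by style.
# Assumption: toy snippets and labels; comment style, spacing and naming habits
# are captured crudely via character n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "// quick hack, fix later\nint n=0;for(int i=0;i<k;i++){n+=v[i];}",
    "//TODO cleanup!!!\nint total=0;for(int i=0;i<k;++i) total += v[i];",
    "/* Sum the first k elements of v. */\nint total = 0;\nfor (int i = 0; i < k; i++) {\n    total += v[i];\n}",
    "/* Compute the running total. */\nint sum = 0;\nfor (int i = 0; i < k; i++) {\n    sum += values[i];\n}",
]
authors = ["alice", "alice", "bob", "bob"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(snippets, authors)

# Predict the likely author of an unseen snippet based on its stylistic traces.
print(model.predict(["// temp fix!!!\nint s=0;for(int i=0;i<k;i++){s+=a[i];}"]))
```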

AI as a tool for cyber-attacks

Conversely, AI has also enabled cyber criminals to disguise their attacks and make them even more professional. With penetration techniques, behavioural analysis and behavioural mimicry, AI can carry out attacks much faster, more efficiently and in a more coordinated way, on thousands of targets at the same time.

Cyber criminals use AI that automatically scans victims' IT interfaces for vulnerabilities. When it finds one, such a system can discern whether exploiting the vulnerability could paralyse the system or serve as a gateway for malicious code. On the dark net, hackers are already offering AI-based systems as AI-as-a-Service: ready-made solutions for criminal hackers who know little about artificial intelligence.

Most frequently, cyber criminals use AI in conjunction with phishing or malware-infected e-mails. With AI, the recipients' user behaviour can be imitated even more closely, and the texts are of such high quality that it becomes difficult to distinguish the fraudulent messages from genuine e-mails. AI systems can also extract information available online in a more targeted manner in order to tailor websites, links or e-mails to an attack target; phishing e-mails can even be adapted to reflect the supposed sender's writing style. At the same time, AI-based attacks learn from past mistakes and successes, improving their tactics with every attack they launch.

When AI combats AI

Offensive AI cyber attacks are fearsome, and the technology behind them is fast and intelligent. Examples of offensive AI include malware creation, password guessing, fake social media profiles and media manipulation. Think of deepfakes, a type of weaponised AI that produces fake images or videos depicting scenes or people that never existed. Voice deepfakes have also been used: attacks in which the voice of a real person is convincingly imitated. Given this danger, it is advisable to put in place a process that includes telephone confirmation for business-critical activities such as financial transactions or the transfer of research and customer data.

Humans and Machines – the Dream Team?

Yet precisely because of these rapid developments and new waves of attacks, cyber security should not be left exclusively to artificial intelligence. Only humans and machines, working as a team, can successfully fight cyber attacks.

At the same time, AI-powered systems provide improved context for prioritising and responding to security alerts, making it possible to react quickly to incidents and to uncover root causes so that vulnerabilities can be mitigated and future problems avoided. Human beings alone can no longer adequately protect the dynamic enterprise attack surface, so AI provides the much-needed threat analysis and identification that cyber security professionals can act on to reduce the risk of breaches and improve the security posture.
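
As a loose illustration of context-based alert prioritisation, here is a minimal sketch in Python. The alert fields and the hand-picked weights are illustrative assumptions; an AI-based system would learn such weightings from data rather than hard-code them:

```python
# Minimal sketch of context-aware alert triage, not any vendor's scoring model.
# Assumption: hypothetical alert fields (severity, asset criticality, correlation).
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int           # 1 (low) .. 5 (critical), from the detection rule
    asset_critical: bool    # does the alert touch a crown-jewel system?
    correlated_alerts: int  # other recent alerts involving the same host or user

def triage_score(alert: Alert) -> float:
    """Combine detection severity with context so the riskiest alerts surface first."""
    score = alert.severity * 2.0
    if alert.asset_critical:
        score += 3.0
    score += min(alert.correlated_alerts, 5) * 1.5
    return score

alerts = [
    Alert("Failed logins spike", 2, False, 0),
    Alert("Beaconing to rare domain", 3, True, 4),
    Alert("New admin account created", 4, True, 1),
]

# Present alerts to the analyst in descending order of risk.
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{triage_score(a):5.1f}  {a.name}")
```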

AI at InfoGuard

Here at InfoGuard, we also rely on AI in our Cyber Defence Center. Cyber defence is about finding the needle in the haystack, the traces of a cyber attack in the network, as quickly as possible and reacting to it immediately. We can only do this thanks to a combination of intelligent systems and specialised analysts. Our experts are supported by detection & response solutions from our partners, including Tanium (EDR), Vectra AI (NDR) and Palo Alto Networks (XDR).

Would you like to learn more about AI-based cyber defence, or are you interested in our cyber defence services? Then get in touch with our experts right now.