The Impact of AI on Cyber Risk
Artificial intelligence has the potential to increase phishing, malware, and at-scale cyber attacks
Concerns are rising in the insurance and security industries about managing cyber risk amid the growing popularity of artificial intelligence (AI) tools such as ChatGPT.
These concerns are well founded. AI used with malicious intent has the potential to advance certain cyber attack techniques more quickly than ever, but it’s important to understand the capabilities and limitations of AI in order to understand its implications for cyber risk.
What AI will enable is more frequent and faster attacks at a larger scale. We can’t yet conceive of all the new risks that may emerge, but the three primary areas of concern that cyber insurance and security providers should prepare for in the age of advanced AI are phishing, malware, and at-scale attacks.
What Kind of AI Are We Actually Talking About Here?
The concept of artificial intelligence has been around for decades, but generative AI is the latest concern among cyber security and cyber insurance providers due to the explosion in the availability of powerful new AI tools to anyone with an Internet connection.
Generative AI uses algorithms to synthesize content based on a provided prompt. Several new generative AI tools, such as ChatGPT and Google Bard (AI writing assistants) and GitHub Copilot (an AI coding tool), rely on increasingly sophisticated large language models (LLMs). These are the types of AI tools we will discuss in this article.
AI Will Likely Make Targeted Phishing Attacks Easier
Spear phishing is a highly targeted form of phishing in which attackers do their research to find the names, companies, titles, and email addresses of real employees. They create spoofed email addresses that are difficult to distinguish from legitimate ones, then craft personalized messages that appear to come from coworkers, executives, vendors, or partners.
You’ve probably received a spoofed email more than once — where the email appears to be from your bank, employer, or even a government agency but actually links to a fraudulent website. In the past, these have been easier to spot when they contained awkward phrasing or grammatical errors.
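The lookalike-address problem can be sketched from the defender’s side. The check below flags sender domains that are within a small edit distance of a trusted domain but are not an exact match, which catches common typosquats like a digit substituted for a letter. This is a minimal illustration, not a production filter; the trusted domain list and the distance threshold are invented for the example.

```python
# Minimal sketch: flag sender domains that are near-matches of trusted ones.
# The TRUSTED set and max_dist threshold are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"example.com", "example-bank.com"}

def is_lookalike(sender_domain: str, max_dist: int = 2) -> bool:
    """True if the domain is close to, but not exactly, a trusted domain."""
    if sender_domain in TRUSTED:
        return False
    return any(edit_distance(sender_domain, t) <= max_dist for t in TRUSTED)

print(is_lookalike("examp1e.com"))   # digit '1' swapped in for the letter 'l'
print(is_lookalike("example.com"))   # exact match to a trusted domain
```

Real email security products layer many more signals (authentication results, sending history, content analysis) on top of checks like this one.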
The concern is that AI will make this high-effort type of phishing easier and more efficient, drastically improving success rates. LLMs like ChatGPT can help attackers launch spear phishing attacks at scale because they make it much easier to craft personalized, detailed messages with substantially less effort. For example, an attacker can ask an LLM to write a personal email that appears to be from a company’s CEO to its Head of Finance requesting an urgent funds transfer, and can even include specific details in the prompt about the context and tone of the request. There’s even an LLM favored by hackers called WormGPT, which was designed specifically for malicious activity and which has demonstrated the ability to write cunning emails that can be used in sophisticated phishing attacks.
While the increased ease of crafting personalized phishing emails is the current concern, generative AI also has data-mining capabilities that could help threat actors gather personal and company information to execute spear phishing attacks even more efficiently in the near future. The precise risk here is what is known in the intelligence field as “aggregation.”
Basically, a skilled researcher can assemble non-sensitive or unclassified information to provide a complete view of something that would otherwise be considered a secret. Generative AI with real-time access to the open Internet will probably be exceptional at aggregation, which will make reconnaissance much easier for attackers in preparing targeted attacks.
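The aggregation risk described above can be shown with a toy example: each record below is harmless on its own, but merging records that share a common key yields a profile detailed enough to support a spear phishing pretext. All names, sources, and fields here are fabricated for illustration.

```python
# Toy illustration of "aggregation": individually non-sensitive records,
# drawn from different public sources, merged into one targeting profile.
# All data below is fabricated for illustration.

from collections import defaultdict

public_records = [
    {"name": "J. Doe", "source": "company site",   "title": "Head of Finance"},
    {"name": "J. Doe", "source": "conference bio", "employer": "Acme Corp"},
    {"name": "J. Doe", "source": "social media",   "email": "jdoe@acme.example"},
]

profiles: dict = defaultdict(dict)
for record in public_records:
    key = record["name"]
    # Merge every non-key field into the aggregate profile.
    profiles[key].update({k: v for k, v in record.items() if k != "name"})

print(profiles["J. Doe"])
# The merged profile now holds title, employer, and email together:
# enough context for a convincing spear phishing pretext.
```

A human researcher can do this merge by hand for one target; the concern in the article is that generative AI with open-Internet access could do it at scale.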
Vulnerability Identification and Exploit Development May Accelerate Due to AI
Vulnerability identification is objectively challenging, whether you’re a cyber security expert or a threat actor. Developing reliable exploits for found vulnerabilities can also be difficult. Leveraging AI can be a way to simplify and accelerate both.
ChatGPT has already demonstrated an ability to find exploitable vulnerabilities and to develop sample exploits for them when it has access to target application source code. Fortunately, source code for commercial software is usually a closely guarded secret and not publicly available.
But if generative AI gains the ability to perform analysis on compiled binaries in order to identify exploitable vulnerabilities — a use case that, as far as we know, hasn’t yet been demonstrated — then we’re in big trouble. Given the similarity of this use case to others in which ChatGPT has proven itself, it’s likely only a matter of time. Machine code is just another language that LLMs like ChatGPT should be able to master, much as they have mastered plaintext source code.
The challenge will be getting the training content. There’s no widely available repository of machine code in the way that Stack Overflow hosts Python, C++, and other source code. However, it’s reasonable to expect attackers to assemble such a data set by reverse engineering code from compiled binaries and using it for training.
As AI capabilities continue to evolve, we expect that LLMs will be able to help attackers accelerate the identification of exploitable vulnerabilities and the development of exploits.
We Anticipate an Increase in At-Scale Cyber Attacks Using AI Coding Tools
Building an at-scale cyber attack is a long and resource-intensive effort. It typically takes attackers weeks or even months from the discovery of a zero-day vulnerability to unleashing at-scale exploitation. This cascading nature of cyber catastrophe (CAT) risk gives InsurSec providers like At-Bay ample time to help policyholders apply patches and mitigate risk before catastrophic losses are realized.
This could change with new AI-based tools. AI coding tools like GitHub Copilot, Amazon CodeWhisperer, and Tabnine can help attackers write code with increased speed and accuracy, thus accelerating the potential pace of at-scale exploits.
The use of AI coding tools will not change the cascading nature of cyber CAT risk. However, it could significantly shorten the window of time in which intervention is possible, making an effective CAT risk management program more crucial than ever to prevent at-scale cyber attacks and widespread loss.
We want to note here that the impact won’t be unique to cyber criminals engaged in exploit weaponization and malware creation. Defenders will also be able to use AI to do things like create speculative detection signatures for exploits that don’t even exist yet. Using generative AI tools well is a bit of an art and a science, and we’ll have to wait and see whether defenders will figure out how to leverage it better and faster than attackers.
How to Manage Cyber Risk in the Age of Advanced AI
To counter the sophistication, scale, and speed of executing cyber attacks using AI, defenders must become faster and more accurate as well. We expect more differentiation between best-in-class and average cyber security vendors and insurers. Here are our top recommendations for securing your organization:
- Email Security: It’s crucial to choose a secure, cloud-based email solution and layer it with a top-performing email security solution that can keep up with evolving phishing attacks. Our research has shown that deploying the top-performing email solution decreases the risk of cyber attacks by as much as 40%. See which email solutions and email security solutions are the most effective at preventing cyber incidents in our 2023 Ranking Email Security Solutions Report.
- Endpoint Security: Endpoint detection and response (EDR) is a system that gathers and analyzes security threat-related information from computer workstations and other endpoints, with the goal of detecting security breaches as they happen and facilitating a quick response to discovered or potential threats. Because we expect EDR providers to begin using the same AI models to improve their defenses, it’s more important than ever to identify best-in-class tools; these will be the most effective at preventing attacks in this new world of advanced AI.
- Active CAT Management: We expect the tempo of at-scale events to rise as threat actors increasingly adopt AI, so Active CAT Management will be significantly more important in a world where catastrophic (CAT) events can unfold and create widespread loss faster than ever. Partner with an insurance provider with a robust Active CAT Management program that can discover when a CAT event is pending and intervene to reduce CAT risk.
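To make the EDR idea above concrete, the sketch below matches endpoint telemetry events against a simple behavioral rule: an Office application spawning a command shell, a classic follow-on to a successful phishing email. The event schema, hostnames, and the rule itself are invented for illustration; real EDR products evaluate far richer telemetry.

```python
# Minimal sketch of EDR-style detection: match endpoint process events
# against a simple behavioral rule. Schema and rule are illustrative only.

from dataclasses import dataclass

@dataclass
class ProcessEvent:
    host: str          # endpoint that reported the event
    parent: str        # parent process image name
    image: str         # spawned process image name
    command_line: str  # full command line of the spawned process

def office_spawning_shell(event: ProcessEvent) -> bool:
    """Flag a classic phishing follow-on: an Office app launching a shell."""
    office_apps = {"winword.exe", "excel.exe", "powerpnt.exe"}
    shells = {"cmd.exe", "powershell.exe"}
    return event.parent.lower() in office_apps and event.image.lower() in shells

events = [
    ProcessEvent("ws-01", "explorer.exe", "chrome.exe", "chrome.exe"),
    ProcessEvent("ws-02", "winword.exe", "powershell.exe", "powershell -enc ..."),
]

alerts = [e for e in events if office_spawning_shell(e)]
for alert in alerts:
    print(f"ALERT on {alert.host}: {alert.parent} spawned {alert.image}")
```

The "speculative signatures" idea mentioned earlier amounts to generating many rules of this shape ahead of time, before a matching exploit is ever observed in the wild.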
AI may accelerate cyber risk, but that doesn’t make cyber security obsolete. Instead, it will quickly separate the under-performing security solutions from the top performers, making it more important than ever for organizations to partner with providers who can help them build the right posture and avoid attacks.