AI, particularly generative AI (GenAI), is having a growth spurt of Big Bang proportions. Two or three years ago, GenAI was nowhere to be found in government or business applications; now, it’s seemingly everywhere.
Splunk’s State of Security 2024: The Race to Harness AI found that 93% of organizations — and 91% of security teams — say they use GenAI, although 65% said they don’t fully understand the technology or its implications and 34% said their organizations lack a GenAI policy.
One thing everyone agrees on, however, is that AI will have a significant impact on cybersecurity. The global market for GenAI in cybersecurity is expected to grow from $628 million in 2023 to $3.13 billion by 2033.
The technology can greatly enhance monitoring, detection, response and recovery capabilities, but it can also help threat actors sharpen their attacks. That doesn’t change the shape of the threat landscape so much as it accelerates each type of risk.
“First of all, let’s define what AI is, and what it isn’t — and how it’s connected to governments,” said Igor Lys, Founder of Gambit, a government advisory group, and Secretary General of the Government Tomorrow Forum, an international public/private partnership. “AI today is language models, predictive algorithms that are very powerful but are only good for specific tasks and, most of all, are for working with textual data. It is not an intelligence in the sense of something with a point of view, a capacity to understand things or being able to generate independent tasks for itself.”
It may be only a tool, but it’s a powerful one that can add much to cyber defenses, if organizations remember that AI-enhanced cybersecurity still depends on people. “The danger of AI is not coming from any new angle,” Lys said. “In most cases, the weak link in cyber protection schemes is human, not the machine.”
Here’s a look at the ways GenAI can help government agencies, businesses and other institutions better protect their systems and data. For more details, click on the numerous hyperlinks throughout the article.
The Cybersecurity Benefits of GenAI
CYBER HYGIENE
The basics of cybersecurity, from using reputable software and promptly installing patches to encryption, fall under the umbrella of cyber hygiene. AI enhances many of these practices with real-time analysis, pattern recognition and, especially, automation.
“There are numerous examples of AI being used in government agencies to maintain cyber hygiene autonomously, without human supervision,” Lys said. For example, the Department of Defense (DoD) uses AI-driven endpoint detection and response tools to monitor and analyze activities on computers, servers, mobile devices and other endpoints to gain a comprehensive view of a system’s activities and health.
ENHANCING BIOMETRICS
Fingerprints and facial and voice recognition are essential components of multifactor authentication, which organizations increasingly use to deter identity compromises. GenAI can generate synthetic but highly realistic biometric templates that are used to train and stress-test biometric authentication systems, making them better at rejecting attempts to spoof them with deepfake biometrics.
THREAT DETECTION AND IDENTIFICATION
Collecting and analyzing vast amounts of data enables AI models to quickly detect patterns and anomalies such as unusual user behavior or unexpected spikes in network traffic that could indicate the presence of malware, a phishing campaign or other attacks. Using automation, AI can identify potential threats in real time, send alerts to security teams and, in some cases, initiate a response.
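As a rough illustration of one such check, the sketch below flags minutes whose network traffic volume spikes far above a recent baseline. The window size and threshold are arbitrary assumptions for the example; production tools apply learned models across many more signals than raw request counts.

```python
# Illustrative only: a minimal version of the kind of traffic-spike check an
# AI-driven monitor automates at scale. Window and threshold are assumptions.
from statistics import mean, stdev

def traffic_spike_alerts(requests_per_minute, window=60, z_threshold=3.0):
    """Flag minutes whose request volume sits far above the recent baseline."""
    alerts = []
    for i in range(window, len(requests_per_minute)):
        baseline = requests_per_minute[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (requests_per_minute[i] - mu) / sigma > z_threshold:
            alerts.append((i, requests_per_minute[i]))  # (minute index, volume)
    return alerts
```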
Phishing detection and prevention are important uses of AI, since phishing is the most common attack channel bad actors use to compromise user identities and set up ransomware and other attacks. A GenAI model can analyze an email asking for the recipient’s login information and detect signs of fraud, such as a spoofed sender address, suspicious website links or grammatical errors. The model can then alert the user and the organization’s security operations center.
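The sketch below is a hand-written approximation of a few signals such a model weighs. The trusted domain, phrase list and link check are assumptions made up for the example; a real GenAI detector learns these patterns from data rather than relying on fixed rules.

```python
# Illustrative only: hard-coded stand-ins for signals a GenAI phishing detector
# learns from data. The trusted domain and phrase list are assumptions.
import re

SUSPICIOUS_PHRASES = ("verify your account", "login immediately", "password expired")

def phishing_signals(sender, body, links):
    findings = []
    if not sender.lower().endswith("@example.gov"):       # assumed trusted domain
        findings.append("sender is outside the expected domain")
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in body.lower():
            findings.append(f"urgent credential request: '{phrase}'")
    for url in links:
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):  # bare IP instead of a hostname
            findings.append(f"link points to a raw IP address: {url}")
    return findings  # any findings would trigger an alert to the user and the SOC
```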
PREDICTIVE ANALYSIS
The ability to analyze massive datasets and perform advanced pattern recognition also allows GenAI to predict threats and recommend steps to prevent them.
APPLYING SECURITY PATCHES
Overworked security teams often fall behind on patching because of the time and effort it requires, but with automated security patch generation, GenAI can streamline the process. GenAI can analyze a flaw, generate a customized patch, test it in a controlled environment (without exposing the production network) and then apply that patch to vulnerable programs or systems.
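A minimal sketch of that analyze, generate, test, apply loop appears below. Every function is a hypothetical stub standing in for the organization’s own scanners, an AI patch-generation service and change-management tooling; the point is the order of operations, with sandbox testing before anything touches production and a human escalation path on failure.

```python
# Illustrative skeleton of the analyze -> generate -> sandbox-test -> apply flow.
# All functions are hypothetical stubs, not a real patching service.
def analyze_flaw(report):
    return f"root cause of {report['cve_id']}"        # stand-in for AI flaw analysis

def generate_patch(analysis):
    return f"patch addressing {analysis}"             # stand-in for a GenAI-drafted fix

def sandbox_test(patch):
    return True                                       # stand-in for tests run in isolation

def automated_patch_cycle(report):
    analysis = analyze_flaw(report)
    patch = generate_patch(analysis)
    if sandbox_test(patch):                           # never tested against production
        return f"deploy: {patch}"
    return f"escalate to an engineer: {analysis}"     # keep a human in the loop on failure

print(automated_patch_cycle({"cve_id": "CVE-0000-00000"}))
```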
INCIDENT RESPONSE
AI’s automation and analytics capabilities can strengthen each stage of incident response. Automated triage identifies the most critical incidents, allowing teams to focus on the most serious threats. AI also enables the use of automated playbooks that initiate responses, including basic steps such as blocking malicious IP addresses and isolating systems.
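As a simplified sketch of such a playbook, the snippet below maps a triaged alert to basic containment actions. The alert fields, severity levels and actions are assumptions; real security orchestration platforms carry out these steps through firewall and endpoint-management APIs.

```python
# Illustrative only: a toy automated-response playbook. Field names, severity
# levels and actions are assumptions; real platforms call firewall and EDR APIs.
def run_playbook(alert):
    actions = []
    if alert.get("severity") == "critical":
        actions.append(f"block IP {alert['source_ip']} at the firewall")
        actions.append(f"isolate host {alert['host']} from the network")
        actions.append("open an incident ticket for the security team")
    else:
        actions.append("queue for analyst triage")     # lower-severity alerts stay with humans
    return actions

print(run_playbook({"severity": "critical", "source_ip": "203.0.113.7", "host": "ws-042"}))
```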
In today’s cyber landscape, response isn’t limited to individual agencies or organizations: Information sharing also is an important part of cyber defense. CISA recently staged a public/private tabletop exercise on AI security incidents, with the goal of identifying opportunities and protocols for information sharing and operational collaboration.
SECURE SOFTWARE DEVELOPMENT
GenAI is used widely by software developers, who were among its first enthusiastic adopters. And although unsupervised AI code writing contributes to the problem of insecure software, AI can also be part of the solution. When they find a flaw in code, AI models can help developers understand the problem and fix it faster. AI can also more efficiently carry out code reviews and generate tests to ensure that fixes were made and the code behaves as expected.
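The snippet below gives a flavor of the kind of regression test an AI assistant might generate after a fix; the function, its sanitization rule and the test cases are all assumptions made up for the example.

```python
# Illustrative only: a fixed input-sanitization helper and the sort of regression
# test an AI assistant might generate for it. Both are assumptions for the example.
def sanitize_username(raw):
    """Keep only characters the (hypothetical) fixed code considers safe."""
    return "".join(ch for ch in raw if ch.isalnum() or ch in "._-")

def test_sanitize_username_strips_injection_characters():
    assert sanitize_username("alice;rm -rf /") == "alicerm-rf"
    assert sanitize_username("bob.o'neil") == "bob.oneil"
    assert sanitize_username("carol_01") == "carol_01"   # valid input passes through unchanged

test_sanitize_username_strips_injection_characters()
```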
REMEDIATION
AI-powered security platforms can use predefined protocols and runbook strategies to isolate systems affected by an attack, stopping the spread of the intrusion and limiting damage. Meanwhile, AI analysis of both real-time and historical data can give security analysts insights to help speed recovery.
THREAT HUNTING
Manual threat hunting is typically a time-consuming, painstaking process involving teams of experts and numerous false positives. AI can make threat hunting more effective through fast processing, more accurate threat identification and pattern detection. High-speed automation and the ability to “learn” as it goes, adapting to changes in threat tactics, make AI an ideal tool for hunting down risk.
This article appeared in our guide, “Government Gears Up for a Better Cyber Future.” To see more about how agencies are keeping on top of security basics while staying agile enough to respond to emerging threats, download the guide.