Generative AI has reshaped the way organizations operate. From generating content, videos, and images to writing code, it can do it all. Yet despite this metamorphosis, industry leaders including Elon Musk are advocating putting the brakes on AI development, or at least easing up a bit. Why? While AI technology holds countless spectacular and fascinating prospects for society, there also exists a spectrum of potentially harmful and unforeseen outcomes that could unfold.
Regrettably, this transformative journey has already cracked open Pandora’s box, unveiling a new cybersecurity challenge — AI-driven phishing attacks.
In earlier days, identifying phishing emails posed little difficulty. When a hacker unfamiliar with a certain language tried to pull one off, the emails would often feature telltale indicators such as flawed grammar, irrational vocabulary, and substandard spelling. These conspicuous discrepancies were easy for automated security measures and reasonably vigilant individuals to pick up. Unfortunately, with generative AI, it’s a whole new ball game!
Government Assets: A Goldmine for Hackers
Weaponry blueprints, urban development strategies, sensitive access maps for electric grids and nuclear facilities, personal information of high-level officials — civil institutions house a trove of highly valuable information worth a fortune on the black market. These alluring offerings attract buyers spanning rival nations, terrorist factions, and multinational corporations seeking an upper hand in foreign markets.
When it comes to infiltrating government systems, phishers would ordinarily require elaborate tools and malware, as civil institutions usually deploy sophisticated security protocols. Therefore, phishers opt for a subtler approach — targeting the humans operating those systems. Leveraging AI capabilities, they fabricate deepfake content to manipulate officials into divulging confidential data. Furthermore, AI can aid in formulating intelligent responses to messages. It can create convincingly genuine websites or documents for end-users, and if the situation demands, hackers can employ AI-generated voices, built from recordings extracted during unsolicited spam calls, to respond in real time.
When it comes to preserving a civil institution’s digital assets, the human element is a decisive factor. Such organizations cannot afford malicious insiders. For instance, a former employee of the U.S. Department of Energy (DOE) and the U.S. Nuclear Regulatory Commission (NRC) faced charges in January 2015 for attempting a “spear-phishing” attack. The attack targeted numerous DOE employee email accounts, aiming to compromise and disrupt U.S. government computer systems housing sensitive nuclear weapon-related data. His intention was to provide foreign entities access to classified information or disrupt critical systems.
It is equally vital to train employees to be as good at sniffing out phishing emails as hackers are at smelling vulnerabilities.
Fortifying Against AI-based Phishing
While businesses are frantically trying to shield themselves from ransomware attacks, we are yet to prepare for an approaching avalanche of synthetic media. There’s no silver bullet for the phishing crisis; rather, what we need is a holistic, multi-faceted approach to cybersecurity.
The first line of defense should begin with securing the most vulnerable digital link in the security chain: endpoints. Adopting a unified endpoint management (UEM) solution gives IT admins full visibility into corporate assets. When it comes to phishing, organizations need to rigorously filter incoming traffic. Through UEMs, IT personnel can block dubious applications and malicious websites. Furthermore, they can remotely configure firewalls and filter emails so that only authorized communication passes through. Phishers are after your credentials, so it’s always safer to use UEMs or dedicated password managers to enforce stringent password policies.
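To make the filtering idea concrete, here is a minimal sketch of the kind of inbound-mail policy a UEM or secure email gateway might enforce. The sender allowlist, blocked domains, and function names are hypothetical illustrations, not the API of any real product.

```python
import re

# Hypothetical policy lists for illustration only.
ALLOWED_SENDER_DOMAINS = {"example.com", "partner.example"}
BLOCKED_LINK_DOMAINS = {"free-prizes.example", "login-verify.example"}

URL_PATTERN = re.compile(r"https?://([\w.-]+)")

def is_suspicious(sender: str, body: str) -> bool:
    """Flag a message if its sender domain is unapproved or its body
    links to a known-bad domain."""
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain not in ALLOWED_SENDER_DOMAINS:
        return True
    for domain in URL_PATTERN.findall(body):
        if domain.lower() in BLOCKED_LINK_DOMAINS:
            return True
    return False
```

A real gateway would add attachment scanning, SPF/DKIM checks, and reputation feeds, but the core design is the same: deny by default, allow by explicit policy.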
The current landscape demands a “never trust, always verify” approach. Highlighting the significance of this paradigm shift, President Biden stated in his executive order that incremental improvements will not give the security that organizations need, and that government institutions must adopt a zero trust (ZT) architecture by the end of fiscal year (FY) 2024. Implementing ZT ensures that every remote user is authenticated, authorized, and verified based on identity before being granted access to corporate assets.
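The authenticate-authorize-verify sequence can be sketched as a per-request gate. The roles, resources, and device-compliance flag below are invented for illustration; real zero-trust deployments layer in token validation, device posture services, and continuous re-evaluation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    device_compliant: bool
    resource: str

# Illustrative role-to-resource mapping.
ROLE_PERMISSIONS = {
    "admin": {"hr-records", "grid-maps"},
    "analyst": {"grid-maps"},
}

def authorize(req: Request, known_users: set) -> bool:
    """Every request is checked; nothing is trusted by default."""
    if req.user not in known_users:        # authenticate identity
        return False
    if not req.device_compliant:           # verify device posture
        return False
    # authorize against least-privilege role permissions
    return req.resource in ROLE_PERMISSIONS.get(req.role, set())
```

The key design point is that the check runs on every request, not once per session: a stolen session on a non-compliant device still fails the gate.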
Balancing the Scales of Attack and Defense
As the saying goes, “fight fire with fire” — adversarial AI needs to be fought with defensive AI. Recently, organizations have ventured into the realm of Generative Adversarial Networks (GANs), a class of deep learning models capable of generating synthetic datasets and simulating mock social engineering attacks. GANs can anticipate potential attack vectors that malicious actors are yet to deploy, enabling proactive countermeasures. While cybercriminals can also use these capabilities, the idea is to think like an attacker and be prepared to face one.
Machine learning algorithms, anomaly detection, and real-time monitoring together play a pivotal role in identifying and mitigating potential security breaches. Machine learning can scrutinize the phrasing of incoming emails and compare it against patterns of past attacks, flagging unusual message patterns. Leveraging AI/ML technology, computer vision applications can extract meaningful information from visual data to fortify AI systems. For instance, by scanning files and analyzing web pages, computer vision can be trained to authenticate a source’s legitimacy. By training the system to instantly discern the precise layouts and color schemes of authorized login pages, it can promptly block fraudulent imitations that diverge from those criteria.
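As a toy illustration of comparing an incoming message against past patterns, the sketch below builds a bag-of-words baseline from a sender’s message history and flags messages whose cosine similarity to that baseline falls below a threshold. The function names and threshold are assumptions; production systems use trained models, not this hand-rolled heuristic.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_anomalous(message: str, history: list, threshold: float = 0.2) -> bool:
    """Flag messages that deviate sharply from a sender's past phrasing."""
    if not history:
        return True  # no baseline: treat as unusual
    baseline = vectorize(" ".join(history))
    return cosine(vectorize(message), baseline) < threshold
```

A message echoing the sender’s usual vocabulary scores high; a sudden “urgent wire transfer” request with no lexical overlap scores near zero and gets flagged for review.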
Similarly, natural language processing tools provide valuable context by recognizing distinctive phrasing, tone, and other linguistic nuances. Natural language processing can also be used to analyze prior correspondence from various sources to ascertain user familiarity with the senders. Any deviation from a predefined structure can trigger alerts indicative of a potential Business Email Compromise (BEC) attack.
In the midst of this uncharted cyber frontier, organizations are compelled to implement robust security protocols, allocate resources judiciously, and assess their cybersecurity preparedness. After all, survival is a privilege reserved for those best adapted to their environment.
Apu Pavithran is the founder and CEO of Hexnode, the award-winning Unified Endpoint Management (UEM) platform. Hexnode helps businesses manage mobile, desktop and workplace IoT devices from a single place. Recognized in the IT management community as a consultant, speaker and thought leader, Apu has been a strong advocate for IT governance and Information security management. He is passionate about entrepreneurship and devotes a substantial amount of time to working with startups and encouraging aspiring entrepreneurs. He also finds time from his busy schedule to contribute articles and insights on topics he strongly feels about.