Cybersecurity’s Code Red: AI Is Fueling the Fire
- By Winston Thomas
- October 28, 2024
What happens when the very technology we rely on for defense becomes the weapon of choice for our adversaries? That's the question AI now forces on cybersecurity.
“AI is being used by threat actors in multiple disciplines,” warns Morey Haber, author of security publications and chief security advisor at BeyondTrust. “It can be as simple as using AI to craft highly convincing phishing emails in multiple languages, making them harder to spot than ever before.”
AI’s impact on cybersecurity goes beyond phishing. Attackers now use it for malware obfuscation, improving the odds of slipping past traditional security tools trained to detect malicious code. Its ability to generate sophisticated attack scripts and automate vulnerability discovery also means that even novice attackers can be potent.
AI shifts the attackers’ focus
AI is a double-edged sword with very sharp edges, and it demands that we change how we approach cybersecurity.
Take developers, for example. As the world becomes software-centric, developers hold the keys to critical systems. They can even touch infrastructure through Infrastructure as Code (IaC). Yet, in many companies, they receive less scrutiny than other privileged users.
This has made developers a favorite target of attackers. “Most companies do not recognize the threats developers pose to the business today,” says Haber.
“The principles of least privilege have to be honored or actually enforced on developers,” he advises.
However, enforcing least privilege on developers can be challenging, as traditional development workflows often require administrative rights. This creates a significant security gap that attackers can exploit.
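How that gap gets closed depends on the toolchain, but one common pattern is to put automated least-privilege checks in front of IaC changes. The snippet below is a minimal sketch of that idea, not anything Haber or BeyondTrust prescribes: a Python script, assumed to run in CI against a Terraform plan exported as JSON, that fails the build when an AWS-style IAM policy grants wildcard actions.

```python
import json
import sys

# Hypothetical guardrail: scan a Terraform plan (exported as JSON) for IAM
# policies that grant wildcard actions, and fail the pipeline if any are found.
# Field names assume AWS-style IAM policy documents embedded in the plan.

def find_wildcard_actions(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)

    findings = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}
        policy_doc = after.get("policy")
        if not policy_doc:
            continue
        try:
            statements = json.loads(policy_doc).get("Statement", [])
        except (TypeError, ValueError):
            continue
        if isinstance(statements, dict):
            statements = [statements]
        for stmt in statements:
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            if any(a == "*" or a.endswith(":*") for a in actions):
                findings.append(change.get("address", "<unknown resource>"))
    return findings

if __name__ == "__main__":
    offenders = find_wildcard_actions(sys.argv[1])
    for address in offenders:
        print(f"Over-broad IAM action in {address}")
    sys.exit(1 if offenders else 0)
```

A real pipeline would typically lean on a purpose-built policy engine rather than a hand-rolled script, but the principle is the same: developers keep their workflow, and over-broad permissions are caught before they reach production.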
The ghost in the machine
Another often overlooked attack vector is identity security. “The most overlooked book is identity security,” Haber notes, referring to his own publications on the various attack vectors. In particular, he is calling out the laissez-faire attitude towards dormant accounts.
Dormant accounts, often forgotten or overlooked in the hustle and bustle of IT management, can become easy targets for attackers. These “ghost accounts” can linger in systems, providing a backdoor for malicious actors to gain access and wreak havoc.
Haber calls for companies to improve their security hygiene. He also believes they need to enforce processes and policies when working with third-party vendors or outside developers, and to be very strict about identity security.
“It’s that one little straggler out there that tends to burn the organization,” he points out.
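One practical way to catch that straggler is a routine sweep for accounts that have not signed in for a long time. The sketch below is a hypothetical example of such a hygiene check, assuming a directory export (accounts.csv) with username and last_login columns and an arbitrary 90-day cutoff; neither the format nor the threshold comes from Haber.

```python
import csv
from datetime import datetime, timedelta, timezone

# Hypothetical hygiene check: flag accounts whose last sign-in is older than a
# cutoff (90 days here) so they can be reviewed, disabled, or removed.
# Assumes an export (accounts.csv) with "username" and "last_login" columns,
# the latter in ISO 8601 format with an offset, e.g. 2024-07-01T09:30:00+00:00.

DORMANCY_THRESHOLD = timedelta(days=90)

def find_dormant_accounts(csv_path: str) -> list[str]:
    cutoff = datetime.now(timezone.utc) - DORMANCY_THRESHOLD
    dormant = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            last_login = datetime.fromisoformat(row["last_login"])
            if last_login < cutoff:
                dormant.append(row["username"])
    return dormant

if __name__ == "__main__":
    for username in find_dormant_accounts("accounts.csv"):
        print(f"Dormant account flagged for review: {username}")
```

Flagged accounts would then go through review and, where appropriate, be disabled or removed, along with any credentials and access tokens tied to them.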
The AI arms race: A call to action
Using AI in cyberattacks creates an arms race, with defenders and attackers constantly trying to outmaneuver each other.
To stay ahead of the curve, CISOs must adopt a proactive approach, embracing AI-powered security tools while also addressing the unique blind spots created by AI.
“Businesses today have got to get back to basics,” Haber advises. “Then, ransomware gets minimized.”
Beyond the basics, organizations must also prioritize:
- AI-powered threat detection and response: Leverage AI to identify and respond to threats in real-time, enhancing your security posture (see the sketch after this list).
- Zero Trust security framework: Implement a Zero Trust model to limit access to sensitive data and systems, reducing the impact of compromised accounts.
- Continuous security awareness training: Educate employees about the latest AI-powered threats, including sophisticated phishing attacks and deepfakes.
- Collaboration and threat intelligence sharing: Share threat intelligence with other organizations and participate in industry initiatives to stay informed about emerging threats.
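To make the first item concrete, here is one minimal, hypothetical form AI-powered detection can take: an unsupervised anomaly detector (scikit-learn's IsolationForest) fitted on baseline login behavior and used to flag outliers for analyst review. The features, numbers, and simulated data are assumptions for illustration only, not a reference implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical illustration of "AI-powered" detection: fit an unsupervised
# anomaly detector on simple per-login features (hour of day, MB transferred,
# failed attempts before success) and flag outliers for analyst review.

rng = np.random.default_rng(42)

# Simulated baseline of normal logins: business hours, modest transfer sizes.
normal = np.column_stack([
    rng.normal(13, 2, 500),      # hour of day
    rng.normal(50, 15, 500),     # MB transferred
    rng.poisson(0.2, 500),       # failed attempts before success
])

# A few suspicious sessions: 3 a.m. logins, large transfers, many failures.
suspicious = np.array([
    [3, 900, 6],
    [2, 750, 4],
], dtype=float)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
for row, label in zip(suspicious, model.predict(suspicious)):
    verdict = "ANOMALY" if label == -1 else "ok"
    print(f"hour={row[0]:.0f} mb={row[1]:.0f} failures={row[2]:.0f} -> {verdict}")
```

In practice, such a model is only one layer: flagged events still need triage, and the baseline must be refreshed as legitimate behavior changes.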
The future of cybersecurity is inextricably linked with AI. By understanding the challenges and opportunities presented by AI, CISOs can better protect their organizations from the next generation of attacks.
Navigating the AI bubble
While AI offers tremendous potential for cybersecurity, it’s essential to distinguish between hype and reality.
“The term AI is often misused,” Haber cautions. “I do believe there’s an AI bubble. I do believe it is going to pop in the very near future.”
CISOs must carefully evaluate AI-powered security solutions, ensuring they deliver real value and address specific security needs.
Meanwhile, Haber believes humans will remain vital for cybersecurity in an AI era. “So, you still have to train, train, train,” Haber emphasizes. “Education and training remain key.”
Companies must invest in continuous security awareness training to educate employees about the latest threats and best practices. This includes training on identifying sophisticated phishing attacks, deepfakes, and other AI-powered threats that have made training materials written only five years ago obsolete.
Offense as a good defense
The use of AI in cyberattacks blurs the lines between traditional cybercrime and cyber warfare. Nation-states increasingly use AI-powered tools to conduct espionage, disrupt critical infrastructure, and spread disinformation.
Being passive is not good enough these days. “Offensive cyberattack strategies are quite good,” Haber acknowledges, “but if you attack the wrong place or hit the wrong IP address, you might take down a hospital.”
Ultimately, it is not enough for companies to fight this battle alone. Even with AI as an ally, the cybersecurity landscape now favors the attackers as they continue to optimize, personalize and scale their attacks with algorithms. The answer lies in collaboration.
Haber calls for the international community, especially law enforcement agencies and regulators, to establish norms for using AI in cyber warfare to prevent catastrophic consequences.
Image credit: iStockphoto/Artfoliophoto
Winston Thomas
Winston Thomas is the editor-in-chief of CDOTrends. He likes to piece together the weird and wonderful tech puzzle for readers and identify groundbreaking business models led by tech while waiting for the singularity.