How Generative AI can both harm and heal cybersecurity


OpenAI’s latest iteration of its renowned generative AI platform, GPT-4o, has reached new levels of sophistication. However, amid our admiration for its capabilities, hackers are likely exploring avenues to exploit it for malicious ends. Researchers studying its predecessor, GPT-4, found that the model could exploit 87 percent of the one-day vulnerabilities in their test set when supplied with the corresponding CVE descriptions.

These vulnerabilities, for which fixes are available but not yet applied by system administrators, present prime opportunities for hackers to infiltrate systems. Alarmingly, GPT-4 demonstrated the capability to exploit them autonomously. Although confirmed cases of GenAI autonomously exploiting systems in the wild have yet to surface, the technology is already creating challenges for cybersecurity professionals.
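At its core, one-day exposure is a version-comparison problem: a patched release exists, but the deployed version predates it. The sketch below illustrates the idea of matching a software inventory against fix advisories; the package names, versions, and advisory format are hypothetical, invented for illustration rather than drawn from the article or any real advisory feed.

```python
# Flag installed packages that predate the release in which a known
# vulnerability was fixed ("one-day" exposure: a patch exists but has
# not yet been applied by the system administrator).

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_one_day_exposures(installed: dict, advisories: dict) -> list:
    """Return (package, installed_version, fixed_version) triples for
    every package whose installed version is older than the fix."""
    exposures = []
    for package, fixed_in in advisories.items():
        current = installed.get(package)
        if current and parse_version(current) < parse_version(fixed_in):
            exposures.append((package, current, fixed_in))
    return exposures

# Hypothetical inventory and advisory data, for illustration only.
installed = {"webframework": "2.4.1", "tlslib": "1.1.9", "imagelib": "3.0.0"}
advisories = {"webframework": "2.4.2", "tlslib": "1.1.9"}

print(find_one_day_exposures(installed, advisories))
# [('webframework', '2.4.1', '2.4.2')]
```

The point of the exercise is how mechanical the defender's side of this race is: anything this easy to enumerate is equally easy for an automated attacker to enumerate, which is what makes unapplied patches such a rich target.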

Sharef Hlal, Group-IB’s head of digital risk protection analytics team for the Middle East and Africa, notes that cybercriminals have weaponized GenAI. He underscores the dual nature of generative AI in cybersecurity, recognizing its remarkable potential but also its susceptibility to misuse.

Mike Isbitski, director of cybersecurity strategy at Sysdig, concurs, highlighting GenAI’s role in exacerbating security threats. He explains that the homogeneous nature of the cloud landscape lets attackers automate much of their workflow, from reconnaissance through to the attacks themselves.

Additionally, Hlal observes that scammers are leveraging AI advancements to enhance fraudulent activities, pointing to compromised ChatGPT credentials being traded on the dark web. Social engineering, particularly email phishing campaigns and deepfakes, has grown more sophisticated with the aid of GenAI.

Despite these challenges, Isbitski remains optimistic about leveraging GenAI for defensive purposes. He identifies system hardening and risk contextualization as key areas where GenAI can assist security professionals.
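Risk contextualization, one of the defensive uses Isbitski points to, means weighing a raw scanner finding against where and how the affected system actually runs. A minimal sketch of the idea: assemble the finding and its deployment context into a prompt that a GenAI model could then triage. The finding fields, context fields, and prompt wording here are illustrative assumptions, not a standard schema or any vendor's API.

```python
# Sketch of "risk contextualization": combine a raw security finding
# with environment context into a prompt an LLM could use to rank
# real-world risk, rather than relying on the CVSS score alone.
# All field names and the prompt text are hypothetical examples.

def build_triage_prompt(finding: dict, context: dict) -> str:
    """Render a finding plus deployment context as a triage prompt."""
    controls = ", ".join(context["controls"]) or "none"
    return (
        "You are assisting a security analyst. Assess the practical risk "
        "of the following finding and recommend a priority (low/medium/high).\n\n"
        f"Finding: {finding['title']} (CVSS {finding['cvss']})\n"
        f"Affected component: {finding['component']}\n"
        f"Internet-exposed: {context['internet_exposed']}\n"
        f"Handles sensitive data: {context['sensitive_data']}\n"
        f"Compensating controls: {controls}"
    )

# Hypothetical example finding and context.
finding = {"title": "Outdated TLS library", "cvss": 7.5,
           "component": "payment-gateway"}
context = {"internet_exposed": True, "sensitive_data": True,
           "controls": ["WAF"]}

prompt = build_triage_prompt(finding, context)
# The prompt would then be sent to a chat-completion endpoint; the
# model's reply gives the analyst a context-aware priority instead of
# a bare severity number.
print(prompt)
```

The value over a plain severity score is the context: an internet-exposed payment service with a 7.5 finding is a very different priority from the same finding on an isolated test box, and that distinction is exactly what a model can be asked to reason about.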

Hlal sees AI as a pivotal development in cybersecurity, enhancing defense mechanisms by augmenting human expertise. However, he stresses the importance of responsible usage and ethical implementation, urging a holistic approach to address AI’s impact on security.

In conclusion, while GenAI presents real opportunities for societal benefit, it also demands vigilant oversight to prevent its exploitation for malicious ends. By prioritizing responsible usage and ethical practices, we can harness AI for positive outcomes in cybersecurity and beyond.