Cybercriminals are supercharging their attacks with the help of large language models such as ChatGPT, and security experts warn that they've only scratched the surface of artificial intelligence's threat-acceleration potential.

At last month's RSA Conference, cybersecurity expert Mikko Hyppönen sounded the alarm that AI tools, long used to help bolster corporate security defenses, are now capable of doing real harm. "We are now actually starting to see attacks using large language models," he said.

In an interview with Information Security Media Group, Hyppönen recounted an email he received from a malware writer boasting that he'd created a "completely new virus" using OpenAI's GPT that can create computer code from instructions written in English.


Maria Dinzeo

Maria Dinzeo is a San Francisco-based journalist covering the intersection of technology and the law, with a focus on AI, privacy and cybersecurity.