ChatGPT's cybersecurity implications: The good, the bad and the ugly

Created primarily for conversational use, ChatGPT’s versatility has made it an asset in multiple domains, including cybersecurity.

OpenAI’s ChatGPT harnesses the power of generative artificial intelligence and promises to transform how we interact with computers and automate tasks. One of the most intriguing features of ChatGPT is its ability to converse like an actual human, generating a human-like response to almost any query or instruction. It can draft an essay, conduct research, create a social media post, or even debug computer code almost instantaneously. ChatGPT’s latest version even excels on some of the most challenging exams, like the Uniform Bar Exam and the SAT.

ChatGPT for cybersecurity

ChatGPT was primarily intended for conversational use, particularly for developing chatbots and virtual assistants that interact with users in natural language. Its versatility and flexibility have made it a valuable asset in many domains. Experts are still in the early stages of identifying its applications in cybersecurity. That said, there are several cybersecurity use cases ChatGPT can address straight away.

Hype and controversy are bound to follow such a powerful tool. What follows is an analysis of the good, the bad and the ugly sides of ChatGPT, specifically from a cybersecurity perspective.

ChatGPT is not specifically designed for code debugging, but developers have been using it to identify code errors and vulnerabilities. It can already generate natural-language explanations of code errors and suggest potential solutions. With the right training data, it could come to understand programming concepts and syntax much like an experienced coder. In the future, ChatGPT is expected to evolve to analyze code structure and identify logical loopholes in a program, helping avert security vulnerabilities.
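As a simple illustration, the sketch below shows how a developer might ask a chat model to review a snippet for bugs. It assumes the OpenAI Python SDK, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name and snippet; it is a rough sketch, not an official workflow.

```python
# A minimal sketch of LLM-assisted code review. Assumes the OpenAI Python
# SDK (pip install openai) and an API key in the OPENAI_API_KEY environment
# variable; the model name and the buggy snippet are illustrative.
from openai import OpenAI

client = OpenAI()

snippet = """
def withdraw(balance, amount):
    balance = balance - amount   # no check for sufficient funds
    return balance
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("You are a security-focused code reviewer. Explain any "
                     "bugs or vulnerabilities in the code and suggest fixes.")},
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
```

The same pattern extends to longer files, though context-window limits mean a large codebase has to be reviewed in pieces.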

ChatGPT can also be used to sift through the large volumes of logs and other text data generated during a security incident to identify patterns and anomalies associated with an attack. It can instantly produce natural-language summaries of its findings, helping cybersecurity experts and forensic analysts understand the scope, timeline and nature of an attack for rapid remediation.
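A rough sketch of this kind of log triage might look like the following, again assuming the OpenAI Python SDK; the model name, the auth.log file and the chunk size are all hypothetical, and real logs are usually far too large to send in a single request.

```python
# A rough sketch of LLM-assisted log triage. Assumes the OpenAI Python SDK
# and an API key in OPENAI_API_KEY; the model name, file name and chunk
# size are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_log_chunk(chunk: str) -> str:
    """Ask the model for an incident-style summary of one log excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are an incident-response analyst. From the log "
                         "excerpt below, identify suspicious patterns, "
                         "affected hosts and a rough timeline of events.")},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content

with open("auth.log") as f:  # hypothetical log file
    log_text = f.read()

# Naive fixed-size chunking to respect the model's context window;
# a production pipeline would split on record boundaries instead.
chunks = [log_text[i:i + 8000] for i in range(0, len(log_text), 8000)]
for summary in map(summarize_log_chunk, chunks):
    print(summary)
```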

By assisting with tasks such as vulnerability discovery, forensic analysis and report generation, ChatGPT can potentially aid cybersecurity teams struggling with the persistent skills shortage. Security teams can use ChatGPT to automate certain processes, like analyzing large log files and creating executive reports, freeing them to focus on tasks that require human analysis and expertise.

Software and cybersecurity tools and systems depend on correct configurations to work efficiently. Through carefully crafted instructions, a practice known as prompt engineering, ChatGPT can be guided to draft configurations for servers, firewalls, intrusion prevention systems and other cybersecurity tools, and to generate scripts that automate those configurations in a secure and efficient manner.
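As a hedged example of what such prompt-driven configuration might look like, the sketch below asks a model for a draft nftables ruleset, assuming the same OpenAI Python SDK; the requirement, model name and output file are illustrative, and any generated configuration should be reviewed by a human before it touches a live system.

```python
# A minimal sketch of prompt-driven configuration drafting. Assumes the
# OpenAI Python SDK and OPENAI_API_KEY; the requirement, model name and
# output file are illustrative. A human should review the draft before
# anything is applied to a live system.
from openai import OpenAI

client = OpenAI()

requirement = (
    "Write an nftables ruleset for a web server that allows inbound SSH (22), "
    "HTTP (80) and HTTPS (443), permits established connections, and drops "
    "all other inbound traffic. Output only the ruleset."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": requirement}],
)

# Save as a draft for human review instead of applying it automatically.
with open("firewall_draft.nft", "w") as f:
    f.write(response.choices[0].message.content)
```

Writing the output to a draft file rather than piping it straight into the system keeps a human in the loop, which matters when the configuration in question is a firewall.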

ChatGPT for cybercrimes

Like any technology, ChatGPT is a double-edged sword. In the wrong hands, the AI can be used to perpetrate advanced cybercrimes and aid adversaries.

ChatGPT can help cybercriminals create flawless phishing emails that easily pass as the work of a trusted authority figure, such as a company's CEO. Attackers can compose such emails in multiple languages that still read as though written by native speakers. With some prompt engineering, attackers can also use ChatGPT to mimic the style and tone of specific influential or high-ranking individuals.

Responses from ChatGPT are convincingly human-like, making it easier to advance and automate social engineering scams. AI-generated pictures, audio impersonation and deepfake videos, combined with a conversational bot proficient in natural language, create the perfect conditions for bogus social media profiles and all kinds of successful social engineering attempts.

Anti-malware tools detect and block malware through code inspection and pattern matching against known malware signatures. With carefully calculated queries, cybercriminals can create different versions of the same malware on demand with minimal effort on their part. Such polymorphic malware is hard to detect with traditional security tools, and the cybercrime underground may already have used ChatGPT to create ransomware programs.

Threat actors can also abuse ChatGPT’s ability to debug code, using it to hunt for security loopholes and vulnerabilities in applications and systems. Instead of combing through thousands of lines of code, attackers can simply prompt ChatGPT to deconstruct it and surface potential flaws. Just recently, ChatGPT was used to identify vulnerabilities in smart contracts.

Cyber propaganda and ethical dilemmas: The ugly side of ChatGPT

In the hands of nation-state actors, propaganda agents, hate groups, cybercrime syndicates, and hacktivists, ChatGPT can have a far wider and much deeper impact on society as a whole.

State-sponsored cyber actors and hate groups can leverage ChatGPT to spread conspiracies and propaganda much faster and at a wider scale than traditional bots. With their conversational style and vast pool of knowledge, ChatGPT-powered bots can make logical, convincing arguments on social media channels to promote certain values and agendas. They can even impersonate influential personalities to attract attention.

Like any AI-based tool, ChatGPT depends entirely on the data it is fed: the accuracy and integrity of the training data determine the quality of its responses. AI models can absorb unconscious human biases from that data. Even worse, malicious actors can intentionally corrupt AI training data to insert blatant biases and elicit false or offensive responses.

If future versions of ChatGPT collect and store users’ queries and the information they share, they could pose a huge privacy and security risk. Malicious actors could steal and manipulate confidential or sensitive data for nefarious purposes.

ChatGPT has its good and bad sides, just like any other powerful technology. It has the potential to boost cyber defenses and augment security teams. On the other hand, it is equally capable of being exploited by cybercriminals and nation-state actors. At its core, however, it is still a conversational chatbot that was never designed for cybersecurity or cybercrime.

Combined with human expertise and comprehensive security architectures like next-generation SASE (secure access service edge), ChatGPT can become an asset for organizations struggling to keep up with an AI-powered security and threat landscape. The key is embracing emerging technologies, tools and techniques instead of shunning them.

Etay Maor is the senior director of security strategy for Cato Networks. Previously, Maor was the chief security officer for IntSights and held senior security positions at IBM and RSA Security’s Cyber Threats Research Labs. An adjunct professor at Boston College, he holds a BA in computer science and an MA in counter-terrorism and cyber terrorism from Reichman University (IDC Herzliya), Tel Aviv. Contact him at Etay.Maor@catonetworks.com.
