
Can ChatGPT revolutionise cybersecurity - for good and bad?

e92plus
January 2023

by Colin Brown

AI tools are becoming more widely used, but Neil Langridge, Marketing and Alliances Director at e92plus, wonders whether that’s a positive.

Artificial Intelligence, and the launch of ChatGPT in particular, have initiated a big discussion on how technology can be actively used by everyone as a source of information and insights. Along with Machine Learning, AI represents both the answer to so many cybersecurity challenges and the dystopian future presented by all good apocalyptic sci-fi films and books, as Skynet takes on the human race.

Beyond its inclusion in many vendor slide decks as shorthand for next-generation, more automated technology, AI is now becoming increasingly mainstream as a concept. At the end of 2022, it crashed through into public awareness in a whole new way with the launch of ChatGPT, part of the OpenAI platform, which answers questions, engages in conversational dialogue and provides detailed responses.

This step up from a glorified Google query has fired the imagination, with students looking to fast-track essays without the frustrating need to actually read source materials at the forefront, and teachers looking to similarly automate marking close behind. Anyone who has suffered at the hands of a website chatbot when trying to get an answer to a vaguely complex question also saw hope of a more positive result when talking to a computer. Without doubt, it has opened up a world of potential for AI to shape how we engage with technology on a day-to-day basis as individuals, rather than remaining a mysterious black box powering systems from weather predictions to space rockets.

Inevitably, the potential impact on cybersecurity also quickly became a key conversational topic – from both sides. For attackers, there was instantly the chance to transform basic, often childlike phishing text into more professional prose, along with the chance to automate engagement with multiple potential victims trying to find their way out of the ransomware trap they’ve fallen into. For defenders, could this offer the chance to revolutionise processes such as code verification or phishing education? It’s at a very early stage, and certainly has a number of flaws (not least the ways the system can be gamed into producing malicious responses despite its built-in safeguards), but it has broadened the debate as to how AI can change the cybersecurity industry.
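To make that code-verification idea concrete, here is a minimal sketch of how a defender might ask an OpenAI chat model to review a function for flaws. It is illustrative only, not a workflow from the article: the model name, the prompts and the deliberately vulnerable snippet are all assumptions, and it requires the openai Python package (v1 or later) with an OPENAI_API_KEY set in the environment.

```python
# A minimal sketch of the "code verification" use case described above.
# Assumptions (not from the article): the model name, the prompts and the
# deliberately vulnerable snippet are illustrative. Requires the `openai`
# Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SNIPPET = """
def get_user(conn, username):
    # Deliberately vulnerable: builds SQL by string concatenation.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any available chat model works
    messages=[
        {"role": "system",
         "content": "You are a security code reviewer. List any "
                    "vulnerabilities you find and suggest fixes."},
        {"role": "user",
         "content": "Review this Python function for security issues:\n"
                    + SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

A model can both miss real flaws and report imaginary ones, so a review like this is best treated as a complement to conventional static analysis rather than a replacement.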

As a new tool, ChatGPT is fun to play with – and as Dave Barnett, Head of SASE and email security EMEA at Cloudflare, comments: “it’s a frighteningly good system - anyone who has asked ChatGPT to explain some obscure technical concept in the form of a poem will realise just how powerful it is”.

However, Dave highlights the serious concerns that have arisen. “I think the information security establishment needs to take a bit of time to consider the implications. Firstly, it used to be pretty easy to understand when we were being fooled by 419 scams, delivery payment SMSs or business email compromise, as they all tend to appear fake to humans. Could synthetic intelligence fool us? We also need to be very careful about the data protection risks – such as where the data goes, and who the controller and processor are. Humans are naturally inquisitive, so if we start talking to computers as if they are human we are highly likely to share information we probably shouldn't. Finally, could this be a way to address the skills shortage in IT? If OpenAI can even write code in a long-forgotten language it will probably be of great help”.

For Ronnie Tokazowski, Principal Threat Advisor at Cofense, the chance to be creative with the platform was to be enjoyed: “creating AI-generated rap lyrics about world peace and UFO disclosures is fun and cheeky; however, it is possible to trick the AI into giving the information that you’re looking for”.

Ensuring there are safeguards is essential with any application build, and security by design for AI will always be preferable to a post-build security wrapper. That’s clearly the aim of the developers, as “the AI’s intent is good and doesn’t want to create phishing simulations”, and asking in multiple ways (such as removing the word phish) still failed to generate one. But being creative did deliver results:

“ChatGPT even returns education about how to stay protected and to verify before using a gift card as a form of payment”. 

Ronnie concludes that “it all comes down to intent and how it’s used, as anything can be used for malintent. To share some happy thoughts in an email about the negative impacts of AI, I’m just going to ask the AI to help me close this email out”.

And with ChatGPT’s perspective… “I personally have to agree with its sentiment. Emotionally, and as a human, frankly I have no idea how I feel agreeing with a robot”.

All of the responses provided above are good, but generic, with little of the complexity and nuance we would expect from a human writer. But that’s a key challenge – it’s a blunt tool for defenders, yet for attackers that’s often all they need. Automation at scale helps them, and simple, corporate-style writing is exactly what they need to impersonate the dull, generic password reset or delivery emails that we all receive and that they want to replicate.

Ultimately, AI will be used as a weapon by bad actors and cyber criminals, and organisations need to know how to defend against it – as well as how to leverage its potential benefits, whether that’s AI or ML built into commercial products, or the power of tools like ChatGPT applied to user education, securing code or better understanding the threat.

But when I asked the tool how businesses can defend against AI, I received typically generic responses (highlighting its limits as a glorified search engine), yet it made one key point: “Use AI responsibly”. That’s the biggest takeaway we all need to remember.

This article was first published on MicroScope on 9th January 2023.