ChatGPT – What are the security risks?

Artificial Intelligence (AI) chatbots have captured the world’s interest. The release of ChatGPT in late 2022 and the ease of querying it provides, makes it one of the fastest growing consumer applications ever, and its popularity is leading many competitors to push out their own AI models, or to rapidly deploy those that they’ve been developing internally. As with any emerging technology, there’s always concern around what this means for security.

What is ChatGPT?

ChatGPT is an artificial intelligence chatbot developed by OpenAI, a tech firm based in the USA.  It’s based on, a language model released in 2020 that uses deep learning to produce human-like text. Like many forms of AI ChatGPT relies on vast amounts of text based data; this is gathered by trawling the internet for sources of information such as, research material, books, and social media content. With such large volumes of data, it is impossible to filter, and fact check for inaccurate or offensive content, which means a good deal of false and offensive content is included in the tool.

How does it work?

ChatGPT effectively allows users to ask the tool questions about any topic they choose, with the software providing answers, explanations, or suggestions. The program can provide example code for a python programming task or suggest song lyrics based on a band of your choosing or even weigh in on the climate crisis to name but a few examples.

Undoubtedly impressive with an ability to generate a huge range of convincing content in multiple human and computer languages.

However, these tools are not magic, and contain some serious flaws, including:

  • Getting things wrong and ‘hallucinating’ incorrect facts they can be biased, are often gullible (in responding to leading questions, for example)
  • To be efficient they require huge computer resources and vast data to train from scratch.
  • They can be coaxed into creating toxic content and repeating false information and are prone to ‘injection attacks’ – allowing the user to bypass previous instructions in a language model prompt and substitute it with a new one (essentially creating ‘fake news’)

Despite this tools like ChatGPT are largely useful and can provide helpful and detailed information. These tools are a victim of their own success with companies moving to develop similar versions which will undoubtedly be made available in the coming months. Like all sources of information on the internet, there is no substitute for a second opinion! Obtaining a second opinion is a good way to fact check and ensure the information is correct and provides further reference points when forming an academic argument or when writing a blog.

The NCSC has a set of guidelines around the use of tools like ChatGPT, these are centred around privacy and security of personal and company data. They recommend the following:

  • not to include personal, business related or sensitive information in queries.
  • not to submit queries containing information that would lead to issues were they made public.

get Clevr360

Speak to our team

Fill out the form below and we will be in touch with the next steps.