The Double-Edged Sword of AI in Cyber Security: The Attacker and the Defender

As with any new technology, AI in cyber security is a double-edged sword: it presents both opportunities and risks. It’s an interesting and worrying scenario. On one hand, it has become a productivity tool that helps everyone create content, write code, summarise meetings, analyse data and do everything faster than before.

But with this efficiency comes a number of risks.

The risk of data exposure

Whenever you use an AI tool, you’re inevitably sharing data with a third party. That data is often stored on servers, processed, and potentially available for future use. If those servers are compromised, your data, no matter how sensitive, could be exposed.

This is exactly what happened earlier this year, when OpenAI investigated suspected hacks that may have exposed sensitive user data.

Whether you’re using tools like ChatGPT or Microsoft Copilot, there’s always an inherent risk. Even with Microsoft’s robust security, data sharing increases the likelihood of vulnerabilities.

Then there’s the risk of adopting AI without the right guardrails in place. Without proper access controls and sensitivity labels, anyone can prompt an AI tool like Copilot into retrieving confidential information, including financial data, salary details, personal information and more.

Without safety measures, AI won’t know who should see what and can expose information to people who shouldn’t have access.
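To illustrate the idea (this is a minimal sketch with hypothetical names, not a real Copilot or Microsoft Purview API), a guardrail of this kind boils down to checking the user’s clearance against each document’s sensitivity label before the AI tool is allowed to use it as context:

```python
# Minimal sketch of a sensitivity-label guardrail (hypothetical names):
# documents carry labels, users carry clearances, and the assistant only
# ever sees documents the requesting user is allowed to read.

SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def can_read(user_clearance: str, doc_label: str) -> bool:
    """A user may read a document only if their clearance ranks at or
    above the document's sensitivity label."""
    return SENSITIVITY_ORDER.index(user_clearance) >= SENSITIVITY_ORDER.index(doc_label)

def retrieve_context(user_clearance: str, documents: list[dict]) -> list[dict]:
    """Filter retrieval results BEFORE they ever reach the language model."""
    return [d for d in documents if can_read(user_clearance, d["label"])]

docs = [
    {"name": "staff-handbook.pdf", "label": "internal"},
    {"name": "salary-review.xlsx", "label": "restricted"},
]

# An 'internal' user asking about pay should never see the salary sheet.
visible = retrieve_context("internal", docs)
print([d["name"] for d in visible])  # ['staff-handbook.pdf']
```

The key design point is that the filter runs before retrieval results reach the model: once confidential text is in the prompt, no amount of output filtering can reliably keep it from leaking.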

AI-powered phishing: Smarter, slicker and more dangerous

Another terrifying possibility is cyber criminals using AI to make their own jobs easier too.

The use of AI in hacking attempts is not new. State-sponsored threat actors and advanced cybercriminals have been using AI to create malware and to support their ransomware attacks for some time now.

But generative AI (GenAI) has democratised it to the extent that even novice threat actors can vastly improve their game, as the National Cyber Security Centre (NCSC) points out.

The days of phishing emails riddled with bad grammar and spelling are probably over. With Large Language Models (LLMs) and GenAI, hackers now have access to systems that help them write more convincing emails that are harder to detect, whatever your cyber security proficiency.

Not just that: with AI, hackers can also use publicly available information to craft highly personalised messages targeting individuals, faster than ever.

This lowers the barrier to entry for criminals with limited resources, increasing the volume and success rates of phishing attacks.

In fact, research from Harvard Business Review found that 60% of participants fell victim to AI-generated phishing emails.

Even though this is comparable to the success rate of human-written phishing emails, what’s worrying is how the entire process can be automated using LLMs, reducing the cost of phishing attacks by 95%. Because of this, experts expect phishing to increase drastically in both quality and quantity over the coming years.

Growing concerns: AI-generated malware and deep fakes

There’s also a rising concern about cybercriminals potentially creating malware using AI that could evade detection by current security filters. 

While apps like ChatGPT have restrictions in place to prevent misuse, more avenues are opening up for fraudsters to turn to LLMs that lack such safeguards, such as WormGPT and FraudGPT.

Add to this the use of deep fakes, and the life of security teams only gets harder.

Back in 2019, hackers imitated the voice of a CEO to fraudulently transfer £220,000 from a UK-based energy firm. With advancements in AI, impersonation using fake AI voices and videos is only getting more sophisticated, making it increasingly hard to tell what’s real and what’s not.

Fighting AI with AI: What’s the solution?

Security has always been a cat-and-mouse game. Hackers exploit vulnerabilities in our systems and we work to patch them. They find a way to break down your door and you get a bigger lock. Over time, though, the cyber world has become more proactive: the aim now is to stop hackers before they have a chance to get to us.

As attacks get more sophisticated, security teams will have to look for new ways to detect and remediate threats faster.

Many security tools have already started to integrate AI, such as Copilot in Microsoft Defender, to enrich their capabilities, detect threats faster, reduce false positives and spot unusual activity that humans would otherwise miss.

For example, Microsoft Copilot for Security can summarise the key events of an incident, confirm whether it is a true positive, provide resolution steps, analyse malware scripts and take remediation actions, all in a few minutes.

IBM’s Cost of a Data Breach Report 2024 found that organisations using AI and automation in their security tools incur breach costs that are, on average, £1.06 million lower than organisations that aren’t, so AI-powered security pays off.

So, is AI in cyber security the ultimate solution?

Not entirely. The key to cyber resilience lies in increasing awareness among employees and instilling a culture of vigilance in your organisation. Security frameworks like Zero Trust, where nothing and no one is trusted by default, can help. The mantra should always be to verify first.

Employees need to be trained to double-check and independently verify communications before acting on them, even if they seem legitimate.

Cybersecurity experts at CloudClevr say continuous innovation is the way ahead.

The AI landscape is evolving so much that it’s an ever-growing challenge for organisations to maintain their cyber resilience. Businesses will need to find more secure ways of verifying their clients and suppliers as the threat actors become more sophisticated.

Multi-factor authentication, and dual authorisation for changes to important information such as bank account details, are steps in the right direction. At the end of the day, continuous vigilance and innovation will be key to staying ahead of these threats.

For instance, one of our clients recently fell victim to a social engineering scam in which threat actors sent requests to change their bank account information.

Had there been procedures in place to verify the authenticity of the request, such as contacting the bank directly, the scam could have been prevented.
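As a sketch of what such a procedure might look like in code (the names here are illustrative, not any real banking or finance system’s API), dual authorisation means a bank-detail change is held as a pending request and only applied once a second, different person signs off:

```python
# Minimal sketch of dual authorisation (hypothetical, illustrative names):
# a change to bank details is only applied after a second, different
# person approves it, so a single compromised account or convincing
# email is never enough on its own.

class PendingChange:
    def __init__(self, requested_by: str, new_details: str):
        self.requested_by = requested_by
        self.new_details = new_details
        self.approved_by = None

    def approve(self, approver: str) -> bool:
        """Approval must come from someone other than the requester."""
        if approver == self.requested_by:
            return False  # self-approval is rejected
        self.approved_by = approver
        return True

    @property
    def applied(self) -> bool:
        """The change takes effect only once a second person has approved."""
        return self.approved_by is not None

change = PendingChange(requested_by="alice", new_details="sort 00-00-00, acct 12345678")
assert not change.approve("alice")  # requester cannot approve their own change
change.approve("bob")               # a second person must sign off
print("applied:", change.applied)   # applied: True
```

In practice the second check should also happen out of band, for example phoning the supplier or the bank on a known number rather than replying to the email that requested the change.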

Ultimately, while AI advancements will continue to transform both attacks and defences in security, it’s a mindset change—alongside a culture of careful, thoughtful verification—that will protect us in this new era.
