Artificial Intelligence (AI)

AI is dominating the cybersecurity market – driving more advanced, sophisticated threats, while vendors leverage the rapidly evolving technology to fight the next generation of attacks.

At e92plus, we're bringing together some of the key vendors who are investing in AI technology, and who are at the forefront of the fight where AI is used to produce more advanced phishing attacks and malware, and to exponentially increase the scale of the assault on our networks, applications and users.

The big AI debate - the biggest talking point in cybersecurity.

We’re getting to the heart of the topic with a panel debate that includes three thought leaders on AI in cybersecurity. It’s a theme that creates huge noise and headlines, but what’s the reality? How big is the threat? How can it help us against AI-driven threats?

Joining us for this episode are three leaders in the field – Deryck Mitchelson, Field CISO at Check Point; Carl Froggett, CISO at Deep Instinct; and James Hickey, Director of Sales Engineering at Cofense – alongside our moderator, Neil Langridge from e92plus.

Our guests share their extensive experience in the banking and healthcare sectors, discuss the importance of guardrails in AI deployment, and provide insights into what the future might hold for AI in cybersecurity.

AI Now, with Microsoft 365 Copilot
Microsoft 365 Copilot — the new AI-powered productivity tool — is transforming daily workflows. However, organizations must understand that it has access to the vast amount of data that users generate and access every day across Microsoft 365. Therefore, before Copilot deployment, it’s vital to address key security vulnerabilities like inadequate data access controls, excessive permissions and unidentified sensitive data, all of which increase your risk of data breaches and compliance issues.

What might the risks look like?
  • Improper permissions — Copilot relies on the Microsoft 365 permissions already assigned, so if they’re not set up correctly, sharing of sensitive information can quickly spiral out of control, resulting in data breaches and hefty compliance fines (see the permissions-audit sketch after this list).
  • Inaccurate data classification — Copilot inherits the sensitivity labels assigned to protect data. Again, data is at risk from inaccurate labels – and data classification is often inconsistent and incomplete. For example, manual labelling is highly prone to human error, and Microsoft labelling technology is limited to specific file types.
  • Copilot-generated content — The greatest challenge comes from new documents, which don’t carry labels or categorisation. This means new documents containing sensitive data could be shared with unauthorised users, and ensuring they are properly classified is difficult given the sheer volume of content Copilot can produce.
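To make the first risk concrete, here is a minimal sketch of a pre-deployment permissions audit using the Microsoft Graph API: it walks a document library and flags files whose sharing links are organisation-wide or anonymous – exactly the files Copilot could surface too widely. The token acquisition, drive ID and the decision to check only the drive root are illustrative assumptions, not a complete governance tool.

```python
"""Minimal sketch: audit Microsoft 365 sharing before a Copilot rollout.

Assumes an Azure AD app registration with Files.Read.All permission and
an OAuth access token obtained elsewhere (e.g. via MSAL); the token and
drive ID below are hypothetical placeholders.
"""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"   # hypothetical: acquire via MSAL in practice
DRIVE_ID = "<drive-id>"    # hypothetical target document library

HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def broad_permissions(item_id: str) -> list[dict]:
    """Return permissions whose sharing link is org-wide or anonymous."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/permissions"
    perms = requests.get(url, headers=HEADERS).json().get("value", [])
    return [p for p in perms
            if p.get("link", {}).get("scope") in ("organization", "anonymous")]

def audit_drive() -> None:
    """Walk the drive root and flag items Copilot could surface too widely."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/root/children"
    for item in requests.get(url, headers=HEADERS).json().get("value", []):
        flagged = broad_permissions(item["id"])
        if flagged:
            scopes = {p["link"]["scope"] for p in flagged}
            print(f"{item['name']}: shared with {', '.join(sorted(scopes))}")

if __name__ == "__main__":
    audit_drive()
```

A real audit would also recurse into subfolders and cross-reference the results against sensitivity labels, but even this narrow check surfaces the oversharing that inflates Copilot risk.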
How artificial intelligence will power social engineering
Badly written phishing emails have always been good for a laugh – but they remain incredibly powerful and effective, as their continued use demonstrates. Yet their quality, sophistication and personalisation have continued to improve in parallel with improved security, with the best often evading multiple layers of technology. That dangerous trend has been accelerated by AI: greater automation, improved translation into other languages (so foreign attackers aren’t let down by poor English, and more countries become targets without the need for a local speaker), better deepfake technology, and the elimination of silly mistakes.

The reverse side is, of course, the ability for AI to complement and strengthen our defences. The sheer scale of phishing emails (estimated at 3.4 billion a day) means that leveraging AI delivers not just a quicker response when identifying patterns and suspicious activity, but the ability to turn the scale of the threat into a weapon – an intelligence source for the AI to learn from.
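As a toy illustration of that last point – phishing volume itself becoming training data – below is a minimal sketch of a text classifier. The tiny inline corpus and the choice of TF-IDF with logistic regression are illustrative assumptions, not any vendor’s actual detection pipeline.

```python
"""Minimal sketch: learning from phishing volume with a text classifier.

The inline samples and model choice (TF-IDF + logistic regression) are
illustrative assumptions; production systems train on millions of
messages and far richer features (headers, URLs, attachments).
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled corpus: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked, verify your password now",
    "Urgent: invoice overdue, click here to pay immediately",
    "Team meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]

# Bag-of-words pipeline: every reported phish becomes a training signal.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your password to unlock your account"]
print(model.predict_proba(suspect))  # columns: [P(legitimate), P(phishing)]
```

The point of the sketch is the feedback loop: each of the billions of daily phishing attempts, once reported, becomes another labelled example that sharpens the model.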

For cybersecurity teams, the solution is to combine AI with human intelligence. The sheer speed at which new threats arrive makes it impossible to rely on people alone, or even on known patterns, behaviours or algorithms – 88% of new threats each year have never been seen before, and 50% of all attacks bypass legacy email gateways.

The threat of AI, however, is not just confined to the potential for damaging, more advanced malware (although that’s a big concern) – it also covers privacy concerns, the use of data within otherwise legitimate processes and tools, and the interaction of users with AI-generated content such as phishing emails.
From artificial intelligence to deep learning
The potential of AI has undoubtedly gripped the public’s imagination, and that’s certainly reflected in many organisations as well – with 69% of organizations officially adopting generative AI tools. But in cybersecurity, that’s balanced against the risks AI brings.

The pace of change is arguably the biggest challenge, with bad actors creating new tools that leverage AI advances and take the technology in new directions – as well as bringing unparalleled scale to their operations. This increasingly points to an AI arms race, with cybersecurity teams placing greater focus on prevention to avoid constantly being on the back foot.

Highlights from the recent survey of global CISOs include:
  • 69% of organizations have adopted generative AI tools as part of their daily practice.
  • 70% of security professionals say generative AI is positively impacting employee productivity and collaboration.
  • 63% state the technology has improved employee morale.
All this means that organisations are looking at new ways to leverage AI and get ahead of the evolving GenAI tools that bad actors are building. Deep learning is the most advanced form of artificial intelligence. Basic machine learning (ML)-based tools either protect too much – slowing down the business and flooding your team with false positives – or lack the precision, speed and scalability to predict and prevent unknown malware and zero-day threats before they infiltrate your network.
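To make the distinction concrete, below is a minimal sketch of the deep learning approach: a small neural network trained directly on raw byte statistics of files rather than hand-engineered features. The byte-histogram input, tiny architecture and random stand-in training data are illustrative assumptions only – not Deep Instinct’s actual model.

```python
"""Minimal sketch: a deep learning malware classifier over raw byte data.

The byte-histogram input, tiny network and random training data are
illustrative assumptions; real systems train far deeper networks on
huge labelled corpora of real files.
"""
import torch
import torch.nn as nn

def byte_histogram(data: bytes) -> torch.Tensor:
    """Represent a file as a normalised 256-bin histogram of its byte values."""
    values = torch.tensor(list(data), dtype=torch.long)
    hist = torch.bincount(values, minlength=256).float()
    return hist / hist.sum().clamp(min=1)

# A small feed-forward network: benign vs malicious.
model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# Hypothetical training loop over random stand-in data.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.rand(64, 256)          # stand-in for real file histograms
y = torch.randint(0, 2, (64,))   # stand-in labels: 0 benign, 1 malicious
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Score an unseen file before it executes ("predict and prevent").
sample = byte_histogram(b"MZ\x90\x00" + bytes(1020))   # toy PE-like header
probs = torch.softmax(model(sample.unsqueeze(0)), dim=1)
print(f"P(malicious) = {probs[0, 1]:.2f}")
```

The design choice the sketch highlights is learning features from raw data rather than curating them by hand, which is what lets deep learning models generalise to never-before-seen malware.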
The importance of building an AI cybersecurity strategy
Despite the incredible pace of growth and change in AI, it’s essential that organisations build a strategy – both for defending against AI threats and for harnessing the power of AI in their own defences. Research from ESG highlights that improving basic hygiene, processes and training are bigger priorities than the most advanced AI-supported tools – showing that AI is best used to take on more basic tasks, while human understanding and intelligence remain where the greatest value lies for cybersecurity teams.

At present, many GenAI tools can call on a wealth of information and data, but their activity is bound by it – reproducing and analysing at scale, yet tied to that historical context. This approach, more linear than true AI, makes it easier to build appropriate defences, especially when threat intelligence can be collated from multiple sources.
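As a simple illustration of collating threat intelligence from multiple sources, the sketch below merges indicator-of-compromise (IOC) feeds and deduplicates them into one blocklist. The feed URLs and one-indicator-per-line format are hypothetical assumptions; real deployments typically consume STIX/TAXII feeds or a threat intelligence platform’s API.

```python
"""Minimal sketch: collating threat intelligence from multiple feeds.

The feed URLs and plain-text format are hypothetical; real deployments
typically consume STIX/TAXII feeds or a TI platform's API.
"""
import requests

# Hypothetical plain-text IOC feeds (one indicator per line, '#' comments).
FEEDS = [
    "https://feeds.example.com/phishing-domains.txt",
    "https://feeds.example.org/malware-ips.txt",
]

def fetch_indicators(url: str) -> set[str]:
    """Download one feed and return its indicators as a set."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return {line.strip() for line in resp.text.splitlines()
            if line.strip() and not line.startswith("#")}

def collate(feeds: list[str]) -> set[str]:
    """Union all feeds into one deduplicated blocklist."""
    blocklist: set[str] = set()
    for url in feeds:
        try:
            blocklist |= fetch_indicators(url)
        except requests.RequestException as exc:
            print(f"skipping {url}: {exc}")  # one feed outage shouldn't stop the rest
    return blocklist

if __name__ == "__main__":
    indicators = collate(FEEDS)
    print(f"{len(indicators)} unique indicators collated")
```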

One big opportunity around AI is the ability to develop bespoke applications, in particular Copilots. When designed for cybersecurity solutions, they help automate processes and perform common admin tasks, allowing staff to focus on higher-value activities – for example, applying rules or signatures across multiple devices and locations, deploying updates or policy changes, and using playbooks for automated responses in the event of an incident or alert.
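A minimal sketch of that last idea – pushing one rule change across many devices as a playbook step – follows. The management API endpoint, payload and device inventory are hypothetical placeholders; a real Copilot-style automation would call the vendor’s actual management API.

```python
"""Minimal sketch: a playbook step that pushes one rule to many devices.

The endpoint, payload and device inventory are hypothetical placeholders;
a real automation would call the vendor's management API.
"""
import requests

# Hypothetical device inventory and management API.
DEVICES = ["fw-london-01", "fw-leeds-01", "fw-glasgow-01"]
API = "https://mgmt.example.com/api/v1/devices/{device}/rules"
TOKEN = "<api-token>"   # hypothetical: store in a secrets manager in practice

def apply_block_rule(device: str, indicator: str) -> bool:
    """Push a single block rule to one device; report success."""
    resp = requests.post(
        API.format(device=device),
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"action": "block", "destination": indicator},
        timeout=10,
    )
    return resp.ok

def playbook_block_everywhere(indicator: str) -> None:
    """Playbook step: apply the same rule across every managed device."""
    for device in DEVICES:
        ok = apply_block_rule(device, indicator)
        print(f"{device}: {'applied' if ok else 'FAILED'}")

if __name__ == "__main__":
    playbook_block_everywhere("203.0.113.7")   # example IP from TEST-NET-3
```

Automating this kind of repetitive rollout is exactly the admin work a security Copilot takes off an analyst’s plate.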
Contact us today
We’d love to hear from you – you can get in touch by phone, in person or across many different digital channels.