
Enterprise AI Adoption: What Risks to Be Aware Of

AI adoption is on the rise as businesses build it into their processes to improve efficiency in the workplace. While AI is undoubtedly a powerful support tool for organisations, it is crucial for the C-suite to be aware of the potential risks associated with enterprise AI adoption.

Explore our blog, where we examine the key factors that enterprises should carefully assess before adopting artificial intelligence.

What is Enterprise AI Adoption?

Enterprise AI adoption is the process of integrating AI technologies into an organisation's business processes. As AI models become increasingly powerful, businesses hope to save both time and money by adopting artificial intelligence across their organisations.

Risks of AI Adoption

We’re going to cover the following five risks of adopting artificial intelligence:

  1. AI and Data
  2. How Accurate Is the Output From AI Tools?
  3. The Potential Risk of Jailbreaking
  4. Is Plagiarism or Copyright Infringement an Issue?
  5. Is the LLM Going to Be Available Long Term?

1. AI and Data Risk

To get value from an LLM, you have to put data into it or point the tool at your organisation's data. For example, Microsoft Copilot ingests data from your Microsoft tenant and helps create all kinds of interesting slide decks and data insights.

However, this poses a significant risk: employees may gain exposure to sensitive data they shouldn't have access to. Strong data governance prior to rolling out generative AI is therefore critical.
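One practical expression of that governance principle is permission-trimmed retrieval: before an AI assistant surfaces a document, re-check that the requesting user could open it directly. The sketch below is illustrative only; `acl` is a hypothetical in-memory access-control list standing in for whatever permission system your tenant actually uses.

```python
# Hypothetical access-control list: document -> groups allowed to read it.
acl = {
    "salaries.xlsx": {"hr-team"},
    "roadmap.pptx": {"hr-team", "all-staff"},
}

def visible_docs(user_groups: set[str]) -> list[str]:
    """Only documents the user could open directly should ever reach the
    model's context -- the AI tool must not widen anyone's access."""
    return [doc for doc, groups in acl.items() if user_groups & groups]

print(visible_docs({"all-staff"}))  # ['roadmap.pptx']
print(visible_docs({"hr-team"}))    # ['salaries.xlsx', 'roadmap.pptx']
```

The key design point is that the check runs at query time with the requesting user's identity, not once at indexing time, so permission changes take effect immediately.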

Most LLM providers say your data won't be shared, but they may use it as input or feedback to improve their models or to generate outputs for other users. It is therefore important to understand the data protection and privacy policies of the LLMs your organisation uses, and to ensure that your company's data is not being used for unauthorised or harmful purposes. Last year, Samsung banned its employees from using generative AI tools after company data was accidentally leaked via ChatGPT [1].
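A common mitigation is to redact obviously sensitive values before any text leaves the organisation in a prompt. This is a minimal sketch using ad-hoc regexes; a real deployment would rely on a dedicated DLP or data-classification tool rather than patterns like these.

```python
import re

# Illustrative patterns only -- production redaction needs a proper DLP tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace sensitive values with placeholders before the text is
    sent to an external LLM API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +44 20 7946 0958"))
# Contact [EMAIL] or [PHONE]
```

Redacting at the boundary means the external provider never receives the raw values, regardless of what its retention policy says.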


2. How Accurate is the Output from AI Tools?

LLMs are trained on large amounts of data, which may contain inaccuracies, biases, or outdated information. Moreover, LLMs have been known to hallucinate or generate false or misleading information based on their internal patterns and biases.

Therefore, it is essential to verify and cross-check the information generated by LLMs, and not to rely on them blindly. ChatGPT was originally trained on data up to September 2021 and initially had no knowledge of anything that happened after that date. While its training data has since been updated, it remains important to check that the output is current and relevant.
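One cheap heuristic for cross-checking is self-consistency: ask the model the same question several times and only accept the answer if a clear majority agrees. The sketch below assumes a hypothetical `ask_llm` callable wrapping your LLM API; low-agreement answers are routed to a human instead of being trusted.

```python
from collections import Counter

def consistency_check(ask_llm, question: str, n: int = 5,
                      threshold: float = 0.6):
    """Ask the same question n times; accept the majority answer only if
    agreement meets the threshold. `ask_llm` is a hypothetical callable
    wrapping a real LLM API."""
    answers = [ask_llm(question) for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    if count / n >= threshold:
        return answer
    return None  # low agreement: route to a human for verification

# Stub standing in for a real model call:
fake = iter(["Paris", "Paris", "Lyon", "Paris", "Paris"]).__next__
print(consistency_check(lambda q: fake(), "Capital of France?"))  # Paris
```

Majority voting is no substitute for checking reliable sources, but it is a useful first filter for flagging likely hallucinations.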

Further, LLMs are trained on data that may not reflect the current state of affairs, especially in fast-changing domains such as politics, science, or technology. Therefore, it is important to update the data used by LLMs regularly, and to supplement their outputs with the latest information from reliable sources.

3. The Potential Risk of Jailbreaking

Another potential concern is "jailbreaking", a form of attack in which the user manipulates the AI model into generating output it wasn't designed to produce. In January 2024, parcel delivery firm DPD was forced to disable the AI component of its online chatbot after a frustrated customer got the tool to write a derogatory poem about DPD's customer service [2].
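A basic defence is an output guardrail that screens every reply before it reaches the customer, so even a jailbroken model cannot publish off-brand content. This sketch uses a simple keyword blocklist with illustrative terms; production systems would layer a dedicated moderation model on top.

```python
# Illustrative blocklist only -- real guardrails use moderation models.
BLOCKLIST = {"useless", "worst"}

FALLBACK = "Sorry, I can't help with that. Let me connect you to an agent."

def guard(reply: str) -> str:
    """Screen a chatbot reply before it is shown to the customer; return
    a safe fallback if it contains blocked terms."""
    words = set(reply.lower().split())
    if words & BLOCKLIST:
        return FALLBACK
    return reply

print(guard("This is the worst delivery firm"))  # returns the fallback
print(guard("Your parcel arrives tomorrow"))     # passes through
```

The important architectural point is that the check sits outside the model: the jailbreak may succeed against the model itself, but the filtered reply never leaves the system.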

4. Is Plagiarism or Copyright Infringement an Issue?

LLMs may generate text that is similar or identical to existing sources, without proper attribution or citation. Depending on how the output is used, this can open you up to claims of plagiarism. It is therefore important to use plagiarism detection tools and/or to find a way to cite the sources used by LLMs. A proven claim of plagiarism could severely damage your brand.
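A simple signal that commercial plagiarism detectors build on is n-gram overlap: what fraction of the generated text's word sequences appear verbatim in a known source. This is a minimal sketch of that idea, not a substitute for a proper detection tool.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams in the text (case-folded)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(generated: str, source: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams that appear verbatim in a
    known source -- a crude signal that passages may have been copied."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping dog"
print(round(overlap(copied, source), 2))  # 0.4
```

A high overlap score does not prove plagiarism, and a low one does not rule out paraphrased copying, but it is a cheap first-pass filter before human review.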

5. Is the LLM Going to Be Available Long Term?

We live in a fast-paced, technology-first world. Companies come and go, and not every company that has produced an LLM will be around for the long haul. Some LLMs (ChatGPT and Bing Chat, for example) will take a prominent position and remain in the general public's consciousness; others may become specialists in particular subjects.

Keep in mind that your business's use of any tool depends on external factors, such as the availability of data, continued investment in the tool, and technical support. We will quickly reach the stage where GenAI tools become business critical and need to be built into business continuity planning.

We are only at the beginning of utilising GenAI tools and the opportunity is enormous, but we must be aware of the potential risks to an organisation's cyber-security posture. One of the greatest risks of AI integration is implementing it without first having your data in order.

Speak to our expert team on how to get the right data foundations in place before you set off on implementing AI.

Find out below how we can help your organisation integrate AI securely:

Blog Author: Des Ryan, CWSI Group COO

FAQs

What Does LLM Stand For in AI?

LLM stands for Large Language Model. This is a type of artificial intelligence (AI) algorithm that uses deep learning techniques.

Which Industry Is Adopting AI the Fastest?

As of January 2024, reports stated that the technology, automotive, and aerospace industries have adopted artificial intelligence the fastest.

  1. Samsung Bans ChatGPT, Google Bard, Other Generative AI Use by Staff After Leak – Bloomberg ↩︎
  2. UK parcel firm disables AI after poetic bot goes rogue | Reuters ↩︎

Relevant Resources

White Paper

Whitepaper: A Playbook for Modernising Security Operations

Our whitepaper acts as a compass for modernising your security operations, offering actionable insights on shaping your next-generation CSOC.

Learn More

White Paper

The Directors Guide to NIS2

Read our NIS2 Directors Guide, designed to highlight the senior management consequences of non-compliance and provide you with pivotal questions to assess your compliance status.

Learn More

Our Voice

Advancements Within a Cyber Security Operations Centre 

Read our blog, which delves into the shifting landscape of CSOC security, offering insights into upcoming trends to keep you well-prepared for the year ahead.

Learn More