In recent news, a high-stakes legal battle has unfolded between two technology giants—Rippling and Deel. Rippling has accused Deel of orchestrating corporate espionage by planting a mole within their organisation, accessing confidential customer data, and spying on internal systems. The case underscores a growing issue many organisations face: insider risks. Whether intentional or accidental, insider threats pose a significant danger to businesses, and as we’ve seen, the consequences can be far-reaching.
In this blog, Paul Conaty, Secure Data Practice Lead at CWSI, explores the growing risks posed by insider threats, the role of AI in exacerbating these challenges, and the essential controls businesses must implement to protect their sensitive data and maintain a competitive edge.
The Insider Risk Problem
Insider risks, including corporate espionage, data theft, and sabotage, often come from employees, contractors, or partners who have legitimate access to an organisation’s sensitive data and systems. In the case of Rippling vs. Deel, the alleged mole gained access to proprietary information about pricing strategies, client relationships, and competitive sales tactics. This kind of insider threat can severely impact a company’s competitive edge and damage its reputation.

However, a new layer of complexity is emerging in the world of corporate espionage and insider risks. As Artificial Intelligence (AI) tools such as Microsoft Copilot and ChatGPT become increasingly integrated into workplace environments, they can exacerbate these risks. AI systems can automate tasks, summarise information, and surface data with remarkable speed and accuracy; when that data is not properly protected, the same capability makes sensitive information easier than ever to find and extract.
The Role of AI in Exacerbating Insider Risks
While AI tools promise increased productivity and efficiency, they can also pose significant risks if not properly controlled. AI can inadvertently expose sensitive data by making it easier for employees or malicious insiders to surface confidential information. For instance, tools such as Microsoft Copilot or ChatGPT can access vast amounts of organisational data and, depending on how they are configured, might generate reports or insights that unintentionally reveal proprietary, sensitive, or classified information.
A simple prompt to a generative AI tool asking for a summary of recent sales data could expose competitive pricing strategies, customer negotiations, or confidential meeting notes if proper data governance measures aren't in place. If AI systems can reach broad internal data stores and lack proper data protections, they can retrieve, present, or even share confidential information, with neither the tool nor the user necessarily recognising the risk.
For example, in the context of the Rippling vs. Deel case, imagine an insider using a generative AI tool to help summarise competitive intelligence, pulling from a variety of internal documents, chat logs, and emails. Without adequate safeguards, AI could inadvertently surface sensitive information, putting the company’s competitive advantage at risk.
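To make the failure mode concrete, here is a minimal sketch, in Python, of how a retrieval-backed assistant overshares when retrieval ignores the requester's permissions, and how a per-user entitlement check closes the gap. The corpus, role names, and helper functions are hypothetical illustrations, not a depiction of how any particular product works:

```python
# Hypothetical sketch: an AI assistant with broad data access overshares;
# filtering retrieval by the requester's entitlements closes the gap.
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    body: str
    allowed_roles: set  # roles permitted to read this document

CORPUS = [
    Doc("Q3 price book", "Enterprise tier floor price ...", {"sales-leadership"}),
    Doc("Canteen menu", "Wednesday: soup and sandwiches.", {"everyone"}),
]

def retrieve(query: str) -> list:
    # Naive keyword retrieval: matches content, ignores who is asking.
    words = query.lower().split()
    return [d for d in CORPUS
            if any(w in d.title.lower() or w in d.body.lower() for w in words)]

def answer_unguarded(query: str) -> str:
    # Whatever the index can see, the model can summarise. An insider's
    # prompt ("summarise our pricing strategy") surfaces restricted files.
    return " | ".join(d.title for d in retrieve(query)) or "(nothing found)"

def answer_guarded(query: str, user_roles: set) -> str:
    # Same retrieval, but documents are filtered to what the *requesting
    # user* may read before anything reaches the model.
    visible = [d for d in retrieve(query)
               if d.allowed_roles & (user_roles | {"everyone"})]
    return " | ".join(d.title for d in visible) or "(no accessible sources)"

print(answer_unguarded("price"))                      # leaks "Q3 price book"
print(answer_guarded("price", user_roles={"intern"})) # (no accessible sources)
```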
Why Insider Risk Controls Are Crucial
With the rise of remote work, digital transformation, and AI-powered tools, the attack surface for insider threats has expanded dramatically. Employees are no longer confined to corporate offices, and data is accessed from multiple devices, platforms, and AI systems. This shift makes detecting and preventing insider threats more complex but no less critical. The unfortunate reality is that even a single rogue employee or contractor can cause immense harm—just as a mole can compromise a company’s valuable trade secrets.

Microsoft Purview: A Key Solution for Insider Risk Mitigation
One effective way to mitigate the risks posed by insider threats, particularly in an environment increasingly influenced by AI, is to implement comprehensive insider risk management solutions. Microsoft Purview Insider Risk Management and Communication Compliance provide powerful controls to help organisations detect, investigate, and prevent harmful insider activities before they escalate. These tools are specifically designed to help mitigate both human-driven and AI-driven risks.
Here’s how these tools can help:
- Proactive Monitoring with AI Integration: Microsoft Purview Insider Risk Management helps detect potential risks based on patterns of behaviour, both human and AI-driven, that could indicate malicious intent. For example, the system can flag when AI tools like Copilot or ChatGPT are queried in ways that could expose sensitive data. It can also monitor for actions like unauthorised data access, exfiltration attempts, or unusual system searches, similar to the behaviours seen in the Rippling vs. Deel case.
- Behavioural Analytics for AI and Human Activity: Microsoft Purview uses machine learning to identify both human and AI-driven anomalies in real time. If an employee queries AI tools for sensitive data or tries to share confidential information, Purview can flag these activities as potentially risky. This helps prevent the unintended exposure or oversharing of sensitive data through AI tools that lack the context to recognise the data as sensitive.
- Communication Compliance to Manage AI Usage: Purview Communication Compliance allows companies to monitor internal communications and the use of AI tools, such as Copilot or ChatGPT, within enterprise systems. This is especially useful for detecting covert communications between insiders and external parties or AI-generated outputs that might inadvertently share sensitive data. For example, AI tools that generate documents or emails can be monitored to ensure that they are not inadvertently leaking proprietary data or violating compliance rules.
- Automated Response to AI-Driven Risks: Microsoft Purview can trigger automated workflows to isolate access or halt the use of certain AI tools when risky behaviour is detected. If an AI tool surfaces confidential data in a way that violates company policies, the system can immediately take corrective action, such as restricting access to the tool or alerting security teams to investigate further (a simplified sketch of this response pattern follows the list).
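As a rough illustration of this automated-response pattern, the sketch below wires an alert feed to a containment action. The functions fetch_alerts, revoke_ai_access, and notify_security are hypothetical placeholders for whatever alerting, identity, and ticketing integrations an organisation actually runs; they are not Purview APIs:

```python
# Hypothetical sketch of an automated response loop for AI-related
# insider-risk alerts. All three helper functions are placeholders for
# real alerting, identity, and ticketing integrations.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2}
CONTAINMENT_THRESHOLD = "high"

def fetch_alerts() -> list:
    """Placeholder: pull open insider-risk alerts from your alert source."""
    return [{"id": "a-001", "user": "j.doe", "severity": "high",
             "category": "risky-ai-usage"}]

def revoke_ai_access(user: str) -> None:
    """Placeholder: remove the user from the group that grants AI tooling."""
    print(f"[action] AI tooling access suspended for {user}")

def notify_security(alert: dict) -> None:
    """Placeholder: open an incident for the security team to triage."""
    print(f"[notify] incident opened for alert {alert['id']} ({alert['category']})")

def respond(alert: dict) -> None:
    # Contain first, then investigate: high-severity alerts trigger the
    # "isolate access or halt the tool" behaviour described above.
    if SEVERITY_ORDER[alert["severity"]] >= SEVERITY_ORDER[CONTAINMENT_THRESHOLD]:
        revoke_ai_access(alert["user"])
    notify_security(alert)

for alert in fetch_alerts():
    respond(alert)
```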
Tips and Best Practices for Security and Compliance Professionals
- Implement AI-Specific Data Protection Policies: As AI tools become more prevalent, it's critical to establish clear guidelines around what data can and cannot be shared with AI systems. Leverage Microsoft Purview to create policies that restrict AI tools from accessing highly sensitive data or automatically flag potentially risky queries.
- Monitor AI-Generated Outputs for Oversharing: Set up systems to automatically review AI-generated content for any signs of oversharing. Whether it's a document generated by Copilot or a chat conversation with a virtual assistant like ChatGPT, use tools like Microsoft Purview to scan and block the sharing of confidential or sensitive information (a simplified sketch of such a gate follows this list).
- Enforce Least Privilege for AI Tools: Just as you would with human users, apply the principle of least privilege to AI tools. Limit their access to only the data and systems necessary for their tasks. By preventing AI tools from accessing unnecessary data, you reduce the risk of inadvertently exposing sensitive information.
- Train Employees and Contractors on AI Risks: Ensure that all employees and contractors understand the potential risks of using AI tools in the workplace. They should be aware of what data is sensitive and how to use AI responsibly. Encourage them to follow security best practices when interacting with AI systems.
- Use AI-Specific Monitoring and Alerts: Leverage the AI-specific monitoring capabilities in Microsoft Purview to detect unusual behaviour and prevent the misuse of AI tools. Set up alerts for potentially dangerous actions, such as AI querying for unauthorised data or generating reports that could inadvertently share confidential information.
- Regularly Audit AI Systems and Communication Channels: Conduct regular audits of AI system activities and communication compliance to ensure that AI tools are not exposing sensitive information. Use Purview’s auditing features to review how data is being accessed, used, and shared within AI systems, and adjust policies as needed.
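To illustrate the "scan AI output before it leaves" tip above, here is a minimal Python sketch of a block-or-pass gate. The regex detectors are toy examples; a real deployment would rely on a proper DLP and classification engine (such as the sensitive information types behind Microsoft Purview) rather than hand-rolled patterns:

```python
# Hypothetical sketch: gate AI-generated output through a content scan
# before it is returned or shared. The detectors below are illustrative
# toys, not a substitute for a real DLP engine.
import re

SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality_marker": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def scan_output(text: str) -> list:
    """Return the names of every detector that fires on the text."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

def release(text: str) -> str:
    """Block-or-pass gate between the AI tool and the recipient."""
    hits = scan_output(text)
    if hits:
        # Blocking outright is the conservative choice; redacting only the
        # matched spans is an alternative when partial content is useful.
        return f"[blocked: matched {', '.join(hits)}; routed for review]"
    return text

print(release("Summary: Wednesday's canteen menu is soup."))
print(release("CONFIDENTIAL: enterprise floor price is 40% below list."))
```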
Conclusion
As the case between Rippling and Deel demonstrates, insider risks like corporate espionage can have devastating effects on an organisation. With the rise of AI tools that can surface and share data quickly, the potential for accidental or malicious data exposure has grown. However, with robust insider risk management solutions, including those provided by Microsoft Purview, companies can proactively monitor for suspicious activities, manage AI usage, and prevent the unintended sharing of sensitive data.
By integrating these technologies with clear policies, continuous monitoring, and employee education, security and compliance professionals can build a strong defence against both human and AI-driven insider threats. While it’s impossible to eliminate all risks, taking these proactive steps can significantly reduce the likelihood of an insider breach, helping your organisation stay secure in an increasingly AI-driven world.
How CWSI Can Help
As a Microsoft Solutions Partner and proud member of the Microsoft Intelligent Security Association (MISA), CWSI brings deep expertise in mitigating internal security risks and ensuring the secure adoption of AI. With a Microsoft Specialisation in Information Protection and Governance, we help businesses safeguard critical data and maintain regulatory compliance.
Contact us today for a no-obligation consultation and discover how we can enhance your security posture.
Author: Paul Conaty
Paul Conaty is Secure Data Practice Lead at CWSI, one of Europe’s leading mobile and cloud security specialists. With over 20 years of experience across engineering, technical, and management roles, he provides strategic and tactical security guidance to organisations in Ireland and globally.
A recognised thought leader in cybersecurity, governance, and compliance, Paul advises public and private sector organisations on mitigating cyber risks. He offers practical strategies for protecting company data, from identifying vulnerabilities and strengthening IT infrastructure to implementing Multi-Factor Authentication and training employees on phishing threats. He is also an Ambassador for the GDPR Awareness Coalition, advocating for greater awareness of data privacy obligations under GDPR.
