We repeatedly hear stories about personal and corporate data being leaked on the web through the unsanctioned use of consumer-grade AI tools like Grok, ChatGPT, and others. A report last week revealed how a popular AI app, “Ask AI”, leaked huge amounts of personal data from people’s phones.
Last year brought another incident, this time involving X’s Grok, with hundreds of thousands of Grok chats exposed in Google search results.
These were not the result of sophisticated cyberattacks or hacks. Instead, they highlight a risk that most organisations face: it’s a real‑world example of what happens when your organisation allows employees to use free consumer AI tools (and some paid ones too) that sit completely outside your organisation’s governance and control. This is the real threat of “Shadow AI”.
Just think about the types of data we work with every day that AI is often used to review, summarise, or create. Depending on the person, the task, and your business, this could be:
- Patient or clinical information
- Personal data on children at school
- Client records and contracts
- Financial statements
- Internal strategy and board material
- Product IP and R&D
- HR and staff data
- Source code and architecture diagrams
If any of this is used in or uploaded to an ungoverned or consumer AI tool, your business loses visibility, control, and legal defensibility. Worse, that data may be used to train the vendor’s AI models, as stated in the T&Cs that we, of course, do not read.
Once it’s shared:
- You can’t enforce retention
- You can’t enforce deletion
- You can’t enforce access boundaries
- And you certainly can’t stop it being indexed, leaked, or scraped
These incidents happened because “free” consumer AI tools are not designed for commercial or enterprise‑grade security, governance, or compliance. These tools make their money (much like Google advertising) from how you use them and the data you share with them. When a consumer product is free, YOU (your users) are the product. Free consumer AI tools digest, harvest, and leverage the data shared with them to learn, to train their models, and to sell on to data brokers.
If you allow the use of consumer AI tools at work, or for work, without proper data protection and governance in place, you are likely letting employees leak sensitive information to the masses.
How we stop Data Leakage from AI Tools
According to data from leading security vendors, including Microsoft and Cisco, around 75% of employees are already using AI at work, often without approval, training, or an understanding of the risks. Shadow AI is a security, trust, and behavioural problem all at once, but fortunately the fix is much simpler than most organisations believe:
- Give people safe tools so they don’t reach for unsafe ones: If you don’t provide governed AI (like Copilot Chat), your employees will find their own.
- Govern the use of Shadow AI tools: You cannot secure what you cannot see. Discovery, monitoring, and policy enforcement are essential, and tools such as Cisco AI Defense and Microsoft Purview can help with this.
- Define and enforce a central AI Usage Policy: This ensures clear rules, clear boundaries, and clear accountability.
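To make the discovery step above concrete, here is a simplified, illustrative sketch of what “you cannot secure what you cannot see” looks like in practice: scanning a web proxy log export for traffic to known consumer AI domains. The domain watchlist and log format are assumptions for the example; a real deployment would use purpose-built tooling rather than a script like this.

```python
from collections import Counter

# Hypothetical watchlist of consumer AI domains. In practice this list
# would come from your discovery tooling, not be hand-maintained.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "grok.com",
    "claude.ai", "gemini.google.com",
}

def find_shadow_ai(log_rows):
    """Count hits per (user, domain) for domains on the AI watchlist.

    Assumes each row is a dict with 'user' and 'domain' keys, as you
    might get from a CSV export of a web proxy or firewall.
    """
    hits = Counter()
    for row in log_rows:
        domain = row["domain"].lower()
        if domain in AI_DOMAINS:
            hits[(row["user"], domain)] += 1
    return hits

# Example rows as they might appear in a proxy log export
rows = [
    {"user": "alice", "domain": "chatgpt.com"},
    {"user": "alice", "domain": "chatgpt.com"},
    {"user": "bob", "domain": "intranet.example.com"},
]
print(find_shadow_ai(rows))  # -> Counter({('alice', 'chatgpt.com'): 2})
```

Even this toy version shows why discovery matters: until you can see who is sending what to which tool, no policy can be enforced.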
For Microsoft 365 customers, enabling Microsoft Copilot Chat for everyone is a logical and sensible starting point:
- Secure by design
- Integrated into your workflow
- Covered by your existing compliance and identity model
- Zero data retention outside your tenant with Enterprise Data Protection
- Included within your Microsoft 365 Subscription
Then look to layer tools like DSPM for AI in Microsoft Purview, apply sensitivity labels, and enforce AI‑aware DLP policies to ensure:
- Data stays where it should
- AI interactions respect classification
- Sensitive content is protected automatically
- You maintain full auditability and control
If you are not a Microsoft shop, or don’t have the advanced E5 security licences, then look at third‑party tools such as Cisco AI Defense, which can help you detect, control, and govern the use of AI across your organisation, for SaaS AI tools as well as those built internally.
In short, deploying and enabling AI in your business is also about mitigating Shadow AI, and about putting the right controls in place to stop sensitive data being shared with ungoverned tools.
This is what “enterprise AI” actually means: Governed. Integrated. Secure. Accountable.
