The Rise and Risks of Shadow AI

Matthew.Rosenquist
2 min read · May 24, 2024


Shadow AI, the internal use of AI tools and services without the express knowledge of the enterprise oversight teams (e.g., IT, legal, cybersecurity, compliance, and privacy, just to name a few), is becoming a problem!

Workers are flocking to third-party AI services (e.g., websites like ChatGPT), and savvy technologists are also importing models and building internal AI systems (it really is not that difficult, as the sketch below shows) without telling the enterprise operations teams. Both situations are increasing, and many organizations are blind to the risks.
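To illustrate how low the barrier is, here is a minimal sketch of standing up a local text-generation model with the open-source Hugging Face transformers library. The model name, prompt, and scenario are illustrative assumptions, not a description of any specific incident:

```python
# A minimal sketch of how little code it takes to stand up an internal
# AI capability with the Hugging Face "transformers" library.
# The model and prompt below are illustrative; any public checkpoint works.
from transformers import pipeline

# One call downloads a public model to the local machine: no procurement
# ticket, no security review, no IT sign-off required.
generator = pipeline("text-generation", model="gpt2")

# From here, an employee could wire the model into internal tooling and
# feed it company data, entirely outside enterprise oversight.
result = generator("Summarize our customer escalations:", max_new_tokens=40)
print(result[0]["generated_text"])
```

A laptop and a few minutes are enough, which is exactly why the oversight teams are often the last to know.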

According to a recent Cyberhaven report:
— AI is Accelerating: corporate data input into AI tools surged by 485%
— Increased Data Risks: sensitive data submissions jumped 156%, led by customer support data
— Threats are Hidden: the majority of AI use occurs on personal accounts that lack enterprise safeguards
— Security Vulnerabilities: AI tool use increases the risk of data breaches and data exposure

The risks are real and the problem is growing.

Now is the time to get ahead of it:
1. Establish policies for AI use, development, and deployment
2. Define and communicate an AI ethics posture
3. Incorporate cybersecurity, privacy, and compliance teams early into such programs
4. Drive awareness and compliance by including these AI topics in employee and vendor training

Overall, the goal is to build awareness and collaboration. Leveraging AI can bring tremendous benefits, but it should be done in a controlled way that aligns with enterprise oversight requirements.

“Do what is great, while it is small” — A little effort now can help avoid serious mishaps in the future!

Written by Matthew.Rosenquist

CISO and cybersecurity strategist specializing in the evolution of threats, opportunities, and risks in pursuit of optimal security
