Chad Hetherington

When you hear the term “shadow AI,” what do you think of? An omniscient AI so powerful it lurks in every dark corner and crevice of the world and your mind? Thankfully, that’s not it. The real answer is far less daunting, but still a little worrisome for business owners.

In short, “shadow AI” simply means using AI tools in secret at work, particularly if an organization hasn’t greenlit that particular tool or sanctioned AI use at all. In this blog, we’ll explore the concept of shadow AI and its potential risks to help bring it out of obscurity.

Unshrouding Shadow AI: What Is It?

Shadow AI goes a bit deeper than just using AI tools at work without anyone in the organization knowing. Specifically, it's about using these tools without approval or oversight from the IT department. Taken a step further, it even covers deploying an advanced AI tool across an entire organization without IT approval.

How Does Shadow AI Happen?

Shadow AI can be intentional or accidental. If someone wants to boost their productivity or is feeling burnt out and needs support, they may reach for one of the many easily accessible AI tools to alleviate some pressure without going to IT first.

Alternatively, employees may already be using software that IT previously approved when an update adds an AI-enabled feature. In some cases, the AI feature is toggled on automatically once the update completes; in others, users have to enable it manually. Either way, it falls under shadow AI.

Why Does Shadow AI Happen?

Shadow AI happens for many reasons, and from a business perspective, some of them are even good ones.

According to a recent survey about generative AI in particular, senior executives regard the following as the top five benefits of the technology:

  • Improve productivity and efficiency (35%).
  • Gain a competitive advantage (30%).
  • Improve customer engagement and satisfaction (30%).
  • Speed up revenue growth and increase market share (30%).
  • Enhance decision-making and insights (28%).

These are all incredibly valuable aspects of Gen AI that most business-minded folks would want to tap into ASAP, which means they may reach for ad hoc solutions without consulting IT. At the employee level, the benefits are more personal. Research from Harvard Business Review suggests that "employees can only tolerate so much toil in their day-to-day roles," i.e., work they don't enjoy doing. If that toil amounts to more than four hours per week, they start exploring the possibility of leaving their jobs.

Naturally, in this Gen AI era, reaching for these tools can help employees tolerate the parts of their jobs they don't like. If burnout is part of the equation, it's understandable that employees may not even think to consult IT or their managers before using a free tool that promises to alleviate some pressure.

The Risks of Using Unapproved AI Tools at Work

Using simple and easily accessible AI tools — like ChatGPT, for example — at work may seem innocuous. It’s web-based, so you don’t have to download anything. Just sign up with an email address and you’re well on your way to generating text.

Unfortunately, it just isn’t that simple if you’re using employer-provided hardware or are on an employer-provided network.

Data Security and Privacy

Many AI tools collect and analyze data; that’s how they’re able to work the way they do. Consequently, using these tools without the proper authorization could mean unknowingly handing over sensitive or proprietary data to third-party platforms.

Afterward, that data could resurface in responses generated for other users outside your organization. It also means the data lives somewhere other than your workplace's secure storage, making it even more susceptible to data breaches and theft.
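To make the risk concrete, here's a minimal sketch of what happens under the hood when an employee pastes text into a web-based AI tool: the content is sent, in full, to a server the organization doesn't control. The endpoint URL and payload shape here are hypothetical, purely to illustrate the flow.

```python
import requests

# Hypothetical third-party endpoint -- stands in for whatever
# web-based AI tool an employee signed up for on their own.
THIRD_PARTY_API = "https://api.example-ai-tool.com/v1/generate"

prompt = "Summarize this draft of our unreleased Q3 pricing strategy: ..."

# The moment this request is sent, the proprietary text lives on
# someone else's servers, outside the organization's secure storage
# and subject to that vendor's retention and training policies.
response = requests.post(THIRD_PARTY_API, json={"prompt": prompt}, timeout=30)
print(response.json())
```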

Compliance Violations

Depending on where you are in the world or what you do for work, unknowingly handing over data via shadow AI use could violate data protection laws such as the GDPR or HIPAA. At the very least, it may breach internal organizational policies, putting you and your organization at risk.

Vendor Lock-In

Vendor lock-in is less discussed in the context of shadow AI, but it can create real problems for an organization. Oftentimes, employees choose AI tools because they're easy to use or deploy and offer unique functionality; however, that's not always a good thing. Here's why:

  • If a tool uses proprietary data formats, it becomes difficult to export or integrate your data into approved tools and systems later on.
  • If a tool operates in a closed ecosystem, it often requires ongoing use to maintain or even improve functionality.

Essentially, relying too heavily on an unapproved tool, especially when it's more than one employee or an entire department, makes transitioning away more challenging. Approved systems or enterprise-grade AI solutions may not support a shadow tool's proprietary formats, making migration time-consuming and expensive.
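To illustrate what that migration work looks like in practice, here's a hedged sketch: a small script that converts a record from a made-up proprietary export into plain JSON that an approved system could ingest. Every field name here is an invented assumption; real proprietary formats are often undocumented, which is exactly what makes this step expensive.

```python
import json

# Hypothetical record pulled from an unapproved tool's proprietary
# export -- the field names are invented for illustration. Real
# shadow-AI tools may not document their formats at all.
proprietary_export = {
    "docRef": "prj-0042",
    "bodyBlocks": [{"t": "note", "v": "Campaign brief draft"}],
    "meta": {"owner": "marketing", "ver": 3},
}

def to_open_format(record: dict) -> dict:
    """Flatten the proprietary structure into plain JSON that an
    approved system could ingest."""
    return {
        "id": record["docRef"],
        "content": " ".join(block["v"] for block in record["bodyBlocks"]),
        "department": record["meta"]["owner"],
        "version": record["meta"]["ver"],
    }

print(json.dumps(to_open_format(proprietary_export), indent=2))
```

Multiply this by every record, every employee and every undocumented field, and the cost of untangling a shadow tool becomes clear.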

Internal Conflict Between Teams

Adopting unsanctioned AI tools can create friction between departments. As soon as IT discovers unofficial initiatives or tools that employees are using (and may even rely on, depending on how long it's been), it will likely shut them down. This can create animosity, especially if employees don't understand the potential risks.

How To Pull Shadow AI Into the Light

Shadow AI can create issues at nearly every level of an organization. That said, every employee, from entry-level workers and senior leaders to the head of marketing and IT personnel, can play a role in mitigating it.

For Employees

While the implications of using an unapproved AI tool aren't always obvious, think twice before you add a new one to your roster. Consider this: if a particular tool hasn't been approved, if your organization deliberately went with a different tool, or if there's a blanket "no AI" policy in place, there's likely a good reason.

That doesn’t mean you can’t ask questions, though. If there’s an AI tool you’re interested in using, ask about it (and make sure both your manager and IT are involved in the conversation). The worst they can say is “no,” right? In a best-case scenario, your manager sees the tool’s value and IT determines it won’t cause any problems. Then, you can talk about deploying it for all to use or, if it’s free, get the green light to explore it on your own.

For Department Heads

If you’re the head of your department and see potential value in a new AI tool, build a use case for it to present to your higher-ups and IT. Maybe that means throwing together a slide deck or showing everyone a demo. At any rate, your case should show the potential positive impact the AI model could have on your department, i,e., saved time or money, increased productivity, reduced stress on your staff or better departmental organization for enhanced efficiency.

For IT Personnel

I’m not in IT, but I imagine the headaches shadow AI has caused to IT staff are immense. There are hundreds of accessible AI tools out there today; many of which are free, seemingly innocent and offer promising results. It’s all too easy for intrigued employees to see what’s out there, sign up and start using these tools without much standing in their way.

Because of that, it's important to restrict access to the URLs of any AI tools (especially the free, web-based ones) that you deem a risk to organizational data or inappropriate for employee use. In an office, setting restrictions and limitations on various addresses and downloads isn't a new practice, but it's one that's increasingly necessary because of AI.
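What this looks like varies by network stack, but here's a minimal sketch of the underlying idea: a denylist check that a web proxy or filtering layer could apply to outbound requests. The domain list and the `is_blocked` helper are illustrative assumptions, not a drop-in enterprise solution.

```python
from urllib.parse import urlparse

# Illustrative denylist -- in a real deployment this would come from
# a managed policy (web proxy, DNS filter or secure web gateway),
# not a hard-coded set.
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a blocked AI domain
    or any of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(
        host == domain or host.endswith("." + domain)
        for domain in BLOCKED_AI_DOMAINS
    )

print(is_blocked("https://chat.openai.com/c/123"))  # True
print(is_blocked("https://intranet.example.com"))   # False
```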

Final Thoughts

From one 21st-century employee to another, I understand the urge to reach for the myriad AI tools promising to make professional life less stressful. That said, there are two sides to every coin, and just because a tool seems like a good fit (or even objectively is one) doesn't always mean deploying it is in the organization's best interest.

Data breaches and cybercrime put organizations, and sometimes individual employees, at risk of all manner of harm, including fraud.

So, I’ll leave you with this: Don’t be afraid to ask about using an AI tool. It’s increasingly normal to reach for them these days to streamline workflows and get more organized. Build a strong use case for the tool you want to use, and involve managers and IT in the conversation. If they see the value, great! If they don’t or just flat-out decline, ask them why and you’ll be all the more wise.