
The AI Revolution No One Saw Coming Until It Was Too Late

Jan 20, 2025


What started with the ChatGPT craze has evolved into a relentless wave of AI-powered features embedded across SaaS applications. But while AI adoption soars, governance lags woefully behind.

Sarah W. Frazier

Remember when we used ChatGPT for cybersecurity haiku, dad jokes, and other whimsical tasks? Those days feel long gone; in just two years, generative AI has exploded from a novelty into a business imperative.

What started with the ChatGPT craze is now a relentless wave of AI-powered features embedded across SaaS applications, transforming how work gets done. Gartner projects that up to 80% of software applications will integrate AI in future releases—an astonishing leap from just 5% today. GenAI isn’t just here; it’s everywhere, shaping business operations faster than many organizations can adapt.

But here’s the problem: while adoption soars, governance lags woefully behind. The Conference Board reports that 57% of organizations lack an AI policy, a huge oversight in an era when employees are embracing GenAI tools with or without IT’s blessing. And the risks are more than theoretical. According to the Oliver Wyman Forum, 84% of employees using AI openly admit to exposing company data through these tools. That’s not just a shadow AI nightmare; it’s a data security crisis waiting to happen.

For cybersecurity, the AI revolution isn’t a story about innovation; it’s a story about control, or the lack thereof. While AI increases productivity, speed, and staff capabilities, it also introduces a new attack vector, exposing companies to breaches, IP theft, and regulatory violations if left unmanaged. If your AI policies and SaaS governance aren’t evolving as quickly as your workforce’s adoption of these tools, you’re not just falling behind; you’re putting your entire organization at risk. It’s time to stop debating whether AI is a legitimate business driver and start asking whether your business is ready to drive AI usage responsibly.

Accelerated Adoption: Tools and Users

In Q3 2024, Grip analyzed anonymized data from 29 million SaaS user accounts, 1.7 million identities, and 23,987 distinct SaaS applications. The findings were unmistakable: employee usage of AI applications isn’t a temporary trend—it’s a monumental shift. AI tools are becoming ingrained in daily operations, with employees actively engaging with these tools and, in many cases, unknowingly sharing sensitive data through them.

Some of these tools don’t require payment in the traditional sense but come with an invisible price tag: your data. Employees are enticed by a free tool that looks helpful, but by using these applications, they may unknowingly provide sensitive information used to train the underlying LLM, creating potential security and privacy vulnerabilities.

The more accessible AI tools become and the more they get integrated into workflows, the more they amplify the challenges of managing sensitive data, staying compliant, and keeping usage, identities, and access under control. To ignore these risks is to invite hackers in.

This chart captures the explosive growth of standalone AI tools, excluding embedded AI features in major SaaS platforms like Microsoft, Adobe, Atlassian, and Google. In the third quarter of 2022—around the launch of ChatGPT—there were only 1,876 users of AI-specific tools. Just three months later, that number had more than doubled to 4,641, and by the first quarter of 2024, it had skyrocketed to 44,440.

The primary challenge with this growth? Smaller, niche AI apps are slipping through the cracks, undetected and unmanaged by legacy security tools. Each unmanaged shadow app adds to an organization’s expanding attack surface, introducing risks that most companies are not equipped to address. Houston, we have a problem.

The Most Widely Used AI Tools  

When it comes to AI tools, the most widely used aren’t standalone apps but features embedded within corporate-sanctioned platforms, often unnoticed and not recognized as "AI tools." Major players like Microsoft, Google, Zoom, and Adobe dominate, with provisioned usage found in 97–100% of organizations analyzed.

Beyond these giants, other AI-powered tools are also making their mark. GitHub and Canva are provisioned in 83% of organizations, followed closely by Grammarly (82%), Notion (73%), and Jasper (39%). These adoption rates demonstrate just how seamlessly AI is weaving itself into workplace ecosystems, often without the explicit awareness of the teams tasked with governing them.

ChatGPT  

ChatGPT has faced heavy criticism for security concerns, from data privacy vulnerabilities to potential leaks, prompting bans in high-profile public organizations and government agencies. Yet, despite these red flags, it was found in 96% of the organizations analyzed.

Since its launch in December 2022, ChatGPT usage has exploded, growing 24x in less than two years. Even more surprising? Despite its potential for tighter control, ChatGPT is managed at a slightly lower rate (9%) than the average SaaS application (13%). In other words, employees are using AI tools like ChatGPT faster than organizations can secure them, widening the gaps in SaaS governance and risk management.

Niche AI Apps are Hot but Remain in the Shadows  

Grip’s research also highlights a stark divide: large corporate SaaS applications with AI features, like Salesforce, Zoom, and Microsoft, are managed at higher rates, having gone through traditional procurement and security reviews. In contrast, smaller niche apps like Canva and Grammarly often fly under the radar, operating with minimal oversight despite their widespread usage.

Even more concerning is what Grip’s data reveals about the management of AI apps: 42% of popular AI tools have SAML capabilities, yet 80% of those that could be centrally managed through SAML are not. This gap suggests a troubling lack of visibility or prioritization, where security teams either aren’t aware of the apps’ AI capabilities or perceive them as low risk, tolerating them without proper controls.

The real challenge lies in the 58% of AI applications that don’t support SAML at all. These tools, often built for smaller teams or individual users, bypass identity management systems entirely. As employees take advantage of these tools to boost productivity, they also unintentionally expand their organization’s attack surface, introducing blind spots into the company’s identity security programs.
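Taken together, the report’s two figures imply that only a small fraction of popular AI tools actually sit behind centralized identity management. A quick back-of-envelope calculation, using only the percentages quoted above (the variable names are illustrative, not from the report):

```python
# Shares quoted in the report (illustrative back-of-envelope only)
saml_capable = 0.42          # popular AI tools that support SAML
unmanaged_share = 0.80       # of those, the share NOT centrally managed
no_saml = 0.58               # tools with no SAML support at all

# SAML-capable but left unmanaged, as a share of all popular AI tools
capable_but_unmanaged = saml_capable * unmanaged_share

# Tools outside centralized identity management, one way or another
outside_idm = capable_but_unmanaged + no_saml

print(f"{capable_but_unmanaged:.1%} are SAML-capable yet unmanaged")
print(f"{outside_idm:.1%} of popular AI tools sit outside central IdM")
```

On these numbers, roughly a third of popular AI tools could be governed through SAML today but aren’t, and over nine in ten are effectively outside centralized identity management altogether.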

Shadow AI isn’t just about visibility—it’s about vulnerability. Every unmanaged AI app is a potential entry point for threat actors. Without proper governance, organizations risk trading short-term efficiency gains for long-term security headaches. The question isn’t whether these tools should be adopted, but whether they can be secured before the risks outweigh the rewards.

AI App Management: "Major" SaaS apps are managed 27% of the time, while niche app management rates drop to 9%.

The Future of AI Governance Starts Today

The rapid adoption of AI tools has catapulted businesses into a new era of innovation—but it’s also ushered in a new wave of security challenges. The data is clear: employees are embracing AI faster than organizations can govern it, leaving critical gaps in visibility, control, and compliance. Additionally, the proliferation of niche apps and the lack of centralized management and identity security measures have created a perfect storm for threat actors to exploit.

Organizations can’t afford to wait; the time for action is now. Every unmanaged AI app, every unsecured interaction, and every abandoned shadow AI tool, is an open invitation to risk. To navigate the fast-growing AI landscape, companies must rethink their approach to governance, building strategies that extend the reach of traditional tools and encompass the complexities of AI-driven ecosystems. Security teams have an opportunity—not just to protect their organizations, but to lead the charge in defining how AI can be used safely, effectively, and responsibly.

AI isn’t just a tool; it’s a revolution. But without strong governance, it risks becoming a liability instead of an asset. The speed of AI adoption may have caught organizations off guard, but the future belongs to those who can embrace AI while mastering the art of securing it.

This article is an excerpt from the 2025 SaaS Security Risks Report. Access all the findings now.

Additional Resources

How Believer Mitigated Shadow AI Risks While Supporting Employee Innovation

SaaS Security in 2025: Building Strategies, Not Barriers

Shadow SaaS Assessment - meet with the Grip team to see what’s lurking in your SaaS environment.


See Grip, the leading SaaS discovery tool, live.

Gain a complete view of your SaaS usage—including shadow SaaS and rogue cloud accounts—from an identity-centric viewpoint. See how Grip can improve the security of your enterprise.
