In the last year, the race to implement Generative AI (GenAI) has moved from a sprint to a stampede. But according to a groundbreaking global study released by OpenText and the Ponemon Institute, most enterprises are charging ahead without a security foundation beneath them.
The report, which surveyed nearly 1,900 IT and security practitioners, delivers a sobering wake-up call: while 52% of enterprises have already deployed GenAI, a staggering 79% lack full “AI maturity” in their cybersecurity programs.
In other words: we are rushing into the future of productivity while leaving the door to the data center wide open.
Key Statistics at a Glance:
- 79%: Organizations lacking full AI cybersecurity maturity.
- 41%: Organizations with AI-specific privacy policies.
- 62%: Finding it difficult to minimize model bias.
- 45%: Citing errors in AI decision rules as a barrier to effectiveness.
The Maturity Gap: Innovation vs. Protection
The study defines “AI maturity” as the point where AI is fully integrated into cybersecurity activities and, crucially, where all associated security risks are assessed. Reaching this milestone is rare—only 1 in 5 organizations have hit the mark.
The risks aren’t just theoretical. As AI moves from simple chatbots to “Agentic AI” (systems that can make autonomous decisions), the attack surface explodes.
3 Red Flags from the Ponemon Study
The report highlights three critical areas where security foundations are crumbling:
- The Privacy Policy Vacuum: Only 41% of organizations have implemented AI-specific data privacy policies. This is particularly alarming given that GenAI thrives on data ingestion. Without clear boundaries, sensitive corporate IP and customer data are at constant risk of being “leaked” into public models.
- The “Black Box” Problem: Nearly two-thirds (62%) of respondents say minimizing model bias is extremely difficult. If you can’t explain why your AI made a decision, you can’t defend that decision to a regulator or a customer.
- Prompt Injection & Input Risks: 58% of practitioners find it difficult to manage “input risks,” such as prompt injections that can trick an AI into revealing secrets or executing unauthorized commands. A minimal input-screening sketch follows this list.
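To make input-risk management concrete, here is a minimal Python sketch of a pre-flight screen that redacts obvious identifiers (addressing the leakage worry above) and refuses likely injection attempts before a prompt ever reaches a model. Everything here is an illustrative assumption rather than anything the report prescribes: the pattern lists and the `screen_input` function are hypothetical, and a production system would lean on a maintained guardrail library or classifier instead of hand-rolled regexes.

```python
import re

# Illustrative injection phrases only; real attacks are far more varied,
# so treat this as a sketch of the control, not a complete filter.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
    r"disregard (your|the) (rules|guidelines|policies)",
]

# Toy redaction rules for obvious identifiers, applied before the prompt
# leaves the enterprise boundary.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def screen_input(prompt: str) -> str:
    """Redact obvious sensitive tokens, then refuse likely injections."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            raise ValueError(f"Blocked: input matched {pattern!r}")
    return prompt
```

Calling `screen_input(user_text)` as the first step of every GenAI request gives you a single choke point where both redaction rules and injection heuristics can evolve without touching application code.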
Building the “Security Foundation”
So, how do enterprises bridge this gap? OpenText’s EVP of Product & Engineering, Muhi Majzoub, argues that security shouldn’t be a “bolt-on” feature but a foundational requirement. To scale AI safely, the report suggests a transition toward Secure Information Management.
To move toward maturity, organizations must implement:
- Risk-Based Governance: Currently, only 43% of companies have a strategy that addresses AI-specific threats like ethical bias and autonomous errors.
- Human-in-the-Loop Oversight: Despite the push for autonomy, 51% of experts agree that human oversight is still mandatory. Attackers are using AI to adapt faster than ever; a purely automated defense is a recipe for disaster. A minimal approval-gate sketch follows this list.
- Continuous Monitoring: AI models “drift” over time. Continuous auditing of both the inputs (what you tell the AI) and the outputs (what it tells you) is essential for compliance and reliability; see the logging sketch below.
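Here is what a human-in-the-loop gate can look like in code, as a minimal sketch under stated assumptions: the `ProposedAction` type, the risk scores, and the 0.5 threshold are all hypothetical, and a real deployment would route held actions to a review queue rather than a console prompt.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    name: str          # e.g. "send_email" or "delete_records" (hypothetical names)
    risk_score: float  # 0.0 benign .. 1.0 destructive; the scoring policy is yours

APPROVAL_THRESHOLD = 0.5  # illustrative cut-off, tuned per governance policy

def execute(action: ProposedAction, run: Callable[[ProposedAction], None]) -> None:
    """Auto-run low-risk actions; hold anything risky for a human decision."""
    if action.risk_score >= APPROVAL_THRESHOLD:
        answer = input(f"Approve '{action.name}' (risk {action.risk_score:.2f})? [y/N] ")
        if answer.strip().lower() != "y":
            print(f"Held: '{action.name}' rejected by reviewer")
            return
    run(action)
```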
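And for continuous monitoring, a minimal sketch of input/output audit logging with a crude drift signal. The log path, the refusal heuristic, and the 30% alert threshold are assumptions for illustration; note that hashing prompts and responses, rather than storing them verbatim, keeps the audit trail itself from becoming a second leak vector.

```python
import hashlib
import json
import time
from collections import deque

AUDIT_LOG = "genai_audit.jsonl"      # hypothetical path
recent_refusals = deque(maxlen=100)  # rolling window for a crude drift signal

def audit(prompt: str, response: str, model: str) -> None:
    """Record a hash of every exchange and flag sudden behavior shifts."""
    refused = response.lstrip().lower().startswith(("i can't", "i cannot"))
    record = {
        "ts": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "refused": refused,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    recent_refusals.append(refused)
    if len(recent_refusals) == recent_refusals.maxlen:
        rate = sum(recent_refusals) / len(recent_refusals)
        if rate > 0.3:  # illustrative alert threshold
            print(f"ALERT: refusal rate {rate:.0%} over last {len(recent_refusals)} calls")
```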
Securing Your GenAI Projects
The OpenText report makes one thing clear: The leaders of the GenAI era won’t be the companies that deployed the fastest. They will be the companies that built with transparency and control from day one. As we move deeper into 2026, the question for your C-suite shouldn’t be “What can AI do for us?” but rather, “Is our security foundation strong enough to handle what AI might do to us?”
Begin your enterprise’s GenAI journey with safe data ingestion today.
Source: This blog post explores the critical findings of the March 23, 2026, OpenText and Ponemon Institute report, titled “Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI.”




