Kroll Finds 76 Percent of Firms Faced AI Security Incidents
Kroll announced in a press release that 76 percent of organizations have experienced a security incident involving artificial intelligence applications or models within the past two years. Of those organizations, 27 percent reported costs exceeding one million dollars from AI-related incidents.
The research shows that AI-related incidents become less likely as cyber maturity increases: 89 percent of organizations with very low maturity experienced one, compared with 54 percent of those with very high maturity. Nearly half of high-maturity organizations reported no AI-related incidents over the two-year period.
Kroll’s analysis indicates that many organizations are adopting AI faster than they are implementing governance and security frameworks. Companies with mature security practices are six times more likely to allocate over 20 percent of their AI budgets to testing security controls. In contrast, 48 percent of respondents said their organizations have little or no governance over AI tool adoption, expanding the potential attack surface.
The survey, conducted by Sapio Research in late 2025, included 1,000 cybersecurity decision makers across ten countries. It highlights that 90 percent of respondents identified barriers to increasing investment in AI security, including unclear return on investment and limited executive understanding of AI risks.