Security AI Statistics for 2025
How are organizations using AI and automation in their security operations centers (SOCs) in 2024?
We reviewed thousands of cybersecurity statistics from 2024 to find out.
Long story short: Most companies appear to be integrating AI across key security functions like prevention, detection, investigation, and response. The benefits of doing so are massive (e.g., improved threat identification, reduced operational workloads, and accelerated breach containment). But the risks of AI are not to be underestimated, either.
AI in the SOC
Organizations are deploying AI in their SOCs
- 2 out of 3 organizations deploy security AI and automation across their security operations center, a 10% jump from the year before. (IBM)
- 27% of organizations use AI and automation across 4 security categories: prevention, detection, investigation and response. (IBM)
- Among organizations that said they use AI and automation extensively, 27% used them for prevention, investigation, and response, while 24% used them for detection. (IBM)
- 44% of security executives cite AI as one of their top three initiatives in 2024, surpassing cloud security at 35% and security analytics at 20%. (Splunk)
- Leading organizations are more likely to innovate with AI, with 48% declaring it as a top initiative, compared to 30% of their less mature peers. (Splunk)
... and security tools are also increasingly utilizing AI
- 100% of enterprises have incorporated AI within their security stack to some degree. (Netacea)
- 100% of enterprises reported an improvement across their security stack since deploying AI. (Netacea)
- 62% of enterprises said their DDoS protection solutions utilize AI. (Netacea)
- 53% of enterprises said their WAF solutions utilize AI. (Netacea)
- 43% of enterprises said their API security solutions utilize AI. (Netacea)
- 33% of enterprises said their bot management solutions utilize AI. (Netacea)
- 90% of CISOs said they are confident in the defensive AI capabilities of perimeter defenses such as WAF, DDoS protection, and API security. (Netacea)
- 73% of security executives say tools with traditional AI and ML capabilities can generate false positives, and 91% say they require tuning. (Splunk)
Many think AI will be able to help their security in some way
- Among the technical and business professionals surveyed, 53% believe they can use AI for monitoring network traffic and detecting malware, 50% for analyzing user behavior patterns, 48% for automating responses to cybersecurity incidents, 45% for generating tests of cybersecurity defenses, 45% for automating the configuration of cybersecurity infrastructure, and 45% for predicting areas where future breaches may occur. (CompTIA)
- For security executives, the top generative AI cybersecurity use cases are identifying risk (39%), threat intelligence analysis (39%), threat detection/prioritization (35%), and summarizing security data (34%). (Splunk)
- 87% of global CISOs are looking to deploy AI-powered capabilities to help protect their organizations against human error and advanced human-centered cyber threats. (Proofpoint)
... including in hiring and productivity
- 86% of organizations believe generative AI will help them hire more entry-level cybersecurity talent, and 58% say it would help onboard entry-level talent faster. (Splunk)
- 90% of security executives say that entry-level staff can lean on generative AI to help develop their skills in the SOC once they’re hired. (Splunk)
- 65% of security executives believe that generative AI will allow seasoned security pros to be more productive. (Splunk)
- 49% of security executives say generative AI will eliminate some existing security roles. (Splunk)
- 53% of leading organizations are using AI and machine learning to fill hiring gaps (compared to only 28% of developing organizations). (Splunk)
Yet it hasn’t closed the security skills gap
- Despite 1 in 5 organizations using generative AI security tools to boost productivity and efficiency, the skill gap remains a persistent challenge. (IBM)
AI training is also lacking
- 55% of AI users reported they had not received any training on security and privacy risks associated with these tools. (CybSafe)
- More than half of employed people (52%) had not received training on safe AI use. (CybSafe)
- 38% of people admitted to sharing sensitive work information with AI without their employer’s knowledge, and this was more prominent among younger generations (46% of Gen Z, 43% of Millennials). (CybSafe)
AI benefits
Overall, AI adoption seems to have been a mostly positive experience
- Nearly all surveyed SOC practitioners (97%) have adopted AI tools, and 85% say their level of investment in and use of AI has increased in the last year. (Vectra)
- 75% of SOC practitioners say AI has reduced their workload in the past 12 months. (Vectra)
- 67% of SOC practitioners say AI has had a positive impact on their ability to identify and deal with threats. (Vectra)
- 73% of SOC practitioners say AI has reduced feelings of burnout in the past 12 months. (Vectra)
- 75% of SOC practitioners say AI has reduced the number of tools they use for threat detection and response. (Vectra)
- 73% of enterprises said their web application and API protection posture improved significantly and 27% said it improved slightly since deploying AI solutions. (Netacea)
AI has improved breach identification and containment times
- Extensive use of AI and automation in any security function (prevention, detection, investigation, or response) reduced the average mean time to identify (MTTI) and mean time to contain (MTTC) for data breaches: by 33% for response and 43% for prevention. (IBM)
- Organizations extensively using security AI and automation identified and contained data breaches nearly 100 days faster on average than organizations that didn’t use these technologies. (IBM)
- Per-function figures show the gap between organizations with extensive AI and automation use and those with none (IBM); a quick sanity check of the averages follows this table:

| Security function | Extensive AI: MTTI + MTTC | No AI: MTTI + MTTC |
|---|---|---|
| Prevention | 153 + 48 = 201 days | 230 + 82 = 312 days |
| Detection | 155 + 49 = 204 days | 227 + 81 = 308 days |
| Investigation | 158 + 53 = 211 days | 224 + 77 = 301 days |
| Response | 164 + 54 = 218 days | 230 + 74 = 304 days |
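To make the arithmetic explicit, here is a minimal Python sketch of the savings implied by the table above; the day counts come straight from the IBM figures, while the simple averaging across the four functions is our assumption for checking the "nearly 100 days" claim:

```python
# Breach lifecycle in days, per security function:
# (MTTI, MTTC) with extensive AI/automation vs. (MTTI, MTTC) with none.
lifecycles = {
    "prevention":    ((153, 48), (230, 82)),
    "detection":     ((155, 49), (227, 81)),
    "investigation": ((158, 53), (224, 77)),
    "response":      ((164, 54), (230, 74)),
}

savings = []
for function, ((ai_id, ai_contain), (no_id, no_contain)) in lifecycles.items():
    ai_total = ai_id + ai_contain   # total lifecycle with extensive AI
    no_total = no_id + no_contain   # total lifecycle with no AI
    savings.append(no_total - ai_total)
    print(f"{function:13s}: {ai_total} vs {no_total} days "
          f"({no_total - ai_total} days faster)")

print(f"average: {sum(savings) / len(savings):.1f} days faster")  # 97.8 days
```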
... as well as reduced breach and operational costs
- Average data breach costs for organizations that extensively used AI and automation were significantly lower than for those that did not, across every security function (IBM); the implied savings are computed in the sketch after this list:

| Security function | Avg. breach cost, extensive AI | Avg. breach cost, no AI |
|---|---|---|
| Prevention | $3.76 million | $5.98 million |
| Detection | $3.82 million | $5.70 million |
| Investigation | $3.85 million | $5.59 million |
| Response | $3.93 million | $5.61 million |
- Applying security AI and automation tools like attack surface management, red teaming, and posture management lowers breach costs in some instances by an average of $2.2 million. (IBM)
- 61% of security leaders stated that AI has reduced their operational overheads. (Netacea)
- AI- and machine learning-driven insights reduced the average breach cost by $258,538. (IBM)
- Generative AI security tools reduced the average breach cost by $167,430. (IBM)
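For reference, a similar minimal sketch of the per-function savings implied by the cost table above; the subtraction is ours, and reading the "$2.2 million" average as aligning with the prevention gap is our interpretation, not IBM's stated derivation:

```python
# Average data breach cost in millions of USD, per security function:
# (with extensive AI/automation, with no AI). Figures from the IBM table above.
costs = {
    "prevention":    (3.76, 5.98),
    "detection":     (3.82, 5.70),
    "investigation": (3.85, 5.59),
    "response":      (3.93, 5.61),
}

for function, (with_ai, without_ai) in costs.items():
    saved = without_ai - with_ai
    print(f"{function:13s}: ${saved:.2f}M saved ({saved / without_ai:.0%} lower)")

# The largest gap is prevention at $2.22M, in line with the
# "$2.2 million" average savings figure cited above.
```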
All in all, organizations have a positive outlook when it comes to AI
- At least 6 in 10 security leaders believe AI will be a “game changer” across all security functions. (KPMG)
- 8.9% of cyber executives believe generative AI will provide overall cyber advantage to defenders in the next two years. (World Economic Forum)
- 46% of IT and security professionals said generative AI is a net positive for security. (Ivanti)
So it's not surprising that AI is a line item in many organizations' budgets
- 46% of IT and security professionals say they will increase investments in generative cybersecurity AI in 2024. (Ivanti)
- 24% of security leaders say that difficulty demonstrating the value/ROI of AI solutions (no strong use cases) is a concern when it comes to AI-based automation for the SOC. (KPMG)
- 85% of SOC practitioners say their level of investment and use of AI has increased in the last year. (Vectra)
AI concerns
But there are concerns over AI, too
- 37% of business and IT professionals believe generative AI is a valid cause for cybersecurity concern. (CompTIA)
- 54% of CISOs believe generative AI poses a risk to their organization. (Proofpoint)
- 38% of security leaders say trusting that AI recommendations are accurate, reliable and explainable is a concern when it comes to AI-based automation for their SOC. (KPMG)
- 8% of cyber executives are worried about the technical security of the AI systems themselves. (World Economic Forum)
- 20% of cyber executives are worried about generative AI's impact on data leaks (exposure of personally identifiable information). (World Economic Forum)
- 9% of cyber executives are worried about generative AI increasing the complexity of security governance. (World Economic Forum)
- 8% of cyber executives have legal concerns about intellectual property and liability when leveraging generative AI. (World Economic Forum)
- 6% of IT and security professionals said generative AI is a net negative for security. (Ivanti)
- 26% of IT and security professionals said insider threats will become more dangerous due to generative AI. (Ivanti)
- 77% of security executives agree that more data leakage will accompany increased use of generative AI. However, only 49% are actively prioritizing data leakage prevention. (Splunk)
- 65% of security executives admit they don’t fully understand generative AI. (Splunk)
Security pros worry AI will expand their attack surface
- 77% of security executives say generative AI expands the attack surface to a concerning degree. (Splunk)
- 38% of IT and security professionals said software vulnerabilities will become more dangerous due to generative AI. (Ivanti)
- 26% of IT and security professionals said misconfiguration will become more dangerous due to generative AI. (Ivanti)
- 27% of IT and security professionals said poor encryption will become more dangerous due to generative AI. (Ivanti)
- 8% of cyber executives are worried about generative AI's impact on software supply chain and code development risk (potential backdoors). (World Economic Forum)
- 34% of IT and security professionals said API-related vulnerabilities will become more dangerous due to generative AI. (Ivanti)
- Just 42% of IT and security professionals have audited third-party vendors for risks related to generative AI. (Ivanti)
Organizations expect an increase in AI-powered attacks
- Security executives identify AI-powered attacks as the most concerning threat (36%), followed by cyber extortion (24%), data breaches (23%), ransomware (21%), and system compromise (21%). (Splunk)
- 93% of businesses believe they will face daily AI attacks in the next year. (Netacea)
- 65% of enterprises believe that offensive AI will become the norm. (GetApp)
- 55.9% of cyber executives believe generative AI will provide overall cyber advantage to attackers in the next two years. (World Economic Forum)
- 46% of cyber executives are worried about generative AI advancing adversarial capabilities, such as phishing, malware development, and deepfakes. (World Economic Forum)
- Security executives believe the top uses of generative AI by threat actors include making existing attacks more effective (32%), increasing the volume of existing attacks (28%), creating new types of attacks (23%), and reconnaissance (17%). (Splunk)
... including phishing attacks
- 65% of IT professionals say AI-enhanced phishing attacks are the top AI-powered threat for U.K. businesses in 2025. (GetApp)
- 45% of IT and security professionals said phishing will become more dangerous due to generative AI. (Ivanti)
... as well as DDoS attacks
- 31% of IT and security professionals said DDoS attacks will become more dangerous due to generative AI. (Ivanti)
... and ransomware
- 37% of IT and security professionals said ransomware attacks will become more dangerous due to generative AI. (Ivanti)
People also don’t have much trust in organizations' AI implementation
- 36% of people express high trust in companies implementing AI, while 35% express low trust. The remaining 29% are on the fence, expressing a neutral stance. (CybSafe)
- Confidence in companies' AI use varied by generation: 56% of the Silent Generation expressed confidence, followed by Baby Boomers and Millennials at 53% each, and Gen Z at 50%. Gen X, sitting in the middle both in age and opinion, was the most neutral, with 32% expressing no strong stance. (CybSafe)
AI use
Still, security AI use is growing
- The number of organizations that used security AI and automation extensively grew to 31% in 2024 from 28% in 2023. (IBM)
- The share of organizations using AI and automation on a limited basis also grew from 33% to 36%. (IBM)
- The number of organizations not using AI and automation at all dropped by 6 percentage points, from 39% to 33%. (IBM)
- 36% of organizations say they have not worked with AI/ML before but that they are now seriously exploring generative AI tools. In fact, only 9% of organizations surveyed are not exploring generative AI. (CompTIA)
- 93% of security executives say they've already started using public generative AI across the business and 91% have adopted it within their security teams. (Splunk)
AI use seems to depend on an organization's maturity level
- 75% of leading organizations say that most security team members are using generative AI, while only 23% of developing organizations say the same. (Splunk)
- 82% of leaders have established generative AI security policies, while only 46% of developing organizations have done so. (Splunk)
- 55% of leaders have a formal plan to use generative AI for cybersecurity use cases, while only 15% of developing organizations make this claim. (Splunk)
AI by industry
Perceptions of AI differ by industry
- 68% of CISOs in education believe generative AI is a security risk to their organization. (Proofpoint)
- 33% of security leaders in education think generative AI will most significantly affect cybersecurity in the next two years and 67% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 66% of CISOs in healthcare believe generative AI is a security risk to their organization. (Proofpoint)
- 46% of security leaders in healthcare and life sciences think generative AI will most significantly affect cybersecurity in the next two years and 62% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 65% of CISOs in the public sector believe generative AI is a security risk to their organization. (Proofpoint)
- 62% of CISOs in business and professional services believe generative AI is a security risk to their organization. (Proofpoint)
- 53% of security leaders in professional services think generative AI will most significantly affect cybersecurity in the next two years and 69% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 61% of CISOs in media, leisure and entertainment believe generative AI is a security risk to their organization. (Proofpoint)
- 58% of CISOs in financial services believe generative AI is a security risk to their organization. (Proofpoint)
- 56% of security leaders in banking and capital markets think generative AI will most significantly affect cybersecurity in the next two years and 68% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 56% of security leaders in insurance and asset management think generative AI will most significantly affect cybersecurity in the next two years and 89% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 41% of CISOs in retail believe generative AI is a security risk to their organization. (Proofpoint)
- 44% of security leaders in retail, consumer goods and lifestyle think generative AI will most significantly affect cybersecurity in the next two years and 67% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 37% of CISOs in energy, oil/gas and utilities believe generative AI is a security risk to their organization. (Proofpoint)
- 41% of security leaders in energy technology, energy utilities and oil and gas think generative AI will most significantly affect cybersecurity in the next two years and 94% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 54% of CISOs in IT, technology and telecoms believe generative AI is a security risk to their organization. (Proofpoint)
- 52% of security leaders in information technology and telecommunications think generative AI will most significantly affect cybersecurity in the next two years and 81% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 49% of CISOs in manufacturing and production believe generative AI is a security risk to their organization. (Proofpoint)
- 42% of CISOs in transport believe generative AI is a security risk to their organization. (Proofpoint)
- 63% of security leaders in agriculture, food and beverage think generative AI will most significantly affect cybersecurity in the next two years and 38% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 40% of security leaders in policy and administration think generative AI will most significantly affect cybersecurity in the next two years and 60% think their organizations are at least minimally cyber resilient. (World Economic Forum)
- 15% of security leaders in software and platforms think generative AI will most significantly affect cybersecurity in the next two years and 17% think their organizations are at least minimally cyber resilient. (World Economic Forum)