Sensitive Data
We've curated 44 cybersecurity statistics about sensitive data to help you understand how protecting personal information, financial records, and health data is evolving in 2025. Discover the latest trends in encryption, data breaches, and compliance challenges.
81% of CIOs are concerned that citizen-built AI could expose sensitive company data.
84% of government IT security leaders agree that sharing sensitive data across networks heightens their cyber risk.
Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).
39.7% of all data movements into AI tools involve sensitive data, including prompts or copy-paste actions.
44% of U.S.-based cybersecurity decision-makers ranked protecting sensitive data among their top two cybersecurity priorities for 2026.
15% of all sensitive data uploaded to generative AI tools involves personal or employee data, including identifiers such as names and addresses.
25% of all sensitive data disclosures involve technical data, with 65% of that consisting of proprietary source code copied into generative AI tools.
12% of all sensitive data exposures originate from personal accounts, including free versions of generative AI tools.
57% of sensitive data uploaded to generative AI tools is classified as business or legal data, with 35% of that involving contract or policy drafting.
26.4% of all file uploads to generative AI tools contained sensitive data between July and September 2025, an increase from 22% in Q2 2025.
Only 15% of organizations feel fully prepared to handle the movement of sensitive data through SaaS and Shadow IT tools.
49% of organizations agree, and 23% strongly agree, that they lack visibility into how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.
72% of organizations lack visibility into how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.
96% of healthcare organizations researched had at least two data loss or exfiltration incidents involving sensitive and confidential healthcare data in the past two years.
31% of respondents say sensitive data requests top their list of app concerns.
91% of organizations believe that sensitive data should be allowed in AI training.
The average employee at a small healthcare organization has access to more than 5,500 sensitive files.
28% of the workforce admit to using AI to access sensitive data.
57% of employees input sensitive data into free-tier AI tools.
In security incidents involving Chinese GenAI tools, the exposed data types included: source code, access credentials, or proprietary algorithms (32.8%); M&A documents and investment models (18.2%); PII such as customer or employee records (17.8%); and internal financial data (14.4%).