
Thorn, a nonprofit dedicated to child safety technology, released its Safer Impact Report 2026 on March 30, highlighting substantial advancements in online protection. The report detailed how the Safer tool, designed to detect child sexual abuse material and exploitation, scaled operations dramatically last year. Companies across platforms adopted the technology more widely than ever, processing unprecedented volumes of user-generated content and AI outputs.[1]
Nearly 1.5 Million Known CSAM Files Identified
Safer’s performance in 2025 stood out with staggering numbers that underscored its effectiveness. The tool analyzed 415.4 billion files, flagging nearly 1.5 million images and videos as known child sexual abuse material. Additionally, it pinpointed more than 3.84 million files as potential novel CSAM, content not previously documented in databases.[1]
These detections relied on a robust hash database, where digital fingerprints matched uploads against verified harmful content. Thorn expanded this database significantly, adding 1.6 million new image hashes to reach 6.3 million total and 50 million video hashes for a cumulative 64 million. Such growth enabled faster, more accurate identifications at scale.
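Conceptually, hash-based matching is a set-membership test: compute a digital fingerprint of each upload and check it against the database of fingerprints from verified harmful material. The sketch below illustrates the idea with exact cryptographic hashes; Safer's production system uses specialized perceptual hashing so that re-encoded or lightly altered copies still match, and the database entry shown here is a placeholder, not a real hash from any shared list.

```python
import hashlib

# Placeholder fingerprint set (illustrative value only; real deployments
# load millions of verified hashes from a shared database).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute an exact cryptographic fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes) -> bool:
    """Flag a file if its fingerprint appears in the known-hash set."""
    return fingerprint(data) in KNOWN_HASHES

print(is_known_match(b"test"))   # matches the placeholder entry above
print(is_known_match(b"other"))  # no match
```

Because the check is a constant-time set lookup, this design scales to billions of files: the expensive part is maintaining and distributing the hash database, not the per-file comparison.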
Text-Based Exploitation Detection Gains Momentum
Beyond images and videos, Safer extended its reach to textual content. In 2025, the system reviewed over 318.59 million lines of text, identifying more than 1.3 million instances of potential child exploitation. This capability, introduced with a text classifier in 2024, targeted harms like sextortion prevalent in chats and messages.[1]
Predictive artificial intelligence powered these classifications, distinguishing risky language from benign interactions. Platforms used these insights to intervene swiftly, often reporting findings to authorities such as the National Center for Missing & Exploited Children. The approach proved vital as online grooming tactics evolved.
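To build intuition for how a line of chat gets routed, the toy scorer below flags text whose risk score crosses a threshold. This is purely illustrative: Safer's real classifier is a trained machine-learning model, not a keyword list, and the patterns and threshold here are invented stand-ins.

```python
import re

# Invented example patterns; a production classifier learns signals from
# labeled data rather than matching a fixed phrase list.
RISK_PATTERNS = [
    re.compile(r"\bsend (me )?(a )?(photo|pic|picture)s?\b", re.IGNORECASE),
    re.compile(r"\bdon'?t tell (your )?(parents|anyone)\b", re.IGNORECASE),
    re.compile(r"\bhow old are you\b", re.IGNORECASE),
]

def risk_score(line: str) -> float:
    """Fraction of risk patterns matched; stands in for a model's probability."""
    hits = sum(1 for p in RISK_PATTERNS if p.search(line))
    return hits / len(RISK_PATTERNS)

def classify(line: str, threshold: float = 0.3) -> str:
    """Route a chat line: queue for human review above the threshold."""
    return "review" if risk_score(line) >= threshold else "benign"
```

For example, `classify("send me a pic")` returns `"review"` while an everyday message scores zero and passes through, mirroring the flag-then-intervene flow described above.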
Strategic Partnerships Amplify Impact
Thorn fostered key collaborations to broaden Safer’s deployment. A notable partnership with Hive integrated the tool into its services, resulting in the analysis of nearly 4.3 billion files for CSAM and 11,674 text lines for exploitation in 2025 alone. More than 80 platforms now form Safer’s community, sharing resources and hashes.[1]
Since its inception in 2019, Safer processed 658.6 billion files and 334 million text lines, yielding detections of 12.4 million potential CSAM files and nearly 1.4 million exploitation cases. These alliances created a networked defense, where one platform’s discoveries benefited all. For full details, see the original report.
Technology at the Core of Prevention
Safer combined hash matching with AI-driven prediction to handle vast data volumes. Known CSAM hashes, verified by partners, provided immediate matches, while novel content classifiers flagged items for human review and database inclusion. This multi-layered system saved moderators time and reduced oversight risks.
- Hash databases: SaferHash for images, SSVH for videos.
- AI classifiers: Detect potential novel CSAM and text harms.
- Reporting integration: Direct submissions to investigative agencies.
- Scalability: Handles billions of files monthly across platforms.
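The layering described above can be sketched as a triage function: an exact hash hit is reported immediately, otherwise a classifier score decides whether the item is queued for human review. The names, threshold, and hash value below are hypothetical assumptions for illustration, not Safer's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    KNOWN_MATCH = "known_match"    # hash hit: verified material, report
    NEEDS_REVIEW = "needs_review"  # classifier flag: possible novel content
    CLEAR = "clear"

@dataclass
class UploadedFile:
    fingerprint: str
    classifier_score: float  # stand-in for a model's novel-CSAM probability

# Placeholder values; real systems load shared hash databases and
# tune thresholds against review capacity.
KNOWN_HASHES = {"abc123"}
REVIEW_THRESHOLD = 0.8

def triage(f: UploadedFile) -> Verdict:
    """Layered triage: cheap exact match first, predictive model second."""
    if f.fingerprint in KNOWN_HASHES:
        return Verdict.KNOWN_MATCH
    if f.classifier_score >= REVIEW_THRESHOLD:
        return Verdict.NEEDS_REVIEW
    return Verdict.CLEAR
```

Ordering the layers this way keeps the expensive steps (model inference, human review) reserved for the small fraction of files the hash lookup cannot settle.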
Trust and safety teams, often stretched thin, benefited from automation that prioritized high-risk content. Yet challenges persisted, including resource shortages amid rising platform demands.
Key Takeaways:
- Safer processed 415.4 billion files in 2025, more than in all prior years combined.
- Detections included nearly 1.5 million known CSAM files and more than 3.84 million potential novel CSAM files.
- Text analysis flagged more than 1.3 million exploitation risks across 318.59 million lines.
Thorn emphasized an “all-hands” effort to counter online harms, blending technology with human oversight. The 2025 results demonstrated how shared tools fostered accountability across the tech ecosystem. As digital spaces expand, sustained investment in such innovations remains essential for child protection.
These milestones signal progress, yet the fight continues. Platforms must prioritize safety amid generative AI growth and evolving threats. What steps can tech companies take next to build on this momentum? Share your thoughts in the comments.

