Indecent Images on Social Media: Evidence, Encryption and the Law


The internet and social media have transformed how people communicate and share information, but they have also created complex challenges for the criminal justice system in cases involving alleged indecent images and online behaviour. In recent years, these online environments have become the focus of wide-ranging police investigations, often involving large data sweeps and automated reports that can draw in users with no deliberate involvement. Many individuals are now being investigated or charged, sometimes on the basis of automated data or mistaken attribution.

People are regularly accused of viewing or distributing indecent images on social-media platforms, including Kik, TikTok, Snapchat and Telegram. The prosecution must show more than the mere existence of material online: it must prove possession or, in the case of a distribution offence, deliberate sharing. The law also provides a defence where a person can demonstrate that they were unaware, and had no reason to suspect, that the images were indecent.

The scale of these investigations continues to grow. Recent Home Office figures show 38,685 recorded child sexual abuse image offences across England and Wales in 2023-24, an average of more than 100 per day, while the Internet Watch Foundation removed a record 291,270 webpages in 2024. Much of this material now originates from social-media platforms or cloud storage rather than traditional peer-to-peer networks, which helps explain the growing focus of law enforcement on mainstream apps such as TikTok, Snapchat, Kik and Meta, as well as emerging platforms like Yubo.

Indecent images on social media platforms

Governments worldwide are increasing pressure on social-media companies to identify illegal content and reduce the circulation of indecent images. Although TikTok states that it does not tolerate child sexual exploitation or abuse, a Global Witness investigation in 2025 found that its algorithm suggested sexually explicit search terms to users as young as 13, even on new accounts with no prior activity. Content moderators have also reported that automated moderation and AI systems often fail to detect local slang or coded references used to share prohibited material.

TikTok is not the only platform under scrutiny. A Freedom of Information request by the NSPCC showed that, of the 7,338 offences in 2024 where police recorded the platform used by perpetrators, 50% involved Snapchat and around a quarter involved Meta products (11% on Instagram, 7% on Facebook and 6% on WhatsApp). These figures highlight the scale of investigations now extending across mainstream social-media services, where the boundaries between private messaging, image storage and public posting are often blurred.

These global developments mirror the direction of UK policy, where the Online Safety Act 2023 now places specific duties on online platforms to manage illegal content and cooperate with law enforcement.

Online Safety Act 2023: platform obligations

In March 2025 the first major set of duties under the Online Safety Act 2023 came into force, signalling a significant shift in how the UK regulates online platforms and their responsibility for illegal content. The Act introduces a duty of care on user-to-user and search services to take proportionate steps to guard against illegal content (including child sexual abuse material). Platforms must strengthen their systems for detecting and reporting such content and take action when serious offences are alleged. As providers come under greater regulatory pressure to retain or disclose user data, the volume of digital material available to investigators is likely to increase. At the same time, ongoing debates about privacy and encryption mean that, in practice, evidential access will still vary widely between platforms.

Yet even with stronger regulatory duties, the technical realities of encryption and data storage continue to limit what investigators can actually retrieve.

Encryption and data access across major social-media apps

Major messaging and social apps differ significantly in how their encryption, data-retention, and law-enforcement access policies operate. WhatsApp uses end-to-end encryption by default, meaning that message content cannot be accessed by Meta even under lawful request, although limited metadata and account information can be disclosed.

Telegram, by contrast, only applies end-to-end encryption to “Secret Chats”; its regular cloud-based messages are stored on distributed servers, and the company has stated that it may share account identifiers such as phone numbers and IP addresses with law enforcement when presented with a valid court order.

Kik does not use end-to-end encryption. However, messages are not stored long-term, so past chat content is generally unrecoverable. Kik’s Law Enforcement Guide confirms that message content is deleted once delivered, and only “recently sent” messages that have not yet reached the recipient may be accessible at the time of a lawful request. In practice, these retention limits mean that, even without encryption, most historical chat material cannot be retrieved. Nevertheless, Kik may provide basic subscriber details and recent IP logs to the police in response to a lawful request.

Snapchat messages are not fully end-to-end encrypted by default and are deleted once viewed or after they expire, though Snap can preserve and disclose limited metadata or unopened messages when ordered by the courts.

Signal is widely regarded as offering the strongest encryption and retains minimal user data. Yubo – a social-discovery and live-streaming app popular among younger users – provides limited public information about how long content is stored, but it operates a dedicated law-enforcement cooperation channel through which verified agencies can request account or connection data when allowed by law.

In practice, this means that even when encryption prevents providers from revealing message content, the police can still obtain evidence through other sources such as device examinations, backups or cloud storage. Where platforms store more server-side data (as with Telegram or Snapchat), disclosure may be possible but depends on jurisdiction and the formal legal process, including data protection law. With apps like Signal or WhatsApp, evidence often turns on forensic recovery from seized devices or linked accounts. These technical differences explain why the amount and type of evidence that companies release can vary so widely between cases, even where the alleged conduct appears similar.

Self-destructing and disappearing messages

Apps that automatically delete or encrypt content, such as Snapchat and Telegram, pose particular challenges for investigators. Snapchat messages are designed to disappear shortly after viewing, and Telegram's "Secret Chats" can be set to self-destruct messages after they have been read. In both cases, deleted or short-lived data make it difficult to establish precisely what was viewed or shared.

These technical limits can work in the defendant’s favour, as prosecutors may struggle to recover full message histories or verify the context of an exchange. Allegations of image sharing or “sextortion” therefore often depend heavily on device-level forensics, screenshots, or third-party reports rather than direct platform evidence. Understanding how and when messages are deleted can be key to demonstrating that there is insufficient proof of possession or intent.

Final words on indecent images and social media

Investigations involving social-media platforms present genuine evidential challenges. Encrypted or disappearing content, overseas data storage, and inconsistent provider policies often make it difficult for the police to recover key material. In many cases, this absence of evidence works in the defendant's favour, as the prosecution must still prove any alleged possession or distribution of indecent images beyond reasonable doubt. The rapid evolution of digital technologies – from encrypted messaging apps and cloud storage to AI-driven content detection – continues to outpace law and regulation, and the reliability of digital evidence will remain a central issue in these cases for the foreseeable future. For practitioners and defendants alike, careful analysis of how the alleged material was created, transmitted and stored is now essential to any fair outcome.

If you are accused of distributing or viewing indecent images on social media, our specialist indecent images solicitors can help.

Reeds Solicitors is an award-winning, top-tier criminal defence firm. For legal advice and representation, please contact us through our contact page. Alternatively, you can phone 0333 240 7373 or email us at [email protected].