By Mariefaye (Efthimia) Bechrakis, Esq.
Introduction
Generative AI is often praised for its transformative benefits across many fields, including law and human rights. The dual-use nature of the technology, however, has produced urgent harms, most notably the rapid rise of AI-generated synthetic child sexual abuse material (CSAM). Reports of AI-generated images sexualizing children skyrocketed from roughly 4,700 in 2023 to 440,000 in the first half of 2025, while confirmed illegal AI-generated videos jumped from just two to over 1,200 in the same period. According to UNICEF, at least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year.
Understanding Synthetic CSAM
Synthetic CSAM fundamentally challenges traditional legal frameworks, which are built around the protection of identifiable, “real” victims. Unlike conventional child pornography, AI-generated material may involve no real child at all, yet its psychological, reputational, and societal harms are very real. Synthetic CSAM not only facilitates grooming, coercion, and sexual exploitation; it also poses practical challenges to law enforcement, as AI-generated images flood investigative systems with vast amounts of realistic yet synthetic material, making it harder for authorities to identify and prioritize cases involving real children.
While some domestic laws are adapting to explicitly criminalize AI-generated child sexual abuse content, international law lags behind, leaving urgent questions about how existing frameworks can confront crimes born not of physical acts, but of code, computation, and borderless digital production.
The Domestic Response: A Patchwork of Progress
To their credit, several jurisdictions have moved aggressively to modernize their criminal codes to explicitly criminalize AI-generated synthetic images where there is no identifiable victim.
In the United States, the FBI has warned that synthetic material is prosecutable under federal law when indistinguishable from real minors, supported by bipartisan efforts like the Enhancing Necessary Federal Offenses Regarding Child Exploitation (ENFORCE) Act. At the state level, Texas Senate Bill 20 has set a precedent by explicitly criminalizing the possession of computer-generated depictions of minors in sexual conduct.
Across the Atlantic, the European Parliament adopted its position to bring EU law up to date with technological developments by explicitly criminalizing the use of AI systems “designed or adapted primarily” for CSAM crimes. Cyprus became the first EU member state to explicitly criminalize the creation and distribution of AI-generated CSAM through targeted criminal code amendments. Notably, the UK became the first country to create new AI sexual abuse offenses to protect children from predators generating AI images.
Australia’s Criminal Code Amendment (Deepfake Sexual Material) Act 2024 explicitly criminalizes AI-generated sexually explicit material and non-consensual deepfakes, strengthening Australia’s legal framework against synthetic child sexual abuse content. Several Asian countries are planning to amend their criminal codes to criminalize the use of AI-generated deepfake pornography, particularly when minors are involved, reflecting an ongoing effort to keep pace with rapidly evolving technology. Some Middle Eastern countries, such as the UAE, have introduced sweeping digital safety and AI guidelines under their broader AI and child protection frameworks that tighten controls on personal data and restrict key uses of generative AI in an attempt to protect minors.
Beyond expanding criminal definitions to capture AI-generated synthetic images, some jurisdictions are now directly scrutinizing platform responsibility. For example, Ireland recently launched an investigation into the capacity of the generative AI system Grok, developed by Elon Musk’s company xAI, to generate sexualized content, often depicting minors. Similarly, the UK has strengthened legal duties, requiring platforms to remove non-consensual intimate images, including AI-generated material.
While these domestic reforms indicate that countries are moving beyond rhetoric and toward meaningful action, the decentralized, anonymous, and borderless nature of AI-generated material means that national protections risk becoming ineffectual in the absence of a harmonized international framework.
The International Gap: Laws of a Bygone Era
On the international stage, traditional child protection laws were never designed to anticipate hyper-realistic, AI-generated CSAM. This leaves international law, as it stands, largely ill-equipped to address the distinct harms and cross-border enforcement challenges posed by synthetic CSAM. To date, no international legal framework targeting child abuse material has incorporated provisions covering AI-generated content. To illustrate, the Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution and child pornography, which criminalizes the “sale of children, child prostitution and child pornography,” does not explicitly address AI-generated images. Rather, it obliges States Parties to prohibit the production, distribution, and possession of child pornography, broadly defined as any representation of a child engaged in explicit sexual activities.
Similarly, the Convention on Cybercrime (Budapest Convention), which requires States Parties to criminalize computer-related child pornography offences, including the production, offering, distribution, procurement, and possession of child sexual abuse material via computer systems, omits any explicit reference to synthetic or AI-generated imagery. The Lanzarote Convention defines child pornography as “any material that visually depicts a child engaged in real or simulated sexually explicit conduct” or images of a child’s sexual organs for primarily sexual purposes. Although this formulation arguably comes closer to encompassing certain digitally altered or “realistic” depictions, the Convention does not expressly address wholly AI-generated content that involves no real child. More concerning still, the Convention’s opt-out clause allows States to reserve the right not to criminalize certain forms of “simulated” or “realistic” representations, thereby introducing further variability and fragmentation in domestic implementation.
Conclusion: Legal Recommendations Moving Forward
International law is struggling to keep up with the rapid rise of AI-generated CSAM. While updating international criminal law to explicitly criminalize synthetic images is an essential step in the right direction, reform alone is not enough. A meaningful response requires operational coordination and legal harmonization at the international level. Under an international framework that explicitly criminalizes AI-generated CSAM, states should adopt standardized evidence-handling protocols, provide cross-border training, and establish joint task forces to support countries with limited technical capacity. Legal reforms must expand criminal definitions to cover AI-generated CSAM and extend liability to developers and platforms. They should also mandate safety-by-design safeguards, harmonize enforcement across jurisdictions, and use technology-agnostic language capable of adapting to future innovations. Without coordinated international action that combines clear law, liability frameworks, and operational capacity, legal responses will remain fragmented, leaving the world’s most vulnerable children exposed to the consequences of technological “progress.”