UN Report Calls for More Aggressive Efforts to Detect AI-Assisted Deepfakes

A new United Nations report has found that AI-based deepfakes are spreading faster than the world is prepared for, and calls for a coordinated approach among governments, technology companies, and civil society to detect and fend off the growing threat.
The report, authored by the UN Office on Drugs and Crime (UNODC), illustrates how deepfakes (hyperrealistic synthetic media created with artificial intelligence) are increasingly used to manipulate people's opinions and beliefs, as well as to commit fraud worldwide.
As deepfakes continue to advance in believability and accessibility, the report cautions that the potential harms to democratic processes, social trust, national security, and individual rights are significant. The UN's call to action underscores the need for regulation on a global level, investment in detection technology, and increased public awareness in response to the threats posed by AI-generated misinformation.
Alarming Rise in Deepfake Usage
According to the UNODC report, the past few years have seen a sharp rise in the use of AI-driven deepfakes. Created initially for the entertainment and creative industries, deepfakes are now widely abused across sectors. Documented abuses include:
- Disinformation campaigns
- Cyberfraud
- Political manipulation
- Extortion
- Deepfake pornography (a growing concern in the past couple of years)
The report cites several recent incidents that illustrate how these hazards play out in practice:
- A fake video of a European politician making inflammatory comments began to circulate and went viral, leading to diplomatic fallout before it was debunked.
- A global company was defrauded of millions after criminals used a deepfake audio clone of its CEO to authorize an illicit transfer.
“With the ability to create and circulate fake documents, photographs and videos using the internet and emerging technologies, terrorists, armed groups, and serious criminals are becoming more difficult to identify and bring to justice,”
—UNODC Representative
She added that the international community should act in a timely and unified manner to meet this challenge before democracy and people’s confidence are compromised.
Challenges in Detection and Regulation
One of the main problems the report identifies is that deepfakes are becoming ever harder to distinguish from authentic content. Thanks to rapid progress in generative AI models, especially those able to replicate facial expressions, voice dynamics, and body movements with great accuracy, even experts can struggle to tell altered media from real.
While some tech companies have started developing detectors, the report says they have proven to be:
- Of limited effectiveness
- Unable to keep up with the speed at which deepfake technology adapts and evolves
Moreover, national legal systems are not designed to cope with the particular challenges raised by synthetic media.
“Legislation is now piecemeal and out of date,” the report says.
“There is a pressing need for international cooperation to set minimum regulatory standards that can help account for the fact that dissemination of deepfakes knows no borders.”
Recommendations from the UN
The UN report proposes a multi-faceted response to the spread of malicious deepfakes, with collaboration at its core. Key recommendations include:
1. Investing in Deepfake Detection Technology
Governments and the private sector should invest in next-generation tools that can accurately verify when media has been manipulated. These tools may include:
- AI-supported detection models
- Blockchain-based content authentication
- Watermarking solutions
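The content-authentication idea behind the second item can be illustrated with a minimal sketch. This is not the report's design: a plain in-memory dictionary stands in for a distributed, tamper-evident ledger, and the `register`/`verify` functions and `media_id` names are hypothetical. The core mechanism, recording a cryptographic fingerprint of authentic media at publication time and checking later copies against it, is what such systems rely on.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Compute a SHA-256 fingerprint of raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

# A plain dict stands in for a tamper-evident ledger (hypothetical registry).
registry: dict[str, str] = {}

def register(media_id: str, media_bytes: bytes) -> None:
    """Record the fingerprint of authentic media at publication time."""
    registry[media_id] = fingerprint(media_bytes)

def verify(media_id: str, media_bytes: bytes) -> bool:
    """Check whether a later copy still matches the registered original."""
    return registry.get(media_id) == fingerprint(media_bytes)

original = b"authentic interview footage"
register("clip-001", original)
print(verify("clip-001", original))             # True: unmodified copy
print(verify("clip-001", b"doctored footage"))  # False: content was altered
```

Any single-byte change to the media produces a completely different fingerprint, so verification fails for doctored copies; the hard part in practice is securing the registry itself, which is where ledger-based approaches come in.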
2. Establishing Global Regulatory Standards
The UN calls for the creation of an international legal framework that:
- Criminalizes the malicious creation and distribution of AI-generated synthetic media
- Enforces punishments for those who misuse synthetic content
- Encourages standardized global provisions for enforcement
3. Public Awareness and Media Literacy Campaigns
The public must be educated about how to recognize and report deepfakes. The report calls for:
- Extensive awareness campaigns
- Public education efforts
- Integration of digital literacy in school curricula
4. Encouraging Ethical AI Development
AI model developers—especially those working on generative models—should:
- Follow ethical norms
- Embed watermarks in synthetic media
- Apply usage restrictions to prevent misuse
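Watermarking synthetic media can take many forms. As a toy illustration only, the sketch below embeds a marker into the least significant bits of a raw byte buffer; production systems use far more robust, imperceptible, and tamper-resistant schemes, and all function names and the choice of scheme here are this article's assumptions, not the report's.

```python
def embed_watermark(carrier: bytearray, mark: bytes) -> bytearray:
    """Embed each bit of `mark` into the least significant bit of a carrier byte."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for watermark")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(carrier: bytes, mark_len: int) -> bytes:
    """Recover `mark_len` watermark bytes from the carrier's low bits."""
    bits = [b & 1 for b in carrier[: mark_len * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j : j + 8]))
        for j in range(0, len(bits), 8)
    )

pixels = bytearray(range(256))          # stand-in for raw image samples
marked = embed_watermark(pixels, b"AI")
print(extract_watermark(marked, 2))     # b'AI'
```

Because only the lowest bit of each sample changes, the marked media is visually indistinguishable from the original, which is precisely why such marks must be paired with detection tooling rather than relied on alone.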
5. Strengthening Platform Accountability
Social media and content-sharing platforms should:
- Prescreen and remove deepfake content proactively
- Label AI-generated media when detected
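The two platform duties above imply a moderation pipeline keyed to detector confidence. A minimal sketch, where the `synthetic_score` field (a hypothetical deepfake detector's output) and the thresholds are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    media_id: str
    synthetic_score: float  # 0..1 output of a hypothetical deepfake detector

def moderate(upload: Upload, remove_above: float = 0.95, label_above: float = 0.5) -> str:
    """Route an upload based on detector confidence (thresholds are illustrative)."""
    if upload.synthetic_score >= remove_above:
        return "removed"
    if upload.synthetic_score >= label_above:
        return "labeled as AI-generated"
    return "published"

print(moderate(Upload("v1", 0.97)))  # removed
print(moderate(Upload("v2", 0.60)))  # labeled as AI-generated
print(moderate(Upload("v3", 0.10)))  # published
```

The two-threshold design reflects the report's distinction between proactive removal of clear abuses and labeling of merely probable synthetic media.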
Tech Industry Response
The UN’s report has prompted reactions from several tech giants. A joint statement from Meta, Google, and OpenAI supported the growing collaboration on deepfake detection:
“As developers of powerful generative AI tools, we have a responsibility to ensure these do not get used for malicious purposes,”
—Joint Statement
The companies announced ongoing investment in research to better understand deepfakes and synthetic media, particularly studying how manipulated content affects public perception.
In addition, the companies introduced the “Content Provenance Coalition,” an initiative aimed at developing standards for tracking the origins of digital content.
However, the UN report insists that voluntary actions are insufficient and must be underpinned by enforceable rules.
Ethical and Free Speech Dilemmas
Though the report has been broadly welcomed, some digital rights advocates warn of potential overreach. They stress the need to balance deepfake control with freedom of speech and artistic expression.
“There’s always a very thin line between regulation and censorship,”
—Amara Singh, Digital Rights Researcher, International Center for Technology and Democracy
She adds:
“We support these boundaries to prevent harmful content, but not at the cost of individual freedoms. We must have transparency and accountability not just for the regulators themselves but for the platforms.”
The UN report recognizes these concerns, emphasizing that any regulatory response must respect international human rights standards, particularly regarding freedom of expression and privacy.
Looking Ahead
Deepfakes will remain a major challenge as AI continues to evolve and its societal impact deepens. The UN’s report acts as a wake-up call for the international community. Without urgent and united action, the line between truth and fiction may become hopelessly blurred.
“This is a moment of truth,”
—Ms. Waly, UNODC
“We have to be ahead of the curve—not just to catch up with the technology, but to stay in front of it. The future of our information ecosystem depends on how well we can adapt and wisely respond.”
In a time when seeing is no longer believing, credibility will depend not just on what we witness, but on the systems and safeguards that support it.



