Dutch Watchdog Warns Voters: Don’t Rely on AI Chatbots for Election Info

As the Netherlands gears up for its national election, the country’s top digital watchdog has issued a timely warning: don’t rely on AI chatbots for political advice.
Officials say tools like ChatGPT, Google’s Gemini, and other conversational bots could spread misinformation, biased answers, or outdated data—potentially steering voters in the wrong direction. The warning highlights growing global concerns about how artificial intelligence might distort democracy in the digital age.
A Wake-Up Call in the Digital Era
In a statement this week, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, or AP) cautioned citizens to be skeptical when using AI chatbots to learn about political parties, policies, or voting guidance.
These systems, the agency explained, are not reliable sources. They can sound confident and persuasive, but their answers may be “incorrect, incomplete, or manipulative.”
“AI chatbots can sound convincing, but they don’t think or verify like humans,” said Aleid Wolfsen, chairman of the AP. “They’re built on data that might be biased or outdated. During elections, that’s dangerous.”
This warning comes as European governments grow increasingly uneasy about AI’s potential impact on free and fair elections. With several EU nations—and the European Parliament—heading to the polls soon, regulators are racing to safeguard public trust and prevent technology from blurring the line between truth and deception.
How Misinformation Slips Through
AI chatbots analyze massive amounts of text and generate human-like responses. While that makes them useful for learning or creative work, it also means they can mix facts with fiction.
Sometimes, they “hallucinate”—producing false or misleading information that sounds perfectly plausible. In an election, that could mean:
- Inventing political promises or party positions
- Misquoting politicians
- Misrepresenting policy debates
“Because these tools sound authoritative, people often trust what they read,” said Dr. Marieke de Vries, a political communication expert at Utrecht University. “A wrong answer about something like climate policy could easily shape opinions without users realizing it.”
Another concern is data bias. AI systems are trained on large datasets that might include propaganda or unreliable online sources. Without transparency about where this data comes from, voters could unknowingly absorb distorted or slanted views.
Protecting Election Integrity
The Netherlands has long been known for its firm stance on digital privacy and ethical tech use. The AP’s latest warning is part of broader efforts to protect democratic integrity from AI-related manipulation.
Officials are especially worried about AI-generated propaganda, such as:
- Deepfake videos
- Fabricated news stories
- Chatbots designed to promote certain candidates
Even well-meaning users could end up sharing misleading AI-generated content without realizing it.
“Democracy depends on trust and informed choice,” said Wolfsen. “When information is filtered through algorithms we can’t see or understand, that trust is at risk.”
The watchdog isn’t banning AI outright—it’s encouraging critical awareness. Voters are urged to rely on official party websites, verified government portals, and reputable media outlets for information.
Tech Giants Under Pressure
As generative AI tools spread, major tech companies are under increasing scrutiny. OpenAI, Google, and Anthropic have introduced disclaimers and filters to block political endorsements.
Still, loopholes remain. Users can easily prompt chatbots to simulate political debates, write campaign slogans, or compare party policies—all of which can blur ethical lines.
European regulators are responding with the EU Artificial Intelligence Act, which requires greater transparency from high-risk AI systems. The act explicitly classifies AI systems intended to influence elections as high-risk, and its transparency rules are expected to shape how political chatbot use is governed.
Public Awareness and Responsibility
Dutch political parties have largely welcomed the watchdog’s warning. Many say it’s a necessary reminder to promote media literacy and digital education.
Some have even called for clearer labeling of AI-generated content and more openness from AI developers about their data sources.
“Technology should empower voters, not mislead them,” said one party spokesperson. “We must ensure innovation supports democracy, not manipulates it.”
The government is also investing in public education campaigns, such as Digital Truth Week, which teaches citizens how to spot fake content, verify sources, and think critically about what they see online.
A Global Challenge
The Netherlands is far from alone in facing this issue. Around the world, governments are waking up to the dangers of AI in elections:
- United States: The Federal Election Commission is reviewing rules on AI-generated campaign ads.
- India: The Election Commission has warned parties not to use deepfakes or AI voices in campaigns.
- United Kingdom: Studies show AI tools often produce biased summaries of candidates’ policies.
These cases reveal how easily AI can blur the boundaries between truth and fabrication, raising urgent questions about transparency and accountability.
A Call for Critical Thinking
The Dutch watchdog’s message is simple but powerful: AI is helpful—but not always honest.
“Convenience doesn’t equal truth,” Wolfsen reminded voters. “Always double-check facts, look for trusted sources, and think critically about what you read online.”
As election day nears, citizens are being urged to rely not on algorithms, but on their own judgment. In the end, democracy thrives not on automation—but on informed, human decisions.