UK Regulators Warn: AI Chatbots Giving Risky Financial and Tax Advice

In recent weeks, consumer protection groups and financial regulators in the United Kingdom have raised serious concerns about the growing use of artificial intelligence (AI) chatbots for financial and tax guidance. These widely used tools, including ChatGPT, Microsoft Copilot, and Meta’s AI, are producing advice that is often inaccurate, misleading, or potentially harmful, even as more people turn to them as “digital financial advisers.”
Consumer rights organisations have reported multiple examples of AI chatbots giving incorrect financial guidance. Tests with dozens of consumer-style queries revealed a worrying pattern, ranging from wrong information about tax rules to advice that could expose users to regulatory or contractual risk.
Mistakes With Money: What Went Wrong
Some errors were glaring:
- ISA Allowances: ChatGPT and Microsoft Copilot misinterpreted questions about Individual Savings Account (ISA) limits (the annual allowance is currently £20,000), giving advice that could lead users to exceed HMRC rules.
- Travel Compensation: AI chatbots provided flawed guidance on claiming compensation for delayed flights.
- Insurance Misinformation: Chatbots made incorrect statements about travel insurance requirements for UK citizens visiting the EU.
- Tax Refund Guidance: Some AI systems directed users to paid tax-refund services instead of the official HMRC service, potentially leading to unnecessary fees.
Why the Risk Matters
The concern isn’t just occasional mistakes. AI systems often present information confidently, which can mislead users who aren’t financially savvy. Since these chatbots are not regulated as financial advisers, their advice isn’t covered by protections like the Financial Ombudsman Service or the Financial Services Compensation Scheme.
AI also cannot tailor advice to individual circumstances. Financial decisions depend on factors like income, debts, tax situation, risk tolerance, and long-term goals. Generic suggestions may overlook these critical elements, increasing the risk of costly errors.
Consumer groups are urging the Financial Conduct Authority (FCA) and HM Treasury to consider bringing AI chatbots under regulatory oversight. Without proper supervision, unregulated AI could become a “wild west” scenario for financial advice.
Voices From the Ground
Real-world experiences highlight these risks:
- Self-employed individuals seeking tax guidance have reported receiving outdated or incorrect information, forcing them to double-check and correct outputs.
- Surveys indicate that over half of UK adults have used AI platforms for financial advice at some point, often for budgeting, pensions, investments, or tax queries.
While AI may seem convenient and cost-effective, poor guidance can lead to tax penalties, investment losses, or missed opportunities, outweighing any savings.
Regulators Respond
The FCA emphasises that general-purpose AI tools are not regulated like professional financial advisers. If someone acts on advice from a chatbot and suffers a loss, there is usually no compensation available.
Experts are calling for:
- Reclassifying AI tools under financial regulatory standards.
- Greater transparency, with clear warnings about AI “hallucinations”: instances in which a model confidently generates inaccurate information.
Government Experiments With AI
The UK government has experimented with AI chatbots for public-facing queries, including tax questions. These pilots are experimental, and users are warned that AI responses may be inaccurate. The chatbots themselves advise users to verify answers against official sources before taking action.
The Solution Debate
Experts recommend treating AI financial advice as a starting point, not a final authority. Chatbots can explain concepts or present options, but important decisions should always be validated with official guidance or qualified professionals.
Regulated AI services in legal and financial sectors demonstrate a safe integration model. These systems combine AI efficiency with professional oversight, and industry groups suggest similar frameworks for financial AI to ensure accuracy, safety, and compliance.
Conclusion
AI chatbots are becoming central to how people manage money, but UK regulators’ warnings underline a simple truth: convenience doesn’t always mean correctness. Users must stay cautious, and policymakers need to ensure that AI-driven advice does not slip into a regulatory grey area.
For now, the takeaway is clear: treat AI advice as a helpful sketch, not a binding blueprint. Always double-check with trusted sources or professional advisers before making financial decisions that could impact your well-being.



