
Dozens of State Attorneys General Urge Congress Not to Block AI Laws


A bipartisan coalition of state attorneys general from 35 U.S. states and the District of Columbia has sent a clear message to Congress: do not interfere with states’ ability to regulate artificial intelligence (AI). In a recent letter, the group warned that blocking state-level AI laws could have serious consequences, leaving Americans vulnerable to scams, misinformation, risks to children, and other AI-related harms.


Rising State Efforts to Regulate AI

As AI technologies like generative models, chatbots, and automated decision-making systems have grown rapidly, many states have stepped in to fill what they see as a regulatory gap at the federal level. Over the past year, dozens of states have adopted or proposed AI-related laws aimed at:

  • Requiring transparency and disclosure when AI is used
  • Restricting harmful deepfakes
  • Banning certain AI-generated explicit content
  • Safeguarding consumers from deceptive practices

Some regulations specifically focus on protecting children from AI chatbots that might engage them in inappropriate conversations. Others aim to curb misinformation and scams propagated through AI-generated content. The growing patchwork of state laws highlights both the urgency and the variety of AI-related risks states are addressing.


What Congress Is Considering — And What the Attorneys General Oppose

At the center of the debate is proposed legislation that would preempt states from enacting or enforcing AI regulations for a set period.

Supporters argue that a national standard is necessary to:

  • Prevent conflicting rules that could slow innovation
  • Reduce compliance burdens for companies operating across multiple states
  • Ensure a unified federal framework to boost U.S. competitiveness in AI

Opponents, led by the coalition of attorneys general, strongly disagree. They argue that broad preemption, especially without comprehensive federal AI regulation in place, would strip away vital state protections while offering no meaningful alternative. In their view, states are closer to the needs of their residents and better positioned to respond quickly to emerging AI risks.

The coalition stresses that states must retain the authority to enact and enforce their own AI rules. Blocking state-level oversight could create a regulatory void, leaving the public exposed to harms the federal government may not be ready to handle.


Why States Warn of “Disastrous Consequences”

The attorneys general highlight several specific risks if state regulation is blocked:

  • Threats to children’s safety: AI chatbots and companion tools have sometimes engaged minors in inappropriate conversations. Without safeguards, such interactions could increase.
  • Proliferation of scams and deepfakes: Malicious actors could exploit AI-generated voices, videos, or automated systems to commit fraud, mislead voters, or spread disinformation.
  • Lack of consumer protections: AI is increasingly used in hiring, lending, healthcare, and content recommendations. States argue they can best ensure fairness and transparency.
  • Absence of alternative safeguards: With state enforcement blocked and no robust federal regulation in place, Americans could be left facing AI-related harms without any oversight.

The coalition further asserts that such preemption would undermine the traditional balance of federalism, weakening states’ ability to protect residents from technological threats.


Broader Implications for AI Governance

This debate highlights a deeper struggle in the U.S. over balancing innovation with public safety:

  • Proponents of a federal-only approach say conflicting state rules could slow progress and increase costs for companies.
  • State and consumer advocates argue that safety, transparency, and accountability must not be sacrificed for innovation.

Because AI is evolving rapidly and its applications differ across regions, the coalition argues, state-level regulation offers a flexibility that federal legislation may not match. Removing state protections before federal standards are established could create a dangerous regulatory vacuum.

The attorneys general advocate for a layered approach, where federal and state oversight operate together. National standards would ensure baseline protections, while states could tailor rules to local needs.


What’s Next

As Congress works through budget negotiations and pending legislation, the fate of the proposed preemption measure remains uncertain.

  • If Congress approves the preemption: Existing and pending state laws could become ineffective, weakening protections in states that have already invested heavily in regulating AI.
  • If Congress allows state regulation to continue: A diverse patchwork of AI rules reflecting regional priorities would continue to develop, strengthening local protections but adding compliance complexity for businesses operating nationwide.

For ordinary Americans, the stakes are high. The outcome could determine whether there are legal safeguards against AI-driven fraud, harmful content, deepfakes, and biased decision-making, or whether the public faces these risks largely unprotected.

The attorneys general's letter underscores the urgency of the issue: as AI reshapes daily life, from social media and finance to healthcare and public safety, the question of who governs these technologies will shape the social contract around AI in America for years to come.

