Anthropic CEO Warns AI Companies: Be Transparent or Repeat History

In a stark warning to the artificial intelligence (AI) industry, Dario Amodei, CEO of Anthropic, urged AI companies to be upfront about risks or risk repeating the mistakes of the tobacco and opioid industries. In a recent interview, Amodei highlighted the dangers of concealing known harms, stressing that failing to confront AI’s risks openly could have serious repercussions for society.
A Troubling Comparison
Amodei didn’t hold back. He urged AI companies to “call it as you see it,” warning that hiding risks could mimic the behavior of past industries that downplayed the dangers of their products. Without honest public discussion and strong oversight, he argued, the AI sector could repeat the same mistakes that led to major public health crises.
The Stakes Are High — and Rapid
The pace of AI advancement is astonishing, according to Amodei. He noted that future AI systems could surpass humans in most or all capabilities, describing the phenomenon as a “compressed 21st century.” This rapid progress could revolutionize fields like medicine, compressing breakthroughs that might otherwise have taken decades into just a few years.
However, faster innovation comes with higher risks. AI’s growing autonomy — its ability to operate independently — could lead to unintended or dangerous outcomes. “The more autonomy we give these systems, the more we must ask if they are truly doing what we intend,” Amodei said.
Dual-Use Dangers: From Medicine to Bioweapons
Anthropic’s own testing reveals the dual nature of advanced AI. Logan Graham, head of the company’s safety and stress-testing team, explained that the same AI capabilities that accelerate research or medical discoveries could also be misused to design dangerous biological threats.
For instance, a powerful AI model could potentially assist in creating bioweapons — a chilling flip side to its ability to speed up vaccine development. According to Graham, acknowledging and managing these dual-use risks is essential and cannot be ignored in the rush to scale AI systems.
Real-World Incidents Highlight the Risks
Anthropic has openly shared examples where AI risks have materialized. In one instance, state-sponsored hackers used an autonomous AI model to carry out a cyberattack on roughly 30 global organizations. Alarmingly, a significant portion of this attack was conducted with minimal human intervention. Amodei points to such incidents as proof that transparency and independent stress-testing are critical safeguards for the AI industry.
Economic Impacts: Jobs at Risk
Amodei also highlighted the social and economic impact of AI. He warned that up to 50% of entry-level white-collar jobs, including accounting, law, and banking roles, could be replaced by AI within a few years. Without proper intervention, he said, the speed of change could overwhelm society, creating ripple effects on employment, wages, and inequality.
Call for Oversight and Transparency
Amodei has consistently called for mandatory safety testing, arguing that voluntary measures are insufficient. He advocates for national-level standards requiring companies to disclose testing procedures, risk assessments, and mitigation strategies. Transparency, he insists, is not only ethical but practical, allowing regulators and society to gauge whether AI systems are being managed responsibly.
Industry Response: Mixed Reactions
Not all industry leaders share Amodei’s views. Jensen Huang, CEO of Nvidia, criticized Amodei’s predictions of widespread job loss as overly alarmist. Huang also opposed the idea that only a few companies should develop the most advanced AI, favoring more collaborative approaches.
Anthropic, however, clarified that Amodei never suggested exclusivity in AI development. Instead, he has consistently called for a universal transparency standard across the industry.
Walking the Talk: Responsible Scaling Policy
Anthropic practices its own principles through a Responsible Scaling Policy (RSP), designed to ensure AI safety as systems grow more powerful. Amodei admits it isn’t perfect but views it as a “forcing function” to ensure risk management remains a priority. He remains skeptical that all AI companies can self-regulate without strong external oversight.
A Warning and a Call to Action
Ultimately, Amodei’s message is both cautionary and aspirational. AI holds tremendous potential in medicine, research, and productivity, but ignoring its risks could have profound consequences.
He urges AI developers to acknowledge the technology’s darker side, adopt rigorous testing standards, and embrace public oversight — to prevent repeating the mistakes of industries that concealed their products’ dangers. How the AI industry responds could shape not just the future of technology, but the society that interacts with it.