Sam Altman Says OpenAI Will Have a ‘Legitimate AI Researcher’ by 2028

By [Author Name], Technology Correspondent
A Bold Vision for the Future of AI
In a prediction that could redefine how humans and machines work together, OpenAI CEO Sam Altman has said that by 2028 the company expects to have built a “legitimate AI researcher” — a system capable of independently conducting scientific research, forming hypotheses, and expanding human knowledge without constant human direction.
Altman’s statement, shared at a recent technology conference, underscores OpenAI’s growing confidence in the rapid evolution of artificial intelligence, especially in autonomous reasoning and scientific discovery. If achieved, this milestone could mark a historic moment — when machines begin to play an active role in generating entirely new knowledge.
From Assistant to Independent Thinker
“Right now, AI can summarize papers, generate ideas, and even simulate certain kinds of research workflows,” Altman said. “But by 2028, we expect to see a system that can design and execute research projects at a level comparable to a skilled human scientist — one that we could reasonably call a legitimate researcher.”
Today’s models — like GPT-4 and GPT-5 — excel at analyzing data and generating coherent content but still rely on human supervision. Humans define their goals, provide context, and judge their results. What Altman envisions is a system that takes initiative:
- Identifying gaps in current scientific understanding
- Proposing new hypotheses
- Designing experiments to test them
- Interpreting the outcomes independently
This kind of AI would represent a massive leap beyond current generative technology, combining reasoning, creativity, and strategic planning in a single system.
The Next Leap in AI Research
OpenAI’s journey toward more autonomous AI systems has been unfolding for years. From the launch of ChatGPT in 2022 to the evolution of the multimodal GPT-5, each generation has broadened what AI can do — from writing and coding to analyzing complex data and assisting with advanced workflows.
A “legitimate AI researcher” would take that evolution to the next level. It wouldn’t just assist scientists — it would act as one. Altman describes this as part of the broader march toward Artificial General Intelligence (AGI) — AI that can perform a full range of intellectual tasks as well as, or even better than, humans.
According to insiders, OpenAI has already begun internal experiments with autonomous research agents — early prototypes trained in domains like molecular biology, materials science, and mathematics. These systems can read thousands of scientific papers, pinpoint gaps in existing research, and even suggest potential experimental approaches.
One OpenAI engineer described the project as “a first step toward an AI that can make real discoveries — not just predict what’s in the data, but ask new questions.”
Transforming Science and Innovation
If OpenAI’s vision becomes reality, the impact on global science could be revolutionary. An AI capable of connecting insights across vast fields could accelerate discovery at unprecedented speed.
Imagine an AI that can:
- Review millions of research papers in hours
- Identify hidden links between unrelated studies
- Generate new hypotheses that humans might overlook
Such capabilities could reshape industries — from drug discovery and climate research to engineering and quantum physics.
In pharmaceuticals, for instance, an AI researcher could model how chemical compounds interact with the human body, simulate outcomes, and design new drugs — potentially slashing years off the research cycle. In physics or math, it could explore theoretical questions that humans haven’t yet conceived.
Yet this power also raises important ethical and ownership questions:
- Who owns the discoveries made by AI?
- Who is accountable for its outcomes?
- How do we prevent AI from exploring dangerous or unethical research directions?
The Ethics and Safety Challenge
Altman has consistently emphasized that AI development must remain safe, transparent, and aligned with human values.
“If we get this right,” he said, “it could accelerate human progress beyond anything we’ve seen. But if we get it wrong — if we build something that pursues goals misaligned with our intentions — the consequences could be serious.”
To address these risks, OpenAI is heavily investing in AI alignment research — ensuring that advanced systems understand and respect human values and ethical constraints. The company also supports global cooperation and regulatory frameworks for safe AI deployment.
Experts, however, remain divided on Altman’s ambitious timeline.
- Dr. Emily Zhang, a computational scientist at Stanford, cautioned that “true scientific reasoning involves judgment, skepticism, and uncertainty — qualities that are still uniquely human.”
- Dr. David Romero of ETH Zurich disagreed, noting, “Given how quickly AI has advanced, a system capable of genuine research contributions by 2028 isn’t far-fetched.”
OpenAI’s Larger Ambition
Altman’s forecast fits neatly within OpenAI’s broader strategy to lead the global race toward AGI. The company continues to expand its infrastructure, partnerships, and developer ecosystem to support increasingly capable AI systems.
Recent initiatives — from embedding GPT models into everyday tools to collaborating with research institutions — highlight OpenAI’s mission to make AI an essential part of society’s knowledge engine.
The “legitimate AI researcher” vision may be the ultimate expression of that mission — a system that not only supports human intelligence but also extends it.
The Dawn of Human–Machine Collaboration
Whether or not OpenAI reaches this milestone by 2028, the direction is clear: AI is evolving from a tool to a collaborator.
Altman ended his remarks with a forward-looking note:
“The most exciting future isn’t one where machines replace scientists, but one where they work beside us — exploring the unknown together.”
If OpenAI achieves its goal, the next great scientific revolution may not be driven by humans alone but by humans and machines working hand in hand. A “legitimate AI researcher” would mark more than a technological breakthrough — it would mark a turning point in how discovery itself is done, opening an age of shared intelligence and exploration.