OpenAI Plans Fully Automated AI Researcher by 2028

By alex2404

OpenAI has identified its next major research target: a fully automated AI system capable of tackling large, complex scientific and technical problems without human involvement.

According to the announcement, the company plans to release “an autonomous AI research intern” by September — a system that can independently handle tasks that would take a human researcher a few days to complete.

That intern is the first stage. By 2028, OpenAI plans to deploy a fully automated multi-agent research system it calls an AI researcher. The firm says this system will be able to address problems too large or complex for humans alone, spanning mathematics, physics, biology, chemistry, and business and policy challenges.

Jakub Pachocki, OpenAI's chief scientist, described the goal in direct terms: “I think we will get to a point where you kind of have a whole research lab in a data center.” He says the company is close to building models that can work coherently for indefinite stretches, much as people do, while humans continue to set the goals.

What Already Exists

Pachocki points to Codex, an agent-based tool OpenAI released in January, as an early version of what the AI researcher will eventually become. The tool can generate code, analyze documents, produce charts, and compile inbox and social media digests. The company says most of its technical staff now use it in their daily work.

“I expect Codex to get fundamentally better,” Pachocki said. The path to an automated researcher, he argues, runs through extended autonomous operation — systems that require less human guidance over longer stretches of time. He draws a direct line from GPT-3 in 2020 to GPT-4 in 2023 as evidence that broad capability gains naturally extend how long a model can work on a problem without assistance.

Doug Downey, a research scientist at the Allen Institute for AI with no affiliation to OpenAI, says the ambition is widely shared across the field. “There are a lot of people excited about building systems that can do more long-running scientific research,” he said, attributing much of that momentum to the demonstrated usefulness of coding agents. He framed the central question as whether those same capabilities can extend beyond coding into broader science.

The Competitive Context

OpenAI is pursuing this goal under real competitive pressure. Anthropic and Google DeepMind are both advancing agent-based tools — Anthropic has released Claude Code and Claude Cowork as comparable offerings.

The stated ambitions at the top of the AI industry are not modest. Anthropic CEO Dario Amodei has described building the equivalent of a country of geniuses in a data center. Sam Altman has said he wants to cure cancer. Demis Hassabis has cited solving humanity’s hardest problems as the founding purpose of DeepMind.

Pachocki’s position is that OpenAI now has most of what it needs to make those ambitions a reality — and that the AI researcher is the clearest path to getting there.


This article is a curated summary based on third-party sources.
