By 2028, OpenAI wants a machine that conducts science on its own. Not assists. Not accelerates. Conducts.
The San Francisco company has declared that building a fully automated AI researcher is its “north star” for the next several years. The system, slated to arrive in 2028, would be a multi-agent architecture capable of tackling large, complex problems without human direction. Before that, the company plans to deploy an intermediate version: an “autonomous AI research intern,” built to handle a small number of specific research tasks, by September of this year.
Jakub Pachocki, OpenAI’s chief scientist, outlined the plans in an exclusive interview this week.
The intern is the proof of concept. The 2028 system is the actual goal — a fully autonomous, multi-agent research engine. The gap between those two things, in ambition and in engineering difficulty, is enormous, and OpenAI is treating the intern as the first real stress test of whether the broader plan holds together.
One Company, Many Bets
The researcher announcement lands alongside a separate disclosure: OpenAI is building what it describes as a “super app,” merging ChatGPT, a web browser, and a coding tool into a single product, according to reporting from The Verge. Separately, the company is acquiring coding startup Astral to strengthen its Codex model, per Ars Technica. Both moves come as the company reportedly pulls back on peripheral side projects, and as it faces growing competitive pressure in the enterprise market, where Anthropic has gained ground, according to Axios.
The timing of these announcements, stacked together, paints a picture of a company narrowing its bets while simultaneously swinging at something far larger than any product it has shipped before.
Psychedelics and the Limits of Clinical Hype
Elsewhere in science this week, two new studies on psychedelic drugs have complicated a decade’s worth of enthusiasm. Compounds like psilocybin, found in magic mushrooms, have drawn serious research interest as potential treatments for depression, PTSD, addiction, and obesity. The new studies, according to a biotech newsletter analysis, expose structural difficulties in studying these substances at the clinical trial level — difficulties that suggest the field may have moved faster on expectations than on evidence.
The challenge is methodological. Psychedelic trials are notoriously hard to blind: participants generally know whether they received a placebo or an active compound, which distorts outcomes data in ways that are difficult to correct for. That flaw does not invalidate the research, but it does mean that the body of evidence supporting psychedelic therapies is harder to interpret than the volume of positive coverage has implied.
Two other stories circulating this week carry their own weight. Kalshi, the prediction market platform, raised $1 billion at a $22 billion valuation — double its valuation from December — even as Arizona’s attorney general charged the company with illegal gambling, according to Bloomberg and NPR, respectively. And the Pentagon has flagged Anthropic’s foreign workforce as a security risk, citing Chinese employees in particular, according to Axios.
The DOJ, meanwhile, dismantled botnets responsible for the largest distributed denial-of-service attack on record, seizing infrastructure that had infected more than 3 million devices.
This article is a curated summary based on third-party sources.