For decades, the scientific paper has been treated as a distinctly human artifact – a product of curiosity, experimentation, and reasoning. That assumption is now under pressure.
In March 2026, Sakana AI announced that a paper describing its autonomous research system, The AI Scientist, was published in Nature. The publication follows an earlier result in which an AI-generated manuscript produced by The AI Scientist-v2 passed the first round of peer review at an ICLR 2025 workshop. Together, the two milestones mark one of the clearest signs yet that AI systems are beginning to participate meaningfully in parts of the research pipeline. (Nature paper)
What Is The AI Scientist?
The AI Scientist is an end-to-end agentic research system developed by Tokyo-based Sakana AI with collaborators at the University of British Columbia, the Vector Institute, and the University of Oxford. According to the Nature paper, the system can generate research ideas, write code, run experiments, analyze results, create plots, draft a manuscript, and perform its own review of the output. (Nature paper)
The newer AI Scientist-v2 extends that approach with a template-free setup and an agentic tree-search process for experimentation, allowing it to operate across multiple machine-learning domains with less human scaffolding than the original version. (GitHub)
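To make the described loop concrete, the pipeline (ideation, experimentation via tree search, then drafting) can be pictured as a toy program. Everything below is an invented stand-in for illustration: the function names, the mock metric, and the fixed branching are assumptions, not Sakana AI's actual implementation.

```python
# Toy sketch of an idea -> experiment -> draft loop, loosely modeled on the
# stages described for The AI Scientist. All names and numbers are illustrative.
from dataclasses import dataclass


@dataclass
class Idea:
    title: str
    score: float = 0.0


def generate_ideas(n: int) -> list[Idea]:
    # Stand-in for LLM-driven ideation.
    return [Idea(title=f"idea-{i}") for i in range(n)]


def run_experiment(idea: Idea, variant: int) -> float:
    # Stand-in for code generation + execution; returns a mock metric.
    return 0.5 + 0.1 * variant


def tree_search(idea: Idea, branching: int = 3, depth: int = 2) -> float:
    # Toy version of v2's agentic tree search: at each depth, try several
    # experimental variants and keep the best metric seen so far.
    best = 0.0
    for _ in range(depth):
        scores = [run_experiment(idea, v) for v in range(branching)]
        best = max(best, max(scores))
    return best


def pipeline(n_ideas: int = 3) -> str:
    # End-to-end loop: ideate, experiment on each idea, draft a manuscript
    # stub for the best-scoring one.
    ideas = generate_ideas(n_ideas)
    for idea in ideas:
        idea.score = tree_search(idea)
    best = max(ideas, key=lambda i: i.score)
    return f"Draft: {best.title} (best metric {best.score:.2f})"
```

The real system would replace each stand-in with model calls and actual experiment execution; this sketch only shows the control flow the article describes.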
The Two Milestones – and Why the Distinction Matters
One of the most important details in this story is that there were two separate achievements.
First, Sakana AI said that an AI-generated paper created by The AI Scientist-v2 passed peer review at the ICLR 2025 “I Can’t Believe It’s Not Better” workshop. The company says the submission was reviewed under an experimental protocol agreed with the workshop organizers and ICLR leadership, and that the accepted paper was later withdrawn before publication. Nature similarly reports that the AI-generated manuscript passed the first round of peer review for a workshop whose acceptance rate was 70%. (Sakana AI announcement)
Second, a human-authored paper about the system itself was then published in Nature. That article documents the architecture, evaluations, scaling behavior, and limitations of The AI Scientist. In other words, Nature did not publish an entirely AI-authored research paper; it published a paper by researchers describing an AI system that had previously passed workshop peer review in a separate experiment. (Nature paper)
Why It Matters
Even with the caveats, the result matters because it points toward partial automation of the research cycle in fields where experiments can be run computationally. The Nature paper argues that systems like this could accelerate scientific discovery if developed responsibly, especially in areas where rapid iteration matters. (Nature paper)
That does not mean AI can now independently replace scientists across disciplines. But it does suggest that, at least in machine learning, autonomous systems are getting better at moving from idea generation to experimentation to manuscript production in ways that were still speculative not long ago. (Nature paper)
The Caveats
The limitations are substantial. The Nature paper states that The AI Scientist “cannot yet meet the standards of top-tier publications nor even do so consistently for workshops.” It also lists common failure modes including underdeveloped ideas, weak methodological rigor, implementation errors, duplicated figures, and hallucinations such as inaccurate citations. Only one of three submitted AI-generated workshop papers was accepted. (Nature paper)
So this is best understood not as proof that AI has mastered science, but as evidence that AI research agents are becoming credible participants in narrower, computationally tractable parts of scientific work. (Nature paper)
