How might AI change the way we advance human knowledge? Could it change how universities like Cambridge carry out one of their core functions: research? Could AI be a technological transformation unlike anything we’ve seen before?
A group of Cambridge researchers say the answer to the last two questions is yes. This is not because of some mysterious breakthrough or wishful thinking, but because of the relentless march of computational power. “What’s changed isn’t the methods – we’ve had most of those since the 1970s,” said Dr James Fergusson from Cambridge’s Department of Applied Mathematics and Theoretical Physics (DAMTP). “What’s changed is that we now have enough computing power and data to make them work.”
If Moore’s Law holds – as it has for the past 50 years – computing power will keep growing by a factor of 30 every decade. Even without smarter algorithms, hardware power alone will drive AI systems to become exponentially more capable and more embedded in our lives in the coming years.
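That factor of 30 follows directly from the usual statement of Moore’s Law, that computing capacity doubles roughly every two years. A quick back-of-the-envelope calculation (in Python, purely for illustration) makes the arithmetic explicit:

```python
# Back-of-the-envelope check of the "factor of ~30 per decade" figure,
# assuming the classic Moore's Law doubling period of roughly two years.
doubling_period_years = 2
decade_years = 10

growth_per_decade = 2 ** (decade_years / doubling_period_years)
print(f"Growth over one decade: about {growth_per_decade:.0f}x")  # -> about 32x
```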
“Most of the AI tools in widespread use today are essentially mimicking things humans already do, but just doing them faster,” said Fergusson. “We want to really push the maths of AI, so we can get it to do new things. We don’t want it to mimic data, but to tell you how it works and break it down. That’s where the real change will happen.”

Dr James Fergusson
In addition to his role in DAMTP, Fergusson is Director of the Infosys-Cambridge AI Centre, which opened in autumn 2024 at Infosys’ London premises in Canary Wharf. The Centre is part of a wider University partnership with the multinational technology and consulting company, intended to make Cambridge’s cutting-edge AI research accessible to industry.
“In the past, companies have found it hard to navigate Cambridge,” said Fergusson. “We want to create a gateway: somewhere businesses can come to ask their questions and find out what AI can really do for them, and where we can learn what challenges they face. We hope this partnership with Infosys is a model of how that could be done.”
There are three main research themes in the AI Centre:
- AI-enhanced simulations for improving our understanding of physical systems, which will help AI ‘think’ like scientists, including initiatives such as the Polymathic AI project
- Mathematical AI, or theoretical physics for AI, which will look at how we take ideas from theoretical physics and use them to understand how neural networks work and how they learn; and how we extract knowledge from machine learning systems
- Agentic AI systems, a way of automating much of the process of scientific research, such as processing data, building software and writing things up.
Dr Boris Bolliet of Cambridge’s Cavendish Laboratory, Agentic AI Research Lead at the Infosys-Cambridge AI Centre, is developing these custom multi-agent systems, built on large language models (LLMs) such as ChatGPT, Claude and Gemini. Bolliet and his team use multi-agent systems such as CMBAgent and DENARIO to plan and execute complex tasks, from financial simulations and cosmological data analysis to autonomous research and paper writing.
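The article does not describe how CMBAgent or DENARIO are built internally, but the broad shape of such an LLM-driven pipeline, with a planner that decomposes a goal into steps and an executor that works through them, can be sketched in a few lines of Python. Everything below (the `call_llm` stub, the prompts, the function names) is a hypothetical illustration of that general pattern, not the actual API of either tool:

```python
# Hypothetical sketch of an LLM-based planner/executor pipeline.
# `call_llm` stands in for any chat-completion backend (ChatGPT, Claude, Gemini, ...).

def call_llm(role: str, prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM API.
    # A canned reply is returned here so the sketch runs without any API key.
    return f"[{role}] reply to: {prompt[:60]}..."

def run_pipeline(goal: str) -> list[str]:
    # 1. A 'planner' agent breaks the overall goal into smaller, concrete tasks.
    plan = call_llm("planner: break the goal into numbered, self-contained tasks", goal)
    tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. An 'executor' agent works through each task, seeing earlier results as context.
    results: list[str] = []
    for task in tasks:
        context = "\n".join(results)
        results.append(call_llm(
            "executor: complete the task using the context provided",
            f"Context so far:\n{context}\n\nTask:\n{task}",
        ))
    return results

if __name__ == "__main__":
    for step in run_pipeline("Analyse a cosmological dataset and draft a short write-up"):
        print(step)
```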
“I believe that a lot of things are going to change in the way we do research,” said Bolliet, whose research background is in cosmology and computational astrophysics. “Maybe that means that a lot of repetitive, time-consuming tasks that I spend a lot of my energy on will soon be automated, giving me more time and space to do more interesting things.”

Dr Boris Bolliet
Multi-agent systems break complex problems into smaller tasks, verify their own outputs and work like digital research assistants. They are more robust than single AI models because they can plan, review and cross-check their own work.
The multi-agent model allows Bolliet to assign a role or even a ‘personality’ to each agent. For example, one agent could be a researcher and one an engineer, or one could be an idea generator and one could be a ‘hater’, relentlessly challenging and criticising to make the end product more robust.
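A minimal sketch of that generator-and-critic pattern, reusing the same hypothetical `call_llm` stub as above and with made-up prompts and stopping rule, might look like this:

```python
# Hypothetical generator/critic loop: one agent drafts an answer, a 'hater' agent
# relentlessly criticises it, and the draft is revised until the critic is satisfied
# (or a round limit is reached). Prompts and the stopping rule are illustrative only.

def call_llm(role: str, prompt: str) -> str:
    # Placeholder for a real LLM call; each 'agent' is the same model with a different persona.
    return f"[{role}] reply to: {prompt[:60]}..."

def refine(question: str, max_rounds: int = 3) -> str:
    draft = call_llm("idea generator", question)
    for _ in range(max_rounds):
        critique = call_llm("hater: find every flaw in this draft", draft)
        if "no major flaws" in critique.lower():  # made-up convention for 'critic is satisfied'
            return draft
        draft = call_llm(
            "researcher: revise the draft to address the critique",
            f"Draft:\n{draft}\n\nCritique:\n{critique}",
        )
    return draft
```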
Bolliet says that, using these multi-agent systems, AI will not only be able to generate research but also to review and correct the scientific literature at scale. And unlike humans, it will be able to jump seamlessly across academic fields, from astronomy to oncology, to find the best solutions.
“We want to use AI to accelerate the exchange of information across fields,” said Bolliet. “AI agents don’t have these barriers. They’re not stuck in one discipline like we human researchers are.”
Bolliet says that while we often think of LLMs as black boxes prone to hallucinations, using the multi-agent model allows him to check every step of the research process. “I can see every single step that has occurred and go through the code line by line,” he said. “I can reproduce the research entirely, which is not necessarily true when you talk to your colleagues and ask them what they did in their paper.”
Using tools such as multi-agent systems, the coming AI revolution is poised to replace many tasks that require human intelligence – everything from legal drafting to scientific research. For many people, this raises a worrying question: if machines can do our jobs, what’s left for us to do?
The answer, Fergusson says, is surprisingly hopeful. As AI systems increasingly take over routine and specialised tasks – writing code, analysing data, automating customer service – the most valuable commodity becomes something machines can’t yet replicate: original ideas.
“To paraphrase Edison, AI might handle 99% of the perspiration, but it still takes a human for that 1% of inspiration,” he said. “This shift could unlock extraordinary potential.”
For example, a student who has a great idea for an app but doesn’t know how to code can now describe the idea to an AI, which will write the code, test it and launch it. As implementation becomes easier, the emphasis will move from skills to creativity. The future, Fergusson says, will belong to those who can ask the best questions.
“There is a transformational opportunity to change the way scientific research is done using AI,” he said. “It will no longer be something only humans can do, but something we do in collaboration with machines that can carry out high-level scientific analysis, fast and accurately.”
The research happening at the AI Centre is also relevant to Infosys and its clients because, ultimately, the problems they are trying to solve are the same. How do we turn the enormous volumes of data at our fingertips into real knowledge?
“Many of Infosys’ clients want to have explainability,” said Fergusson. “They want to have simulations that can run faster and can be trusted. They want to use the power of multi-agent systems to automate business processes for knowledge processing tasks. Infosys is the connector between the centre and the business world, so this knowledge can be shared globally across all sectors.”
“The universality of the challenge of AI is bringing science and industry together – we all face the same challenges in adopting it and using it for our work.”
Of course, AI is far from perfect. The amount of water and energy used by the data centres powering most AI technology is gigantic, and risks derailing the progress humans are making towards achieving net zero.
And language models still make things up, or forget their own logic in longer responses. They’re more convincing than correct. But Fergusson says these flaws can be managed – especially with agent-based systems that check each other’s work and bring in outside data sources.
He also warns against viewing AI as purely hype or purely harmful. “Most people’s experience of AI is a chatbot that’s trying and failing to get you to buy a washing machine,” said Fergusson. “But under the surface, it’s changing everything, from scientific discovery to creative industries.”
The text in this work is licensed under a Creative Commons Attribution 4.0 International License
Source (original article): University of Cambridge