Should universities take greater control of AI research?

Illustration of an AI robot and people

The development of AI is currently largely in the hands of major American tech companies, and that is a bad thing, according to leading AI experts such as Ann Dooms and Luc Steels. As a result, we have become far too dependent on those companies and the development of AI is being steered in a single direction. According to them, it was precisely the many detours that research took in the past that led to remarkable results. In their book History of Ideas in the Science of AI, they also argue that universities should continue to play an important role in AI research.

Ask VUB professor Ann Dooms about the history of AI and you are treated to a thorough explanation of mathematics, mechanics and philosophy, in which the Babylonians, Leonardo da Vinci, Blaise Pascal, Charles Babbage and Ada Lovelace all make an appearance, eventually leading to Alan Turing. In his famous 1950 article Computing Machinery and Intelligence, he proposed the so-called Turing test. In that experiment, a conversation is used to determine whether one is communicating with a human or a machine. If the difference can no longer be detected, one could say that the machine displays intelligent behaviour. But according to Dooms, that test is often misinterpreted today.

“The Turing test was never meant as a competition like ‘Who has the smartest computer?’,” she says. “It was more a pragmatic way of addressing a philosophical problem. Because in reality, we still don’t know exactly what ‘thinking’ means.”

It is one of many moments when a scientist unintentionally made a significant contribution to the development of AI. “Ada Lovelace, for example, was looking for a method to automate the work of calculating machines. At the time, you still had to carry out quite a few steps in such calculations yourself. At one point she became inspired by the punched cards of the Jacquard loom, which were used to automatically create patterns in textiles. In doing so, she essentially invented programming.”

Gold at the mathematics olympiad for… Google

The development of AI has accelerated rapidly in recent years due to the growth of computing power and the rise of the internet, which gives us access to enormous amounts of data. Specialised graphics chips, originally developed for the gaming industry, also play a crucial role. They make it possible to train AI systems with millions of examples in a relatively short time. This eventually led to the large language models that power generative AI today, but according to Dooms this still remains mainly a statistical approach to language. “These models recognise patterns in huge amounts of text,” she says. “But they do not yet understand language in the way humans do.”

“Generative AI can produce impressive results, but true understanding, reasoning and insight remain largely unresolved challenges.”

The progress currently being made in the development of AI is phenomenal. Some AI systems, for example, take part in mathematics olympiads and even win gold medals. “That is possible with so-called neurosymbolic AI: systems that discover patterns in data, such as language, while also being able to reason using formal rules. An LLM can excel here by identifying patterns in masses of solved exercises, allowing it to propose a solution method, while at the same time verifying it through formal calculations. But that does not mean computers are already smarter than humans. Human intuition still works in a fundamentally different way, because human participants achieve the same results with far less training and computing power. Generative AI can produce impressive results, but true understanding, reasoning and insight remain largely unresolved challenges. There is still a lot of work to be done in that area, but if we focus solely on further optimising LLMs, we will never get there.”
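The propose-and-verify division of labour Dooms describes can be made concrete with a deliberately tiny sketch (my own illustration, not code from the book): a "proposer", standing in for a pattern-matching LLM, suggests candidate answers, and a symbolic "verifier" accepts only those that pass an exact formal check. Here the toy task is finding the integer roots of x² − 5x + 6 = 0.

```python
# Toy sketch of the neurosymbolic propose-and-verify loop:
# a proposer suggests candidates, a verifier checks them formally.

def proposer():
    # In a real system this would be a learned model ranking likely answers;
    # here it simply enumerates a small candidate pool.
    yield from range(-10, 11)

def verifier(x):
    # Formal check: exact integer arithmetic, no approximation, no guessing.
    return x * x - 5 * x + 6 == 0

def solve():
    # Keep only the proposals that survive verification.
    return sorted(x for x in proposer() if verifier(x))

print(solve())  # -> [2, 3]
```

The point of the split is that the proposer may be fallible, because every suggestion is filtered through a check that cannot hallucinate.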

Language as an ant trail

According to Dooms, the further development of artificial intelligence should take its cue from the way humans solve mathematical problems. Mathematicians often find solutions by inventing new mathematics, and there is a parallel here with how language evolves.

This brings us to Remi van Trijp, one of the other authors of the book. Van Trijp is a researcher at the Sony Computer Science Laboratories in Paris, but was previously affiliated with the Vrije Universiteit Brussel. For more than twenty years he has been studying the relationship between language and artificial intelligence, a line of research originally started by AI pioneer Luc Steels at the VUB.

Van Trijp sees language as a so-called emergent system: a system that arises from local interactions between individuals. “A good example is how ants form a trail,” says Van Trijp. “No single ant decides where the trail should go. Through local interactions, a global structure emerges spontaneously. Language works in a similar way; people develop words, rules and meanings through continuous communication with one another. That is how language constantly evolves.”
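The mechanism Van Trijp describes, a global convention emerging from purely local interactions, is the core of the "naming game" experiments pioneered by Luc Steels. The following minimal simulation (my own illustration, not code from the book) shows agents with no central authority converging on a shared word through pairwise exchanges.

```python
import random

# Minimal naming game: agents converge on a shared word for one object
# purely through local speaker/hearer interactions.

random.seed(42)  # fixed seed for reproducibility

def naming_game(n_agents=20, rounds=3000):
    vocab = [set() for _ in range(n_agents)]   # each agent's known words
    counter = 0                                # counter for inventing fresh words
    for _ in range(rounds):
        s, h = random.sample(range(n_agents), 2)   # pick speaker and hearer
        if not vocab[s]:                           # speaker invents a word if needed
            counter += 1
            vocab[s].add(f"word{counter}")
        word = random.choice(sorted(vocab[s]))
        if word in vocab[h]:
            # communicative success: both agents drop all competing words
            vocab[s] = {word}
            vocab[h] = {word}
        else:
            vocab[h].add(word)                     # failure: hearer learns the word
    return vocab

final = naming_game()
distinct = set().union(*final)
print(sorted(distinct))  # the surviving vocabulary, typically a single shared word
```

No ant decides where the trail goes, and no agent here decides what the word will be; the shared convention is a side effect of many local interactions, which is what makes language an emergent, and constantly moving, system.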

“By asking different questions than the big technology companies do, we may be able to find the next breakthroughs.”

Both LLMs and neurosymbolic AI fall short in that respect, he argues. “Language technology always tries to freeze language, while language is constantly evolving. That is why such systems will always fall short compared with real people. What we really need to understand are the processes that continually change language.”

And that, of course, requires fundamental research. “By asking different questions than the big technology companies do, we may be able to find the next breakthroughs.”

After the hype: the disappointment

Luc Steels, one of the leading AI experts our country has produced, also sees an important role for universities in the development of AI. Remarkably, Steels began his career as a student of language and literature. Computers were still rare at the time, but Steels saw their research potential, particularly in computer language processing.

One of the first milestones he remembers is a system developed by MIT researcher Terry Winograd in the early 1970s. His program could understand natural language and carry out commands in a simple virtual world. “You could type something like ‘pick up the red block’ and the system would make sure a robotic arm actually picked up the correct block. Today that sounds simple, but at a time when computers still worked with punched cards, it was revolutionary,” Steels explains.

While AI was long developed mainly in universities, the centre of gravity now lies with large technology companies. Generative AI in particular is developed almost entirely by industrial players. According to Steels, there is a clear reason for that: scale. “The techniques themselves have often existed for decades, but only with enormous amounts of data and computing power can they truly be scaled up. That requires huge data centres, and that infrastructure demands investments on a scale far beyond what universities can afford.”

“Hallucination in generative AI is not a temporary problem; it is built into the technology.”

Many companies hope that generative AI will eventually lead to Artificial General Intelligence (AGI): systems that perform as well as, or better than, humans across all domains. “Whoever can develop such a technology potentially controls a large part of the economy,” he says. “That is why companies are now investing billions. But it is far from certain that this promise can actually be realised.”

According to Steels, the current AI wave shows characteristics of earlier technological hypes. “There will probably be another period of disappointment,” he predicts. “We have seen this before, for example with the expert systems of the 1980s. An important problem with generative AI is hallucination: systems produce convincing-sounding but incorrect information. That is not a temporary problem; it is built into the technology.”

According to Steels, universities still have a crucial role to play: not in building the largest AI models, but in exploring new ideas. “Universities need to look twenty years ahead,” he says. “Not at what already works today, but at what may become possible tomorrow. If we in Europe choose the right strategy and invest in research, we can once again play a leading role in the next generation of AI.”

More about "History of Ideas in the Science of AI"

History of Ideas in the Science of AI was developed within the Willy Calewaert Chair of deMens.nu, awarded to VUB emeritus professor and AI pioneer Luc Steels. It is published by VUBPress and is available in print from the authors and digitally via Zenodo, Apple Books and Google Books.

Ann Dooms is professor at the VUB Department Wiskunde & Data Science (WIDS), where she currently leads the research group Mathematics & Data Science (MADS). Her expertise lies in Digital Mathematics (DIMA), the theoretical foundation of pattern recognition by computers, with applications in medical image processing, document and painting analysis, cryptography, and artificial intelligence. Dooms holds various board positions, including chair of the European Mathematical Society's Education Committee and vice-chair of the scientific councils of BELSPO and of Defence, and she regularly writes about mathematics for a broad audience.

Portrait of Ann Dooms

Luc Steels is a Belgian pioneer in artificial intelligence and professor emeritus at the Vrije Universiteit Brussel. He founded one of the first AI laboratories in Europe and gained international renown for his research into robotics, language and artificial agents. His work on the evolution of language and emergent communication is considered groundbreaking worldwide. 

Portrait of Luc Steels

Remi van Trijp is a researcher at the Sony Computer Science Laboratories in Paris and a former researcher at the Vrije Universiteit Brussel. His work lies at the intersection of artificial intelligence, linguistics and complex systems. He studies how language can emerge and evolve through interaction between artificial agents, building on the research tradition on emergent language introduced by Luc Steels.

Portrait of Remi van Trijp

In this article:

  • Why are AI experts concerned about who is currently in control of AI development?
  • Why do current AI systems fall short when it comes to genuine understanding and reasoning?
  • Are we at risk of hitting a dead end if we continue to rely on the same type of AI models?