WEF: Here’s how experts see AI developing over the coming years
by the World Economic Forum | Feb 16, 2023
Will we develop AI machines that surpass human intelligence in all areas, or is this still the stuff of sci-fi fantasy?
This Our World in Data chart shows the responses of experts from recent studies on the subject.
There is significant disagreement between those working in the field about how and when human-level AI will be developed.
But 90% believe it’s plausible that the AI transformation, one of the biggest shifts in humanity’s history, could happen within the next 100 years.
Artificial intelligence (AI) that surpasses our own intelligence sounds like the stuff of science-fiction books or films. What do experts in the field of AI research think about such scenarios? Do they dismiss these ideas as fantasy, or are they taking such prospects seriously?
A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. It would be a machine that is “able to learn to do anything that a human can do”, as Russell and Norvig put it in their textbook on AI.
It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions. It would be able to do the work of a translator, a doctor, an illustrator, a teacher, a therapist, a driver, or the work of an investor.
In recent years, several research teams contacted AI experts and asked them about their expectations for the future of machine intelligence. Such expert surveys are one of the pieces of information that we can rely on to form an idea of what the future of AI might look like.
This chart shows the answers of 356 experts. This is from the most recent study by Katja Grace and her colleagues, conducted in the summer of 2022.
Experts were asked when they believe there is a 50% chance that human-level AI exists. Human-level AI was defined as unaided machines being able to accomplish every task better and more cheaply than human workers. More information about the study can be found in the fold-out box at the end of this text.
Each vertical line in this chart represents the answer of one expert. The fact that there are such large differences in answers makes it clear that experts do not agree on how long it will take until such a system might be developed. A few believe that this level of technology will never be developed. Some think that it’s possible, but it will take a long time. And many believe that it will be developed within the next few decades.
As highlighted in the annotations, half of the experts gave a date before 2061, and 90% gave a date within the next 100 years.
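Summary statistics like “half of the experts gave a date before 2061” are simply percentiles of the individual answers. A minimal sketch of how such figures are computed, using hypothetical forecast years rather than the actual survey data:

```python
# Sketch: computing summary statistics from individual survey answers.
# The forecast years below are hypothetical, NOT the actual survey data.
import statistics

# Hypothetical answers: each year is one expert's 50%-chance date
forecasts = [2030, 2035, 2040, 2050, 2061, 2075, 2090, 2110, 2150]

# Median: half of the experts gave a date at or before this year
median_year = statistics.median(forecasts)

# Share of experts whose date falls within 100 years of the 2022 survey
within_100_years = sum(y <= 2022 + 100 for y in forecasts) / len(forecasts)

print(median_year)        # middle answer of the sorted list
print(within_100_years)   # fraction of answers before 2122
```

The same approach scales to the real survey: sort the 356 answers and read off the 50th and 90th percentiles.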
Other surveys of AI experts come to similar conclusions. In the following visualization, I have added the timelines from two earlier surveys conducted in 2018 and 2019. It is helpful to look at different surveys, as they differ in how they asked the question and how they defined human-level AI. You can find more details about these studies at the end of this text.
In all three surveys, we see large disagreement between experts, and individual experts also express large uncertainty about their own forecasts.
What should we make of the timelines of AI experts?
Expert surveys are one piece of information to consider when we think about the future of AI, but we should not overstate the results of these surveys. Experts in a particular technology are not necessarily experts in making predictions about the future of that technology.
Experts in many fields do not have a good track record in making forecasts about their own field, as researchers including Barbara Mellers, Phil Tetlock, and others have shown. The history of flight includes a striking example of such failure. Wilbur Wright is quoted as saying, “I confess that in 1901, I said to my brother Orville that man would not fly for 50 years.” Two years later, ‘man’ was not only flying, but it was these very men who achieved the feat.
Additionally, these studies often find large ‘framing effects’: two logically identical questions get answered in very different ways depending on how exactly they are worded.
What I do take away from these surveys, however, is that the majority of AI experts take the prospect of very powerful AI technology seriously. It is not the case that AI researchers dismiss extremely powerful AI as mere fantasy.
The large majority thinks that in the coming decades there is an even chance that we will see AI technology that will have a transformative impact on our world. While some have long timelines, many think it is possible that we have very little time before these technologies arrive. Across the three surveys, more than half think that there is a 50% chance that a human-level AI would be developed before some point in the 2060s, a time well within the lifetime of today’s young people.
The forecast of the Metaculus community
In the big visualization on AI timelines below, I have included the forecast by the Metaculus forecaster community.
The forecasters on the online platform Metaculus.com are not experts in AI but people who dedicate their energy to making good forecasts. Research on forecasting has documented that groups of people can assign surprisingly accurate probabilities to future events when given the right incentives and good feedback. To receive this feedback, the online community at Metaculus tracks how well they perform in their forecasts.
What does this group of forecasters expect for the future of AI?
At the time of writing, in November 2022, the forecasters believe that there is a 50/50-chance for an ‘Artificial General Intelligence’ to be ‘devised, tested, and publicly announced’ by the year 2040, less than 20 years from now.
On their page about this specific question, you can find the precise definition of the AI system in question, how the timeline of their forecasts has changed, and the arguments of individual forecasters for how they arrived at their predictions.
The timelines of the Metaculus community have become much shorter recently. The expected timelines shortened by about a decade in the spring of 2022, when several impressive AI breakthroughs happened faster than many had anticipated.
The forecast by Ajeya Cotra
The last forecast shown stems from the research of Ajeya Cotra, who works for the nonprofit Open Philanthropy. In 2020 she published a detailed and influential study asking when the world will see transformative AI. Her timeline is not based on surveys, but on the study of long-term trends in the computation used to train AI systems. I present and discuss the long-run trends in training computation in this companion article.
Cotra estimated that there is a 50% chance that a transformative AI system will become possible and affordable by the year 2050. This is her central estimate in her “median scenario.” Cotra emphasizes that there are substantial uncertainties around this median scenario, and also explored two other, more extreme, scenarios. The timelines for these two scenarios – her “most aggressive plausible” scenario and her “most conservative plausible” scenario – are also shown in the visualization. The span from 2040 to 2090 in Cotra’s “plausible” forecasts highlights that she believes that the uncertainty is large.
The visualization also shows that Cotra updated her forecast two years after its initial publication. In 2022 Cotra published an update in which she shortened her median timeline by a full ten years.
It is important to note that the definitions of the AI systems in question differ very much across these various studies. For example, the system that Cotra speaks about would have a much more transformative impact on the world than the system that the Metaculus forecasters focus on. More details can be found in the appendix and within the respective studies.
What can we learn from the forecasts?
The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra.
There are two big takeaways from these forecasts on AI timelines:
1. There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years, or even months.
There is not just disagreement between experts; individual experts also emphasize the large uncertainty around their own individual estimate. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be very long until we see human-level AI, but it also means that we might have little time to prepare.
2. At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime.
The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote.
We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes.