For each possible "percentage of the way done" that AI research could be, measured as the percentage of the necessary insights already discovered, what is the probability that AI research is not yet that far along?
[Interactive plot: cumulative distribution. Y-axis: P(no more than this much of the way done). X-axis: proportion of required insights that have been discovered.]
Instead of drawing a cumulative distribution function, you can use a pre-set prior. These priors are based on the Pareto distribution. To make the choice of parameter more intuitive, we parameterize the distribution in terms of a probability q, equal to the probability that a doubling in the number of insights (starting from the minimum number of insights) would result in a sufficient set of insights.
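Concretely, if N is the number of required insights, a Pareto prior with scale n_min (the minimum plausible number) and shape α has P(N ≤ n) = 1 − (n_min/n)^α for n ≥ n_min. Setting q = P(N ≤ 2·n_min) = 1 − 2^(−α) gives α = −log2(1 − q). Here is a minimal sketch of this prior; the function names are ours, not the page's:

```python
import math

def pareto_shape(q: float) -> float:
    """Shape alpha such that doubling from the minimum number of
    insights suffices with probability q: 1 - 2**(-alpha) = q."""
    return -math.log2(1.0 - q)

def p_required_at_most(n: float, n_min: float, q: float) -> float:
    """P(number of required insights <= n) under the Pareto prior."""
    if n < n_min:
        return 0.0
    return 1.0 - (n_min / n) ** pareto_shape(q)
```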
[Input: minimum plausible number of insights required]
Assuming a linear increase in the number of discovered insights over time, these beliefs imply the following cumulative distribution function for the time at which all required insights have been discovered.

[Interactive plot; control: adjust the maximum year displayed.]
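To sketch how a plot like this can be derived (under one reading of the model, which conditions on the required insights not all having been discovered yet): if insights accumulate at a constant rate per year, the count at a given year follows a line, and the probability of being done by that year comes from the prior above. This reuses p_required_at_most from the earlier sketch, and all concrete numbers below are illustrative placeholders, not the page's data:

```python
def p_done_by_year(year: float, year_now: float, n_now: float,
                   rate: float, n_min: float, q: float) -> float:
    """P(all required insights are discovered by `year`), conditioned
    on their not all having been discovered by `year_now`, assuming
    `rate` insights are discovered per year."""
    n_at_year = n_now + rate * (year - year_now)
    f_now = p_required_at_most(n_now, n_min, q)
    f_then = p_required_at_most(n_at_year, n_min, q)
    return max(0.0, (f_then - f_now) / (1.0 - f_now))

# Illustrative placeholder values only:
print(p_done_by_year(2040, year_now=2020, n_now=80,
                     rate=1.0, n_min=85, q=0.5))
```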
How was this data generated? Jessica Taylor, Jack Gallagher, and Baeo Maltinsky spent a few hours generating a list of AI insights that seemed to be of around the same order of significance as, or more significant than, the LSTM insight (specifically, the insight of inventing the LSTM given that RNNs had already been invented). The following is a plot of the number of AI insights in our list over time since 1850.
The model assumes that insights accumulate linearly over time. The increase has been roughly linear since 1945, but this could change due to depletion of low-hanging fruit, expanding research avenues, changes in the number and effectiveness of research institutions, and so on. The model does not distinguish between insights in our list (which we selected according to a subjective estimate of importance) and the insights that are specifically required; however, if the fraction of discovered insights that are actually required stays roughly constant over time, this does not significantly affect the timeline.
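As an illustration of the linearity assumption, the per-year discovery rate can be estimated by a least-squares line through the cumulative counts since 1945. The numbers below are made-up placeholders; the real data are in the linked document.

```python
import numpy as np

# Hypothetical (year, cumulative insight count) pairs:
years = np.array([1945, 1960, 1975, 1990, 2005, 2020])
counts = np.array([10, 20, 31, 42, 54, 66])

# Least-squares fit of count = rate * year + intercept.
rate, intercept = np.polyfit(years, counts, 1)
print(f"~{rate:.2f} insights discovered per year")
```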
The list of insights and their years can be found in this document.