Median Group

Revisiting the Insights model

Note: This demo is in beta, and you may experience issues such as strange numerical behavior at this time.

Last year, we released our insights-based model that generated a projected timeline using historical data and a prior distribution. We’ve revisited it to address its limitations and improve the data it draws from.

The model rests on the assumption that progress in AI depends on accumulating insights: fundamental advances in our understanding that allow capability to improve without an increase in the resources expended. This choice attempts to separate the effects of genuine technological advancement from the effects of devoting more computing power to a problem, both of which can increase the capacity of machine intelligence to solve complex problems. Computational power is an expensive, finite resource, and without a paradigm-shifting improvement in computing itself, allocating more of that power will not, on its own, be enough to continue advancing AI’s problem-solving capabilities.

The interactive model below provides two methods of capturing a prior about how many more advances in understanding are required to achieve human-level machine intelligence. Based on that prior, and on the pace of insight discovery during a particular historical period, we compute a probability distribution over the time at which humans will develop human-level AI. Results of this calculation are shown in the “Implied timeline” graph below.
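At a high level, the calculation can be read as a Monte Carlo simulation: sample a total number of required insights from the Step 1 prior, project the pace of discovery selected in Step 2, and record the year in which the projection first reaches the sampled requirement. The Python sketch below illustrates that reading; the helpers sample_required_insights and project_cumulative_insights are hypothetical stand-ins for Steps 1 and 2, not functions from the demo.

    import numpy as np

    def implied_timeline(sample_required_insights, project_cumulative_insights,
                         insights_so_far, years, n_samples=10_000):
        """Monte Carlo sketch of the implied-timeline calculation.

        sample_required_insights() -- draws a total number of insights needed for
            human-level AI from the prior chosen in Step 1 (hypothetical helper).
        project_cumulative_insights(year) -- cumulative insights expected by `year`,
            extrapolated from the period chosen in Step 2 (hypothetical helper).
        insights_so_far -- number of insights discovered so far.
        years -- future years to scan, in increasing order.
        """
        arrival_years = []
        for _ in range(n_samples):
            required = sample_required_insights()
            # Condition on human-level AI not having been reached yet.
            if required <= insights_so_far:
                continue
            # Record the first projected year in which the cumulative count
            # reaches the sampled requirement.
            for year in years:
                if project_cumulative_insights(year) >= required:
                    arrival_years.append(year)
                    break
        # The histogram of arrival years approximates the implied timeline.
        return np.array(arrival_years)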

Step 1: Specify a prior for current progress

Option A: Draw a distribution

For each possible “percentage of the way done” AI research could be, measured as the percentage of the necessary insights already discovered, what is the probability that AI research is not yet that far along?

The graph below allows you to draw a distribution of how likely it is we have achieved a particular portion of the insights required for human-level machine intelligence.
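One way to turn a hand-drawn curve like this into samples is inverse-transform sampling. The sketch below assumes a placeholder curve and a placeholder count of insights discovered so far; neither comes from the demo or its dataset.

    import numpy as np

    rng = np.random.default_rng()

    # Placeholder for a hand-drawn curve: P(research is not yet fraction p done),
    # i.e. the CDF of the fraction of required insights already discovered.
    fractions = np.linspace(0.01, 1.0, 100)
    drawn_cdf = fractions ** 0.5          # stands in for whatever the user draws

    def sample_fraction_done():
        """Inverse-transform sample of the fraction of insights already found."""
        u = rng.uniform()
        return np.interp(u, drawn_cdf, fractions)

    # A sampled fraction implies a total requirement, given a count of insights
    # discovered so far (the 100 here is a placeholder, not the dataset's value).
    insights_so_far = 100
    required_total = insights_so_far / sample_fraction_done()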

Option B: Pre-set priors from Pareto distribution

Instead of drawing a cumulative distribution function, you can use a pre-set prior based on a Pareto distribution.

To make the choice of Pareto distribution more intuitive, we parameterize the distribution in terms of a probability q, equal to the probability that a doubling of the number of insights (starting from the minimum number of insights) would yield a sufficient set of insights. q can be set directly, or we can sample from a mixture of Pareto distributions whose q parameters are drawn from a uniform distribution or a beta distribution.
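As we read this parameterization, a Pareto prior with minimum x_m and shape alpha assigns probability 1 - 2^(-alpha) to the requirement being at most 2 * x_m, so q and alpha are related by alpha = -log2(1 - q). A minimal sketch of that conversion, and of sampling from the resulting prior (our reconstruction, not the demo’s code):

    import numpy as np

    def pareto_shape_from_q(q):
        """Shape alpha such that P(requirement <= 2 * minimum) = q for a Pareto
        prior with that minimum (our reading of the parameterization)."""
        return -np.log2(1.0 - q)

    def sample_required_insights(q, min_insights, rng=np.random.default_rng()):
        """Draw a total number of required insights from the implied Pareto prior."""
        alpha = pareto_shape_from_q(q)
        # numpy's pareto() samples (X / x_m) - 1 for a Pareto with minimum x_m = 1,
        # so shift and rescale to get a minimum of min_insights.
        return min_insights * (1.0 + rng.pareto(alpha))

    # Example: q = 0.5 gives alpha = 1, i.e. a doubling suffices half the time.
    print(pareto_shape_from_q(0.5))   # -> 1.0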

Set q directly


Sample q uniformly over (0,1)

Sample q with Beta(α, β)


Note: The simulator can be very slow for larger values of q, as most of the samples need to be thrown away.
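The mixture options can be read as a two-stage scheme: draw q from the chosen hyperprior (uniform or Beta), then draw the required number of insights from the corresponding Pareto prior. The sketch below also makes explicit one guess at why large q is slow: draws implying human-level AI should already have arrived are rejected, and with a large q most draws land near the minimum. Both the rejection rule and the helper names here are assumptions, not the demo’s code.

    import numpy as np

    def sample_required_mixture(min_insights, insights_so_far,
                                alpha=2.0, beta=3.0, use_beta=True,
                                rng=np.random.default_rng()):
        """Two-stage sample: draw q from a hyperprior (uniform or Beta), then draw
        the required number of insights from the corresponding Pareto prior.

        Draws with requirement <= insights_so_far are rejected -- our guess at why
        the simulator slows down for large q, where most samples land near the
        minimum and get thrown away.
        """
        while True:
            q = rng.beta(alpha, beta) if use_beta else rng.uniform()
            shape = -np.log2(1.0 - q)                 # q -> Pareto shape, as above
            required = min_insights * (1.0 + rng.pareto(shape))
            if required > insights_so_far:
                return required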

Step 2: Specify pace of progress

Which period in history is most representative of the future pace of AI insight discovery?

The graph below plots the cumulative number of insights discovered over time and allows selection of a particular period of history in AI research. The curve fitted to that period (linear, exponential, or sigmoidal) is used to project how discoveries will accumulate in the future.
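As a rough illustration of the projection step, the sketch below fits an exponential curve (one of the three families mentioned) to a slice of the cumulative series and extrapolates it; the year and count values are placeholders rather than numbers from the dataset.

    import numpy as np
    from scipy.optimize import curve_fit

    # Placeholder slice of the historical record: cumulative insights by year.
    years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010], dtype=float)
    cumulative = np.array([5, 9, 16, 28, 47, 80, 135], dtype=float)
    t_ref = years[0]

    def exponential(t, a, b):
        """Exponential growth measured from the start of the selected period;
        linear and sigmoidal fits would be handled the same way."""
        return a * np.exp(b * (t - t_ref))

    # Fit the curve to the selected period, then extrapolate it forward.
    params, _ = curve_fit(exponential, years, cumulative, p0=(5.0, 0.05))

    def project_cumulative_insights(year):
        """Extrapolated cumulative insight count for a future year."""
        return exponential(year, *params)

    print(project_cumulative_insights(2040.0))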


Result: Implied timeline

Sources

The data used in this model is available as a JSON file. The source code for the demo can be found on the Median Group GitHub.