Median Group

Ongoing Research

Normativity and Antinormativity Studies

There's a lot of social "dark matter": cases where institutions and people perform badly at important tasks, all at once, even though they've demonstrated the ability to do better in other contexts. We have an explanation for this "dark matter": vice-signaling, or antinormativity. Specifically, people expect to be taken care of if and only if they seem to have something to hide, and in turn want to help others who seem to have something to hide.

You can think of this as a corollary to Alex Tabarrok's principle that a bet is a tax on bullshit: opposition to resolving bets is a subsidy for antinormative behavior.

Some work we've published on this topic:

Other related research topics include the theory and phenomenology of zero-sum games as played by humans: both theoretical models and actual examples of tournaments, antitournaments, scapegoating games, and pecking orders. See Truth-telling is aggression in zero-sum frames.

rTMS for Capacity Enhancement

The future of humanity depends on the decisions people make, and many high-leverage decisions are not being made rationally. Conventional wisdom in academic psychology is that much irrational decision-making is due to adverse life events (e.g. trauma), and is therefore potentially ameliorable through psychiatric interventions.

Published research demonstrates that rTMS can produce large, lasting improvements in basic faculties such as cognitive empathy and executive function in healthy people, in addition to its demonstrated medical effectiveness in treating depression and anxiety. (See Sarah Constantin's lit review; see also Switched On and Savant for a Day for detailed accounts of potential performance enhancements.) There is also published research suggesting that rTMS can cause people to behave in more or less microeconomically rational ways in lab experiments such as the ultimatum game.

We have obtained an rTMS machine and have begun exploratory research, starting with small-n self-experimentation to reproduce existing published findings and understand them qualitatively, with the eventual goal of identifying interventions that can predictably improve decision-making and prosociality, and publishing the evidence for them. So far, we have succeeded in operating the machine despite initial difficulties logging into it (we bought the machine secondhand and the login information was not provided), calibrated it by stimulating multiple people's motor cortices in a way that causes visible thumb twitches, and tried out standard treatments.

The interventions we are most excited about investigating are:

  • Stimulating the dorsolateral prefrontal cortex, involved in planning and executive function. This is commonly done to treat depression, and might help alleviate learned helplessness among high-potential people in positions of influence.

  • Stimulating the dorsomedial prefrontal cortex, involved in empathy and specifically in trying to understand someone else's perspective. This seems to be not only a general capacity enhancement, but also a way to correct trauma-induced misalignment between values and attention allocation.

  • Suppressing activity in areas of the cortex involved in language processing, which experimental studies suggest makes people confabulate memories less and do better at logic problems and reading comprehension. The processes being suppressed seem to actively interfere with discourse, so suppressing them might enable better research discussions and negotiations.

ML for Detecting Conversational Derailing

As another ongoing project, we seek to use machine learning to model how people are frequently triggered into defensive speech patterns, which in turn trigger the same patterns in others. For some of our early thoughts on this topic, see The Engineer and the Diplomat. We believe that a machine learning system could detect patterns of information blockage and tempo control: we observe people reliably responding to these patterns in predictable ways, which indicates that fast pattern recognition is sufficient for detection.

Such a machine learning system could point out defensive patterns in conversations that are likely to trigger similar behavior in others. While it is possible for people to point out defensive patterns in others, doing so can also be feigned as a social attack; having defensive patterns pointed out by a machine with no social agenda allows the information to be interpreted literally. We predict that use of such a system in our own conversations would substantially increase the rate at which we make intellectual progress.

As a start, we have created a tool that tags conversation audio or transcripts with entropy scores, detecting text that is easy to predict with a GPT-like system; predictable text indicates that the speaker is introducing relatively little information into the conversation. Here is an example output of per-word entropy scores from a transcript of a conversation between Elon Musk and Joe Rogan, computed using our tool: Elon Musk / Joe Rogan transcript entropy
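
For illustration, here is a minimal sketch of the kind of scoring involved, computing per-token surprisal with an off-the-shelf GPT-2 model via the Hugging Face transformers library. This is a simplified stand-in rather than our actual tool; the model choice and scoring details below are assumptions.

    # Minimal sketch: per-token surprisal under GPT-2 (not our actual tool).
    # Low surprisal = highly predictable text = little new information.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def token_surprisals(text):
        """Return (token, surprisal in bits) pairs for each token after the first."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # Log-probability assigned to each actual next token, given its context.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        next_ids = ids[0, 1:]
        nats = -log_probs[torch.arange(len(next_ids)), next_ids]
        bits = nats / math.log(2)
        tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
        return list(zip(tokens, bits.tolist()))

    for token, bits in token_surprisals("I mean, you know, it is what it is."):
        print(f"{token!r}: {bits:.2f} bits")

Averaging such scores over a speaker's turn gives one rough measure of how much new information that turn contributes to the conversation.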

Technical Review Board

As a way to, among other things, solicit and review additional project ideas, we plan to set up a technical review board: a set of reviewers and a website that allows submission of evaluable ideas, including both original ideas and unoriginal ideas that the submitter thinks are neglected. (Submission formats would include journal-like articles, PowerPoint-like presentations, blog posts, and so on, as long as the submission contains enough information to evaluate the idea.) The reviewers would have a recorded conversation (over text or voice) with the submitter to work out the details of what the idea is, what its relevance is, and whether it is valid. Submissions rated as sufficiently valid and relevant would be published, along with most of the content of the evaluation conversations, on the website (and perhaps in paper magazine-like publications). The website would also enable members of the public to comment on the submissions.

The main existing alternative mechanisms are blog posts and academic peer review. Blog posts are by default not collected together (unless posted to a forum such as LessWrong); a given post may or may not be reviewed, and it is rare for the author of a post to pay anyone to review it. Due to the haphazard nature of blog post comments, there is no common standard by which blog posts and their evaluations may be judged.

Academic peer review is a heavyweight process that typically selects for insiders who are familiar with a highly specialized discourse. For example, MIRI has generally had a lot of difficulty getting its papers published in academic journals (including papers, such as Logical Induction, that were evaluated as very high-quality by academics outside the review process). It is difficult for a submitter to have novel work published if they have not talked to anyone involved in running the relevant journal or conference, or heavily familiarized themselves with its content. This is partially due to the highly specialized nature of typical academic journals, and partially due to artificial barriers to entry.

Compared to academic peer review, the technical review board would be much more lightweight. Submissions would not have to be original ideas, and would not have to be formatted in a specific way. The review would take the form of a back-and-forth conversation rather than a one-sided process in which the reviewer has to read and understand an entire paper (perhaps containing complicated mathematics) with no explanations beyond those in the paper itself. The reviewer would even have the option of partially reviewing a submission, e.g. reviewing a few sections and then concluding that they lack the expertise or willingness to complete the review, perhaps leaving the rest until a future issue. There would initially be no designated "conference" corresponding to an issue of the publication that submitters would be required to attend.

In our work we have regularly come across interesting technical ideas and insights that we would like to see legibly evaluated, but that are not good fits for established academic specialties, either because they do not fit within disciplinary boundaries or because they are not fashionable enough to attract attention within the most relevant field. Since evaluation of these ideas is deeply neglected, we think we can improve significantly on the status quo. For a recent example of this sort of thing, see Robin Hanson's review of an interesting physics paper on dark matter.

Initially, the reviewers would be Median employees/volunteers, or people known by Median employees/volunteers to have relevant expertise and to be trusted to evaluate ideas in good faith. Over time, the review process would allow others to build a reputation with it even if they did not start with one.

The main constraint on the topics of submissions would be what this existing network is credibly able to evaluate. The evaluable topics overlap with fields including computer science, mathematics, decentralized algorithms (including cryptocurrency), machine learning, AI alignment, rational choice theory, analytic philosophy, physics, engineering, manufacturing processes, and economics. Submissions in areas that are more difficult to evaluate in any standardized way, such as the humanities, would be considered, but would at least initially be dispreferred, all else being equal; their review scores would reflect the idiosyncratic opinions of the reviewers more, and would overall be harder to predict. Hopefully, as time goes on, the review process could gain more information about the factors relevant to evaluating harder-to-evaluate ideas.

Past Research

Modeling AI progress through insights

We assembled a list of major technical insights in the history of progress in AI and metadata on the discoverer(s) of each insight.

Based on this dataset, we developed an interactive model that estimates how long it would take to reach the culmination of all AI research, given a guess at what percentage of the necessary AI discoveries have already been made.
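
As a toy illustration of the extrapolation involved (the interactive model itself is more sophisticated; the constant-rate assumption and the numbers below are illustrative only):

    # Toy sketch only: assumes insights arrive at a constant historical rate.
    # The actual interactive model is more sophisticated than this.
    def years_remaining(insights_so_far, years_elapsed, fraction_done):
        """Estimate years until all needed insights are found, given a guess
        at the fraction of them already discovered."""
        rate = insights_so_far / years_elapsed           # insights per year
        total_needed = insights_so_far / fraction_done   # implied total count
        return (total_needed - insights_so_far) / rate

    # Hypothetical inputs: ~100 insights over ~170 years, guessed 50% done.
    print(years_remaining(insights_so_far=100, years_elapsed=170, fraction_done=0.5))
    # -> 170.0 years under these assumptions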

AI Insights dataset: data (json file), schema

Feasibility of Training an AGI using Deep Reinforcement Learning: A Very Rough Estimate

Several months ago, we were presented with a scenario for how artificial general intelligence (AGI) might be achieved in the near future. We found the approach surprising, so we attempted to produce a rough model to investigate its feasibility. This document presents the model and its conclusions.

The usual clichés about the folly of trying to predict the future go without saying, and this should not be treated as a rigorous estimate, but hopefully it can give a loose, rough sense of some of the relevant quantities involved. The notebook and the data used for it can be found in the Median Group numbers GitHub repo, if the reader is interested in using different quantities or changing the structure of the model.
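
To give a sense of the general shape of such an estimate (this is not the model from the notebook; every number below is a placeholder guess, and the real model has more structure):

    # Placeholder sketch of the back-of-envelope shape, NOT the notebook's model:
    # total training compute ~= environment samples needed x compute per sample.
    samples_needed = 1e12      # placeholder guess: frames of experience required
    flops_per_sample = 1e15    # placeholder guess: forward + backward pass cost
    total_flops = samples_needed * flops_per_sample
    print(f"Total training compute: {total_flops:.1e} FLOPs")  # 1.0e+27 here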

Download PDF

Early partial draft

Declining prices for solar energy capture and storage technologies herald the collapse of the value of international petrochemical trade within the next decade. In the brief transition period that follows, in which petrochemicals are commercially devalued but still strategically valuable for military purposes, states whose revenues depend on petrochemical sales will be destabilized, and such destabilization may lead these states to pursue regional or global conflict to maintain their sovereignty and quality of life. This forthcoming paper examines the historical context, technological trends, and political incentives that will characterize this period, and what might be done to avert the most destructive outcomes.

Forecasting Forest Fires

Download PDF

A proposal for improved forest fire prediction.

Toward A Working Theory of Mind

Download PDF

An essay by Miya Perry that lays out an ontology for describing the mind, as well as a theory of how the mind develops over time and forms internal conflicts.