
Five ways the superintelligence revolution might happen

by The Conversation
September 29, 2014
in Cognitive Science
Photo credit: DARPA


By Nick Bostrom, University of Oxford

Biological brains are unlikely to be the final stage of intelligence. Machines already have superhuman strength, speed and stamina – and one day they will have superhuman intelligence. The only reason this may not occur is if we first develop some other dangerous technology that destroys us, or otherwise fall victim to an existential risk.

But assuming that scientific and technological progress continues, human-level machine intelligence is very likely to be developed. And shortly thereafter, superintelligence.

Predicting how long it will take to develop such intelligent machines is difficult. Contrary to what some reviewers of my book seem to believe, I don’t have any strong opinion about that matter. (It is as though the only two possible views somebody might hold about the future of artificial intelligence are “machines are stupid and will never live up to the hype!” and “machines are much further advanced than you imagined and true AI is just around the corner!”).

A survey of leading researchers in AI suggests that there is a 50% probability that human-level machine intelligence – defined here as one that "can carry out most human professions at least as well as a typical human" – will have been attained by 2050. This doesn’t seem entirely crazy. But one should place a lot of uncertainty on both sides of this: it could happen much sooner or very much later.

Exactly how we will get there is also still shrouded in mystery. There are several paths of development that should get there eventually, but we don’t know which of them will get there first.

Biological inspiration

We do have an actual example of a generally intelligent system – the human brain – and one obvious idea is to proceed by trying to work out how this system does the trick. A full understanding of the brain is a very long way off, but it might be possible to glean enough of the basic computational principles that the brain uses to enable programmers to adapt them for use in computers without undue worry about getting all the messy biological details right.

We already know a few things about the working of the human brain: it is a neural network, it learns through reinforcement learning, it has a hierarchical structure to deal with perceptions and so forth. Perhaps there are a few more basic principles that we still need to discover – and that would then enable somebody to cobble together some form of “neuromorphic AI”: one with elements cribbed from biology but implemented in a way that is not fully biologically realistic.
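To make one of those principles concrete, here is a minimal sketch of reinforcement learning: a softmax policy over two options that, from reward feedback alone, learns to prefer the option that pays off more often. The payout probabilities, learning rate and update rule are illustrative assumptions, not a model of the brain or of any particular AI system.

```python
import math
import random

def train_bandit(steps=2000, lr=0.1, seed=0):
    """Toy reinforcement learning on a two-armed bandit.

    A softmax policy over two actions is nudged toward whichever
    action was just taken whenever that action yields a reward.
    Returns the final action probabilities.
    """
    rng = random.Random(seed)
    prefs = [0.0, 0.0]          # learnable action preferences
    payout = [0.2, 0.8]         # assumed reward probabilities; arm 1 is better

    for _ in range(steps):
        z = [math.exp(p) for p in prefs]
        probs = [v / sum(z) for v in z]
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if rng.random() < payout[action] else 0.0
        # REINFORCE-style update: when rewarded, raise the chosen
        # action's preference and lower the alternative's.
        for i in range(2):
            grad = (1 - probs[i]) if i == action else -probs[i]
            prefs[i] += lr * reward * grad

    z = [math.exp(p) for p in prefs]
    return [v / sum(z) for v in z]
```

After training, the policy assigns most of its probability to the better-paying arm – learning driven purely by a scalar reward signal, with no explicit model of the environment.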


Pure mathematics

Another path is the more mathematical “top-down” approach, which makes little or no use of insights from biology and instead tries to work things out from first principles. This would be a more desirable development path than neuromorphic AI, because it would be more likely to force the programmers to understand what they are doing at a deep level – just as doing an exam by working out the answers yourself is likely to require more understanding than doing an exam by copying one of your classmates’ work.

In general, we want the developers of the first human-level machine intelligence, or the first seed AI that will grow up to be superintelligence, to know what they are doing. We would like to be able to prove mathematical theorems about the system and how it will behave as it rises through the ranks of intelligence.

Brute force

One could also imagine paths that rely more on brute computational force, such as making extensive use of genetic algorithms. Such a development path is undesirable for the same reason that the path of neuromorphic AI is undesirable – because it could more easily succeed with a less than full understanding of what is being built. Having massive amounts of hardware could, to a certain extent, substitute for having deep mathematical insight.
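As a toy illustration of the kind of brute-force search a genetic algorithm performs, the sketch below evolves a bitstring toward a trivial objective (maximising the number of 1s, the classic "OneMax" problem). The population size, mutation rate and fitness function are all illustrative assumptions; the point is that the search discovers a solution without any insight into the problem's structure.

```python
import random

def evolve_onemax(bits=32, pop_size=60, generations=80, seed=0):
    """Toy genetic algorithm: evolve a bitstring toward all ones.

    Fitness is simply the count of 1s, standing in for whatever
    objective a real evolutionary search would optimise.
    """
    rng = random.Random(seed)
    fitness = sum  # number of 1s in a 0/1 list

    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(bits):              # per-bit mutation
                if rng.random() < 1.0 / bits:
                    child[i] ^= 1
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)
```

Nothing in the loop "understands" the objective: selection, crossover and mutation blindly amplify whatever happens to score well – which is precisely why scaling this approach up is worrying.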

We already know of code that would, given sufficiently ridiculous amounts of computing power, instantiate a superintelligent agent. The AIXI model is an example. As best we can tell, it would destroy the world. Thankfully, the required amounts of computing power are physically unattainable.
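For the curious, AIXI's action rule can be written down compactly – the following is a simplified rendering of Hutter's formulation, not an exact reproduction:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
    \bigl[\, r_t + \cdots + r_m \,\bigr]
    \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The agent picks the action that maximises expected future reward, where the expectation runs over every program \(q\) (of length \(\ell(q)\)) for a universal Turing machine \(U\) consistent with the observation–reward history, weighted by the Solomonoff prior \(2^{-\ell(q)}\). That sum over all programs is what makes AIXI incomputable – and why the required computing power is unattainable.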

Plagiarising nature

The path of whole brain emulation, finally, would proceed by literally making a digital copy of a particular human mind. The idea would be to freeze or vitrify a brain, chop it into thin slices and feed those slices through an array of microscopes. Automated image recognition software would then extract the map of the neural connections of the original brain. This 3D map would be combined with neurocomputational models of the functionality of the various neuron types constituting the neuropil, and the whole computational structure would be run on some sufficiently capacious supercomputer. This approach would require very sophisticated technologies, but no new deep theoretical breakthrough.
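The final stage of that pipeline – running the extracted wiring diagram under some neuron model – can be caricatured in a few lines. The graph, weights and simple threshold rule below are invented for illustration; a real emulation would need biophysically detailed models of each neuron type, at a vastly larger scale.

```python
def step(connectome, weights, state, threshold=1.0):
    """One update of a toy 'emulated' network.

    connectome: dict neuron -> list of downstream neurons (the extracted map)
    weights:    dict (pre, post) -> synaptic weight (the neuron-level model)
    state:      dict neuron -> 0/1 firing state
    """
    inputs = {n: 0.0 for n in connectome}
    for pre, posts in connectome.items():
        if state[pre]:
            for post in posts:
                inputs[post] += weights[(pre, post)]
    # A neuron fires if its summed input reaches the threshold.
    return {n: 1 if inputs[n] >= threshold else 0 for n in connectome}

# A three-neuron chain: activity injected at 'a' propagates one hop per step.
conn = {"a": ["b"], "b": ["c"], "c": []}
w = {("a", "b"): 1.0, ("b", "c"): 1.0}
state = {"a": 1, "b": 0, "c": 0}
state = step(conn, w, state)   # now 'b' fires
```

The emulation itself involves no theoretical breakthrough – just a map, a neuron model, and enough hardware to run both – which is exactly the article's point.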

In principle, one could imagine a sufficiently high-fidelity emulation process that the resulting digital mind would retain all the beliefs, desires, and personality of the uploaded individual. But I think it is likely that before the technology reached that level of perfection, it would enable a cruder form of emulation that would yield a distorted human-ish mind. And before efforts to achieve whole brain emulation would achieve even that degree of success, they would probably spill over into neuromorphic AI.

Competent humans first, please

Perhaps the most attractive path to machine superintelligence would be an indirect one, on which we would first enhance humanity’s own biological cognition. This could be achieved through, say, genetic engineering along with institutional innovations to improve our collective intelligence and wisdom.

It is not that this would somehow enable us “to keep up with the machines” – the ultimate limits of information processing in machine substrate far exceed those of a biological cortex however far enhanced. The contrary is instead the case: human cognitive enhancement would hasten the day when machines overtake us, since smarter humans would make more rapid progress in computer science. However, it would seem on balance beneficial if the transition to the machine intelligence era were engineered and overseen by a more competent breed of human, even if that would result in the transition happening somewhat earlier than otherwise.

Meanwhile, we can make the most of the time available, be it long or short, by getting to work on the control problem, the problem of how to ensure that superintelligent agents would be safe and beneficial. This would be a suitable occupation for some of our generation’s best mathematical talent.


The Conversation organised a public question-and-answer session on Reddit in which Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, talked about developing artificial intelligence and related topics.


Nick Bostrom is the director of the Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology, both based in the Oxford Martin School. He is the author of Superintelligence: Paths, Dangers, Strategies.

This article was originally published on The Conversation.
Read the original article.
