
Five ways the superintelligence revolution might happen

by The Conversation
September 29, 2014
in Cognitive Science
Photo credit: DARPA

By Nick Bostrom, University of Oxford

Biological brains are unlikely to be the final stage of intelligence. Machines already have superhuman strength, speed and stamina – and one day they will have superhuman intelligence. The only reason this may not occur is if we develop some other dangerous technology first that destroys us, or if we otherwise fall victim to some existential risk.

But assuming that scientific and technological progress continues, human-level machine intelligence is very likely to be developed. And shortly thereafter, superintelligence.

Predicting how long it will take to develop such intelligent machines is difficult. Contrary to what some reviewers of my book seem to believe, I don’t have any strong opinion about that matter. (It is as though the only two possible views somebody might hold about the future of artificial intelligence are “machines are stupid and will never live up to the hype!” and “machines are much further advanced than you imagined and true AI is just around the corner!”).

A survey of leading researchers in AI suggests that there is a 50% probability that human-level machine intelligence will have been attained by 2050 (defined here as “one that can carry out most human professions at least as well as a typical human”). This doesn’t seem entirely crazy. But one should place a lot of uncertainty on both sides of this: it could happen much sooner or very much later.

Exactly how we will get there is also still shrouded in mystery. There are several paths of development that should get there eventually, but we don’t know which of them will get there first.

Biological inspiration

We do have an actual example of a generally intelligent system – the human brain – and one obvious idea is to proceed by trying to work out how this system does the trick. A full understanding of the brain is a very long way off, but it might be possible to glean enough of the basic computational principles that the brain uses to enable programmers to adapt them for use in computers without undue worry about getting all the messy biological details right.

We already know a few things about the working of the human brain: it is a neural network, it learns through reinforcement learning, it has a hierarchical structure to deal with perceptions and so forth. Perhaps there are a few more basic principles that we still need to discover – and that would then enable somebody to cobble together some form of “neuromorphic AI”: one with elements cribbed from biology but implemented in a way that is not fully biologically realistic.
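The reinforcement-learning principle mentioned above can be illustrated with a deliberately tiny sketch – nothing brain-realistic, just the core loop of learning from reward feedback. Here a hypothetical agent faces two options with unknown payoff rates and, by trial, error and an incrementally updated value estimate, comes to favour the better one:

```python
import random

def run_bandit(steps=5000, epsilon=0.1, seed=0):
    """Toy reinforcement learning: an epsilon-greedy agent learns which of
    two options ('arms') pays off more often, purely from reward feedback."""
    rng = random.Random(seed)
    payout = [0.3, 0.7]      # true reward probabilities, unknown to the agent
    estimates = [0.0, 0.0]   # the agent's learned value estimates
    counts = [0, 0]
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                        # explore at random
        else:
            arm = max(range(2), key=lambda a: estimates[a])  # exploit best guess
        reward = 1.0 if rng.random() < payout[arm] else 0.0
        counts[arm] += 1
        # incremental average: nudge the estimate toward the observed reward
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates
```

After enough trials the agent's estimates approach the true payoff rates and it reliably chooses the better arm – the same learn-from-reward principle that, at vastly greater scale and sophistication, the brain appears to exploit.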

Pure mathematics

Another path is the more mathematical “top-down” approach, which makes little or no use of insights from biology and instead tries to work things out from first principles. This would be a more desirable development path than neuromorphic AI, because it would be more likely to force the programmers to understand what they are doing at a deep level – just as doing an exam by working out the answers yourself is likely to require more understanding than doing an exam by copying one of your classmates’ work.

In general, we want the developers of the first human-level machine intelligence, or the first seed AI that will grow up to be superintelligence, to know what they are doing. We would like to be able to prove mathematical theorems about the system and how it will behave as it rises through the ranks of intelligence.

Brute force

One could also imagine paths that rely more on brute computational force, such as by making extensive use of genetic algorithms. Such a development path is undesirable for the same reason that the path of neuromorphic AI is undesirable – because it could more easily succeed with a less than full understanding of what is being built. Having massive amounts of hardware could, to a certain extent, substitute for having deep mathematical insight.
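The point about genetic algorithms substituting computation for insight can be seen in a minimal sketch (a toy problem, not a claim about how real AI search would work): the program below evolves bit strings toward an arbitrary target fitness using only random variation and selection. The programmer needs no understanding of *why* good solutions work – only a way to score them:

```python
import random

def evolve(length=20, pop_size=30, generations=100, seed=1):
    """Minimal genetic algorithm: evolve bit strings toward all ones,
    using nothing but fitness scoring, crossover, mutation and selection."""
    rng = random.Random(seed)
    fitness = sum  # score = number of 1-bits; the only "insight" required
    pop = [[rng.randrange(2) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1     # random point mutation
            children.append(child)
        pop = survivors + children                # elitism: best always kept
    return max(pop, key=fitness)
```

Throwing more compute at this loop (bigger populations, more generations) improves results without any deeper understanding of the solutions found – which is precisely why such opaque search is worrying as a route to powerful AI.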

We already know of code that would, given sufficiently ridiculous amounts of computing power, instantiate a superintelligent agent. The AIXI model is an example. As best we can tell, it would destroy the world. Thankfully, the required amounts of computing power are physically impossible.

Plagiarising nature

The path of whole brain emulation, finally, would proceed by literally making a digital copy of a particular human mind. The idea would be to freeze or vitrify a brain, chop it into thin slices and feed those slices through an array of microscopes. Automated image recognition software would then extract the map of the neural connections of the original brain. This 3D map would be combined with neurocomputational models of the functionality of the various neuron types constituting the neuropil, and the whole computational structure would be run on some sufficiently capacious supercomputer. This approach would require very sophisticated technologies, but no new deep theoretical breakthrough.

In principle, one could imagine a sufficiently high-fidelity emulation process that the resulting digital mind would retain all the beliefs, desires, and personality of the uploaded individual. But I think it is likely that before the technology reached that level of perfection, it would enable a cruder form of emulation that would yield a distorted human-ish mind. And before efforts to achieve whole brain emulation would achieve even that degree of success, they would probably spill over into neuromorphic AI.

Competent humans first, please

Perhaps the most attractive path to machine superintelligence would be an indirect one, on which we would first enhance humanity’s own biological cognition. This could be achieved through, say, genetic engineering along with institutional innovations to improve our collective intelligence and wisdom.

It is not that this would somehow enable us “to keep up with the machines” – the ultimate limits of information processing in machine substrate far exceed those of a biological cortex however far enhanced. The contrary is instead the case: human cognitive enhancement would hasten the day when machines overtake us, since smarter humans would make more rapid progress in computer science. However, it would seem on balance beneficial if the transition to the machine intelligence era were engineered and overseen by a more competent breed of human, even if that would result in the transition happening somewhat earlier than otherwise.

Meanwhile, we can make the most of the time available, be it long or short, by getting to work on the control problem, the problem of how to ensure that superintelligent agents would be safe and beneficial. This would be a suitable occupation for some of our generation’s best mathematical talent.


The Conversation organised a public question-and-answer session on Reddit in which Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, talked about developing artificial intelligence and related topics.

The Conversation

Nick Bostrom is the director of the Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology, both based in the Oxford Martin School. He is the author of Superintelligence: Paths, Dangers, Strategies.

This article was originally published on The Conversation.
Read the original article.
