What I believe about the Singularity - Matt Bamberger's journal
What I believe about the Singularity

1 The Singularity will occur within the next few decades.

1.1 We will soon develop human-equivalent AI, either by brute-force simulation of the human brain, or by a more traditional engineered approach.

By human-equivalent AI, I mean an AI with cognitive abilities at least equal to those of a human being. It isn't necessary for an AI to be exactly (or even moderately) like a human being. I agree with Eliezer Yudkowsky and others who have argued that a human-like AI would be profoundly dangerous.

Human-like AIs are dangerous for two reasons. Firstly, they will tend to exhibit dangerous human traits such as selfishness and fear as well as benign ones. Secondly, the inner workings of a human-like AI will probably be relatively opaque, just as the workings of an actual human brain are opaque. This makes it much harder to monitor and evaluate an AI, both to prevent it from exhibiting malicious behavior and to detect any serious malfunctions such as the development of aberrant goals.

From here on, when I refer to an AI, I mean a human-equivalent (but not necessarily human-like) AI.

1.1.1 Within the next few decades, we will be able to develop a brute-force AI.

A brute-force simulation of the brain works by simulating every relevant aspect of the brain at an appropriately high level of fidelity. This is the most straightforward and most plausible approach to developing an AI. The downside of the brute-force approach is that it produces an AI whose design is likely to be hard to understand and control; consequently, it's likely to be both less powerful and more dangerous than an engineered AI. Creating a brute-force AI is currently impossible, both because our understanding of the required neurobiology is insufficient and because the requisite computer hardware isn't yet available. Over the next few decades, however, advances in both neuroscience and computer hardware will make such an AI possible.

The most likely basis for this kind of AI is a neuron-level simulation of the brain. This would simulate individual neurons at a fairly detailed level, but would not simulate the internal biochemistry of individual neurons. Making a simulation like this work would require a detailed understanding of how different types of neurons work, and of how all the neurons in the brain are connected. Although we have a long way to go on both fronts, we already have a detailed understanding of the problems involved, and are able to build simulations of significant neural systems whose behavior closely mimics that of their biological equivalents.

Computers powerful enough to run a brute-force AI will be available by 2011, and ubiquitous by 2033.

As discussed elsewhere, the raw computational capacity of the human brain (and hence the amount of computer power needed to simulate it) is probably approximately 20 petaflops. A number of current computer systems have at least 1% of that power, and work is already underway on systems that will have approximately that much power. Although predicting the future is always hard, it is probable that Moore's Law will hold up for at least another decade, bringing brain-level computational capacity within the reach of any large company. In addition, it's likely that Moore's Law will hold up far beyond this point.

Kurzweil and others have made a strong case that the current rate of improvement in computer technology will continue for several more decades, in which case a brain-equivalent computer should be available for $1,000 by roughly 2033.
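The arithmetic behind this kind of projection can be sketched in a few lines. The 20-petaflop figure is the post's own estimate; the baseline of roughly 100 gigaflops per $1,000 of hardware circa 2011 and the 1.2-year cost-performance doubling period are illustrative assumptions, not data from the post:

```python
import math

# Assumed inputs (illustrative, not the post's own data):
BRAIN_FLOPS = 20e15          # post's estimate: ~20 petaflops
BUDGET_DOLLARS = 1_000
FLOPS_PER_DOLLAR_2011 = 1e8  # ~100 gigaflops per $1,000 machine, circa 2011
DOUBLING_TIME_YEARS = 1.2    # assumed cost-performance doubling period

# Flops per dollar needed for a $1,000 brain-equivalent computer,
# then the number of doublings (and years) required to get there.
target = BRAIN_FLOPS / BUDGET_DOLLARS
doublings = math.log2(target / FLOPS_PER_DOLLAR_2011)
year = 2011 + doublings * DOUBLING_TIME_YEARS
print(f"{doublings:.1f} doublings -> brain-equivalent $1,000 computer around {year:.0f}")
```

Under these assumptions the projection lands in the early 2030s, consistent with the rough 2033 estimate; the result is quite sensitive to the assumed doubling period.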

Note that I'm using the term Moore's Law in the common but technically incorrect sense of referring to any exponential increase in computational capacity.

An optimized brute-force AI will likely require much less raw power than the human brain. Suitable computers are available now, and will be ubiquitous by 2023.

There are a number of theoretical and experimental reasons for believing that an engineered AI could achieve human-level results using much less total computational power than the brain uses. Among these:

- The 20 petaflop figure for the brain assumes that all neurons are working at maximum capacity at all times. We know this isn't the case (in part because of energy consumption calculations).

- The design of the brain is subject to a number of constraints due to the nature of its physical substrate (for example, because the firing time of individual neurons is so slow, the brain is precluded from using significant serialization of computation). These constraints won't apply to engineered AIs.

- A number of different teams have simulated significant neural modules, and have often found that they are able to replicate the functionality of those modules using considerably less computational power than the systems they simulate. Kurzweil cites a factor of 1,000 efficiency improvement, which corresponds to 10 doublings of computer capacity, or a 10 year acceleration of the requisite technology (assuming a 1 year doubling cycle).
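The doubling arithmetic in the last point is easy to check. The factor-of-1,000 efficiency figure is Kurzweil's as cited in the post; the one-year doubling cycle is the post's stated assumption:

```python
import math

# A 1,000x efficiency gain means the required hardware is reached
# log2(1000) doublings sooner; at a one-year doubling cycle, that
# number of doublings equals the number of years of acceleration.
efficiency_factor = 1_000
doublings = math.log2(efficiency_factor)  # ~9.97, i.e. about 10
years_earlier = doublings * 1.0           # assumed 1-year doubling cycle
print(f"{efficiency_factor}x efficiency ~ {doublings:.1f} doublings "
      f"~ {years_earlier:.0f} years of hardware progress")
```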

1.1.2 More traditional engineered approaches to AI are likely to succeed within the next few decades.

AI research has recently begun to emerge from the long AI winter. Major recent triumphs include the development of useful machine translation, useful general-purpose fluent speech recognition, and the successful completion of the DARPA Grand Challenge for self-driving vehicles. For the first time in some years, public perception of AI is improving, and it seems likely that funding for AI work will increase as well.

One interesting way of estimating the difficulty of creating an AI is to look at the complexity of the human genome, which serves as an existence proof of the feasibility of AI. It turns out that the portion of the human genome associated with intelligence is large, but probably not larger than large existing software projects. This suggests that at a minimum, an engineered AI is within the reach of a large commercial software team.

Software is clearly the hardest part of developing an AI from scratch. It would be foolish to claim that success in this area is certain, but I believe that it's likely within the next few decades.

1.2 The development of a human-level AI will be followed very quickly by the development of super-human AI.

Creating the first AI will be very hard. Once that has been achieved, however, a runaway cycle of improvement will commence, rapidly culminating in a vastly superhuman AI.

1.2.1 The capacity of an AI will grow exponentially along with the exponential growth of the underlying computer technology.

The speed of an AI will scale with the speed of its underlying hardware. Therefore, the AI will rapidly progress from being human-equivalent to being superhuman simply by virtue of Moore's Law.

1.2.2 In addition, there are numerous ways that a human-level AI can be extended to become super-human.

There are certain properties of digital computers that can greatly extend the power of a nominally human-equivalent AI. Although some of these may not be practical, depending on the type of AI, many of them probably will be.

- AIs can be cloned. An AI with a particular set of knowledge or skills can readily be cloned as many times as desired. Imagine if every patient could see the most capable medical specialist in the world, or if every engineer on a complex project could be a clone of the most capable, knowledgeable member of the team.

- AIs can share information much more efficiently than humans can. Humans can only share abstract knowledge, and only through the very slow and inefficient medium of language. We spend the first 20 years of our lives (and a significant portion of the remainder) simply relearning things that other people already know. AIs will be able to instantly share knowledge amongst themselves (not merely at the abstract level of textbook knowledge, but at the profound level of muscle memory or deep understanding).

- The two previous points make a tremendously powerful combination. Imagine if at the beginning of each workday, you could create 10 clones of yourself, each of which would handle a particular task. Imagine that at the end of the day, you could reabsorb all relevant memories and new skills from each.

- AIs will have vastly super-human memory capacity, and vastly better ability to access external knowledge bases.

- AIs won't need to sleep. They won't get sick. They won't grow old. They won't get bored or distracted.

1.2.3 A superhuman AI will be able to bootstrap its own improvement by redesigning its own software and hardware.

An obvious early task for an AI is to redesign its own software and/or hardware. As AIs become super-human, they will obviously become better (and faster) at designing AIs than humans are.

1.3 The development of super-human AI will trigger the Singularity.

Once a super-human AI begins to improve itself, an accelerating cycle will begin which will culminate in an AI with the maximum theoretically attainable level of intelligence. At that point, it will make all possible scientific discoveries, and develop all possible technologies.

It's unclear how rapidly this cycle will occur. I believe it will probably take between a few hours and a few years.

The culmination of this cycle (or perhaps some point along it) constitutes the Singularity. This is the point at which the world has changed so dramatically and quickly that from this side of the event, we are unable to make meaningful predictions about what lies on the other side.

2 The time leading up to the Singularity is one of grave peril.

2.1 We face a significant present threat from bio-engineered pathogens. This threat will grow rapidly.

Many people have pointed out that biotechnology is quickly becoming simply another branch of information technology. This means that we are rapidly approaching the point where we will be able to manipulate biological systems as easily and completely as we currently manipulate computer systems.

Obviously, this technology has immense benefits. However, it also poses immense dangers. Just as computers face serious dangers from both human error and malice, so will the biological world. Researchers have already demonstrated the ability to build dangerous pathogens such as polio from scratch. Complete genomes exist for many others, including smallpox and H5N1. Although constructing many of those pathogens is currently beyond our reach, it will soon be feasible, first for large institutions, then for well-equipped laboratories, and finally for any interested hobbyist. Not far beyond that is the ability to create new, custom engineered pathogens whose virulence and contagiousness will far exceed those of any natural organism.

Although our ability to defend against these threats is also growing rapidly, it's far from clear that it will keep pace with the danger that we face.

2.2 Further in the future, we will face a severe threat from nanotechnology.

The peril of "gray goo" has been thoroughly explored elsewhere. Suffice it to say that self-replicating nanotechnology carries far greater promise and far greater menace than even the most advanced biotechnology.

Nanotechnology currently lags considerably behind biotechnology (perhaps by one to three decades), and is significantly more speculative. My gut instinct is that we are likely to reach the Singularity before nanotechnology becomes a major factor.

2.3 Finally, the transition to the Singularity is fraught with peril.

There are a number of ways that this transition could go badly, either by mistake or as a result of deliberate malice. Here are a few:

- We could develop an AI that was hostile to humanity.

- We could develop an AI that wasn't explicitly hostile to humanity, but which nonetheless acted against our interests out of indifference or malfunction.

- The transition itself might unleash any number of technological disasters, including engineered pathogens, gray goo, and other as yet unforeseen hazards.

- The transition might expose us to certain "philosophical" risks. For example, we might discover that life is terminally boring to us when we are super-intelligent.

3 The future of humanity (and perhaps of all sentient life) will be determined by the outcome of the Singularity.

3.1 It's impossible to see clearly beyond the Singularity.

It's hard to overstate the magnitude of the changes that the Singularity will bring about. Just as our ape ancestors were simply incapable of understanding the changes that Homo sapiens would bring about, so are we incapable of understanding what the post-Singularity world will be like.

Arthur C. Clarke famously observed that any sufficiently advanced technology is indistinguishable from magic. To my mind, that fails to capture the full impact of the Singularity. Not only will we possess all the abilities commonly associated with magic, but we ourselves will change in ways that I don't think we are currently capable of comprehending.

3.2 The initial Singularity breakthrough will happen very quickly, and will be controlled by a small group of people. It will make them omnipotent in the full theological sense of the word.

Historically, major technological changes have come about slowly (over a time period ranging from millennia to decades), and have been controlled by large groups (ranging from whole civilizations to city-states). However, the nature of the Singularity is such that it is likely to occur very quickly, and to be controlled by a very small group.

Even if a number of groups are developing AIs that are close to the Singularity threshold, it is inevitable that one AI will cross the threshold of self-improvement first. At that point, it will advance so quickly and dramatically that it will become vastly more powerful than all the others.

This means that whoever first develops Singularity technology will gain immense powers which will effectively allow them to control human destiny (since the power to completely control human society implies the ability to prevent other people from developing similar technology).

This is obviously profoundly problematic if Singularity technology is first developed by someone whose motives and/or wisdom are less than optimal. The nightmare scenario, of course, is that Singularity technology is used for pure evil: the result could easily be an eternity of literal hell. One can also imagine less catastrophic, but nonetheless profoundly sub-optimal scenarios of a more banal variety. Imagine, for example, that Singularity technology was first developed by a basically well-meaning but profoundly egotistical individual. An eternity of benevolent dictatorship could easily be the result.

A number of people have suggested that super-human AIs will find humanity fundamentally uninteresting, and will quietly go on their way, leaving us to go about our lives unchanged. I find this outcome unlikely, but also irrelevant: if it should somehow occur, it won't be long until another AI is developed (and another, and another...) until one comes along that, for better or worse, does transform our lives.

4 Our actions now can make the difference between extinction, a world of near-infinite evil, and a world of near-infinite good. Conscience demands that we act thoughtfully, quickly, and decisively.

