Matt Bamberger's journal
Singularity now!
Substrates part 1
Tags: ai
Thu, 10/25/2007

The term "substrate" comes up often in discussions about AI, especially in the context of comparing AI to biological intelligence.  The meaning of the term is pretty straightforward: the substrate is the hardware that embodies a particular intelligence.  Human intelligence, then, runs on a meat substrate: all the neurons that make up the human nervous system, along with their support infrastructure.  In traditional computing, we normally use the term "platform" to refer to the same concept.  For example, I'm writing this article in Word (the software), which is running on an Intel Core2 / Windows Vista computer (the platform, or substrate).

Deconstructing AI
Tags: ai
Mon, 09/10/2007

Imagine that you want to colonize Alpha Centauri. Your task is daunting, but it's conceptually fairly straightforward: you're gonna need propulsion systems, life support, radiation shielding, power sources, etc. Some of those individual problems may be very hard, but it's clear what they are, and in every case, there's a pretty obvious starting point. If you want to spend $10B on the project, you can do a reasonable job of hiring people, building teams, and getting started. You may or may not succeed, but you know how to get started.

AI is different, and therein lies one of the fundamental problems with developing AGI.

Pulling the trigger, part 2
Tags: ai singularity
Sat, 08/11/2007

In my original post on this topic, I opined that AI developers should in general regard each other as collaborators, not as competitors. I'd like to expand on that claim, and to demonstrate that even developers who are solely motivated by selfish considerations will find that the best way to achieve their goals is to collaborate with their competitors in a seemingly selfless manner.

For the sake of discussion, let's consider two possible approaches to the singularity:
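To see how selfish collaboration can pencil out, here's a toy expected-payoff model. All the probabilities and payoffs below are my own invented numbers for illustration, not anything from the post's argument:

```python
# A toy expected-payoff model of the claim that even a purely selfish AI
# developer does better by collaborating. The probabilities and payoffs
# are invented for illustration; the post's own reasoning may differ.

def expected_payoff(p_win, p_catastrophe, prize=100, disaster=-1000):
    """Selfish payoff: win the race with probability p_win, but everyone
    (including the winner) eats `disaster` if a rushed effort goes wrong."""
    return p_win * prize + p_catastrophe * disaster

race = expected_payoff(p_win=0.5, p_catastrophe=0.2)    # rush ahead, high risk
share = expected_payoff(p_win=0.25, p_catastrophe=0.02) # collaborate, safer
print(race, share)  # collaboration wins despite the smaller chance of "winning"
```

The point of the toy model: if a botched singularity is catastrophic for the winner too, shrinking the catastrophe risk can be worth more to a selfish developer than any plausible edge in the race.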

The Human Security Report
Tags: transhumanism
Tue, 01/30/2007

The most important news item you've never read is the Human Security Report, published in 2005 by the Human Security Centre at the University of British Columbia. It's IMHO one of the most important pieces of news to come out in the last decade, and also one of the least widely covered. Briefly, it's an in-depth look at the human impact of war, genocide, and political killings. The report's findings will probably come as a great surprise to anyone who's been relying on the mainstream (or, for that matter, the "alternative") media. Among other findings:

( Read more... )

The Doomsday argument
Tags: singularity
Thu, 11/09/2006

Very briefly, the doomsday argument is a probabilistic argument that near-term human extinction is much more likely than is normally believed. Although the argument doesn't feel quite right to most people (including me), it's surprisingly hard to come up with a definitive counter-argument.

Nick Bostrom has a particularly good synopsis. Wikipedia has a more detailed but less concise explanation.
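The core of the argument is a simple Bayesian update on our birth rank. Here's a minimal sketch of the Carter-Leslie version; the population figures are illustrative assumptions, not numbers from Bostrom's synopsis:

```python
# A minimal sketch of the Carter-Leslie doomsday argument as a Bayesian
# update: treat your birth rank as a uniform draw from all humans who
# will ever live, and update between a "doom soon" and "doom late" world.

def posterior_doom_soon(rank, n_soon, n_late, prior_soon=0.5):
    """P(doom soon | our birth rank), with rank uniform on 1..N under each
    hypothesis. n_soon / n_late are the total humans ever born in each world."""
    like_soon = 1.0 / n_soon if rank <= n_soon else 0.0
    like_late = 1.0 / n_late if rank <= n_late else 0.0
    num = prior_soon * like_soon
    return num / (num + (1 - prior_soon) * like_late)

# Assumptions: ~60 billion humans born so far; "doom soon" caps humanity
# at 200 billion people, "doom late" at 200 trillion.
p = posterior_doom_soon(rank=6e10, n_soon=2e11, n_late=2e14)
print(round(p, 4))  # → 0.999
```

The surprise is how lopsided the update is: being an "early" human is a thousand times more likely in the small world, so even a 50/50 prior gets swamped. That's also where most counter-arguments attack, by disputing that you can treat yourself as a random sample.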

Flight patterns
Tue, 11/07/2006

A nifty video of US air traffic patterns.

Infovore in a hurry
Mon, 09/25/2006

If you're already getting your full RDA of data from RSS, you probably already know all of this. If you're not, it's time to wake up and smell the 21st century. RSS is a dramatically better way of getting all kinds of information, and has profoundly changed how I find out about the world. Not only does it give me access to all kinds of information that I couldn't realistically obtain otherwise, but it makes me a much more efficient information consumer. At this point, I find the idea of life without RSS pretty unbearable.

Here's why RSS is cool:

( Read more... )
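For the curious, the format itself is trivially simple: an RSS 2.0 feed is plain XML, and consuming one takes only a few lines of stdlib code. The feed below is invented for illustration:

```python
# A minimal sketch of reading an RSS 2.0 feed with nothing but the Python
# standard library. The feed content here is a made-up example; a real
# reader would fetch the XML over HTTP instead of using a literal string.
import xml.etree.ElementTree as ET

feed = """<rss version="2.0"><channel>
  <title>Singularity now!</title>
  <item><title>Substrates part 1</title><link>http://example.com/1</link></item>
  <item><title>Deconstructing AI</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

That's the whole trick: every feed is the same handful of elements, so one aggregator can poll hundreds of sources and show you only what's new.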

Brain simulation
Tags: ai
Sun, 04/30/2006

There are two basic approaches to building an AI. The traditional approach is to write an artificial intelligence from scratch. A less sophisticated but perhaps more tractable approach is to simply simulate the workings of an actual human brain. For a variety of reasons that I'll detail below, I don't think simulation is the best approach, but I think it's an excellent fallback. If traditional approaches fail to deliver a working AI in a timely fashion (which they very well may), there's an excellent chance that brain simulation will be able to deliver a good enough AI within the next few decades.

( Read more... )

Pattern recognition, part 1
Tags: ai
Tue, 02/07/2006

I've been promising for a while now that I'd talk a bit about my recent Optical Character Recognition project. I spent a few months last fall working on a simple OCR project, and although I never finished it, I learned a great deal from it (which was the point of the whole exercise). I chose OCR for a number of reasons:

( Read more... )
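To give a flavor of what "simple OCR" can mean, here's one common minimal approach: nearest-template matching on binary glyph bitmaps. This is purely illustrative; the post doesn't say which technique the project actually used:

```python
# Nearest-template character recognition: classify a glyph as whichever
# stored template it differs from in the fewest pixels (Hamming distance).
# The tiny 3x3 "font" below is invented for illustration.

TEMPLATES = {  # 3x3 glyphs, flattened row by row
    "I": (0,1,0, 0,1,0, 0,1,0),
    "L": (1,0,0, 1,0,0, 1,1,1),
    "O": (1,1,1, 1,0,1, 1,1,1),
}

def classify(bitmap):
    """Return the template letter with the smallest Hamming distance."""
    def dist(letter):
        return sum(a != b for a, b in zip(bitmap, TEMPLATES[letter]))
    return min(TEMPLATES, key=dist)

noisy_L = (1,0,0, 1,0,0, 1,1,0)  # an 'L' with one pixel flipped
print(classify(noisy_L))  # → L
```

Real OCR systems layer segmentation, normalization, and learned features on top of this, but the toy version already shows the central problem: recognition has to tolerate noise, not demand exact matches.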

Five million PCs
Tags: ai
Thu, 01/19/2006

I recently made the pretty standard claim that the computational capacity of the human brain is equivalent to that of 5,000,000 modern PCs. There are probably many ways of building an AI that require a great deal less power than that. However, it's very likely that that level of power will be readily available within a few decades, and it's interesting to think about what kind of architecture you might come up with if you were determined to burn 5,000,000 PCs worth of processing power on a single AI.

( Read more... )
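For what it's worth, the arithmetic behind figures like this is usually Moravec/Kurzweil-style estimation. Here's a sketch using common order-of-magnitude assumptions, not necessarily the ones behind the original claim:

```python
# Back-of-the-envelope arithmetic behind a "5,000,000 PCs" figure.
# Every input is a rough order-of-magnitude assumption.

neurons = 1e11             # neurons in the human brain
synapses_per_neuron = 1e4  # average synapses per neuron
updates_per_sec = 10       # effective synaptic update rate
brain_ops = neurons * synapses_per_neuron * updates_per_sec  # ~1e16 ops/s

pc_ops = 2e9               # a ~2 GHz mid-2000s PC, one op per cycle
pcs_needed = brain_ops / pc_ops
print(f"{pcs_needed:,.0f} PCs")  # → 5,000,000 PCs
```

Nudge any one of those inputs by a factor of ten and the answer moves with it, which is exactly why such estimates should only ever be read as order-of-magnitude claims.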

Crocker's rules
Tags: transhumanism
Sun, 01/15/2006

"Never is etiquette and 'good form' observed more carefully than by experienced travellers when they find themselves in a tight place."
-Sir Ernest Shackleton

I recently stumbled across Crocker's Rules. You can find many versions of them on the net: this particular rendition is from DoWire.org:

( Read more... )

My plans for 2006
Tags: personal
Sat, 12/31/2005

This being the end of 2005, it's a good time to review a couple of my 2005 projects:

Character recognition

I spent a bunch of time this fall working on a character recognition program. The main point of the project was to test some theories I had about AI (both about some specific approaches to AI, and also some more general theories about how an AI project should be structured). In order of decreasing importance, here's what I've learned:

Mistaken identity
Tags: singularity
Tue, 11/15/2005

In Singularitarian / SL4 circles, the question of identity comes up a fair bit, often in relation to "uploading". A typical scenario goes like this: we develop upload technology that can scan your brain and make an exact functional copy on a computer. If the upload process destroys your original brain, is the version on the computer still "you"? What if the upload doesn't destroy the original? What if you make 25 copies of the uploaded brain?

( Read more... )

What I believe about the Singularity
Tags: personal singularity
Sat, 11/05/2005

1 The Singularity will occur within the next few decades.

1.1 We will soon develop human-equivalent AI, either by brute-force simulation of the human brain, or by a more traditional engineered approach.

Let's build a brain!
Tags: ai
Sat, 11/05/2005

How hard would it be to build a human-level AI?

One interesting way of looking at this problem is to examine how hard it would be to run a computer simulation of a human brain.

( Read more... )
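One way to make the question concrete is to size the simulation's state. A back-of-the-envelope sketch, using order-of-magnitude assumptions of my own rather than the post's figures:

```python
# Rough memory sizing for a synapse-level brain simulation. All figures
# are order-of-magnitude assumptions, invented for illustration.

synapses = 1e15        # ~1e11 neurons x ~1e4 synapses each
bytes_per_synapse = 4  # one connection weight, stored as a 32-bit float
memory_bytes = synapses * bytes_per_synapse

print(memory_bytes / 1e12, "TB of synaptic state")  # → 4000.0 TB
```

And that's just the storage side; updating all of that state tens of times per second is a separate (and comparably daunting) compute problem.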

The rapture for nerds
Tags: singularity
Mon, 10/10/2005

Ellen commented the other day that the Singularity is essentially the nerd equivalent of the Rapture, which is an interesting way of looking at it.

The Sony Librie
Sun, 10/09/2005

Now for something completely different... because I'm a total geek, I just bought a Sony Librie.

The quick version:

  1. E-book technology has finally arrived. The Librie is a flawed but viable device.
  2. However, getting content for it is pretty much a showstopper. You probably shouldn't buy it.

Further thoughts on immortality
Tags: life extension
Sat, 10/08/2005

Another point occurs to me, having to do with demographics. Currently, certain demographic groups (for example, certain fundamentalist religious groups) have much higher birthrates than the population at large. The net result is a slow increase in the proportion of the population belonging to those groups, which may or may not be counteracted by other factors. However, the development of radical life extension would likely greatly accelerate the rate of that drift.

The transparent laboratory
Tags: transhumanism
Fri, 10/07/2005

I've just started reading David Brin's The Transparent Society. I've been recommending it to people for years as one of the most important books on its particular topic, but in spite of that, I've somehow never gotten around to reading it. His basic premise is that rather than trying to preserve our privacy, we should accept the inevitability of losing it, and instead focus on creating a "transparent society", where freedom is based on transparency rather than privacy. For example, although the police might be able to track our every move using street-mounted surveillance systems, we would have the ability to monitor what they were tracking. The basic idea is that abuses are largely prevented by the fact that they cannot occur in secret. It's a very interesting idea, and one which I think I'm coming to favor.

( Read more... )

The Singularity is near
Tags: singularity
Fri, 10/07/2005

Through a combination of good luck and low cunning, I've managed to get my sticky little fingers on an early copy of Kurzweil's latest book (many thanks to SIAI and AC2005 for providing my copies!)

The bottom line: it's everything I'd hoped for. Just as Spiritual Machines was pretty much an updated replacement for Intelligent Machines, The Singularity Is Near pretty much updates and replaces Spiritual Machines. Here, in no particular order, are my impressions from reading it:

( Read more... )