November 21, 2010

Why Work Toward the Singularity?

If you traveled backward in time to witness a critical moment in the invention of science, or the creation of writing, or the evolution of Homo sapiens, or the beginning of life on Earth, no human judgment could possibly encompass all the future consequences of that event – and yet there would be the feeling of being present at the dawn of something worthwhile. The most critical moments of history are not the closed stories, like the start and finish of wars, or the rise and fall of governments. The story of intelligent life on Earth is made up of beginnings.

Imagine traveling back in time to witness a critical moment in the dawn of human intelligence. Suppose that you find an alien bystander already on the scene, who asks: "Why are you so excited? What does it matter?" The question seems almost impossible to answer; it demands a thousand answers, or none. Someone who valued truth and knowledge might answer that this was a critical moment in the human quest to learn about the universe – in fact, the beginning of that quest. Someone who valued happiness might answer that the rise of human intelligence was a necessary precursor to vaccines, air conditioning, and the many other sources of happiness and solutions to unhappiness that have been produced by human intelligence over the ages. There are people who would answer that intelligence is meaningful in itself; that "It is better to be Socrates unsatisfied than a fool satisfied; better to be a man unsatisfied than a pig satisfied." A musician who chose that career believing that music is an end in itself might answer that the rise of human intelligence mattered because it was necessary to the birth of Bach; a mathematician could single out Euclid; a physicist might cite Newton or Einstein. Someone with an appreciation of humanity, beyond the individual humans, might answer that this was a critical moment in the relation of life to the universe – the beginning of humanity's growth, of our acquisition of strength and understanding, eventually spreading beyond Earth to the rest of the galaxy and the universe.

The beginnings of human intelligence, or the invention of writing, probably went unappreciated by the individuals who were present at the time. But such developments do not always take their creators unaware. Francis Bacon, one of the critical figures in the invention of the scientific method, made astounding claims about the power and universality of his new mode of reasoning and its ability to improve the human condition – claims which, from the perspective of a 21st-century human, turned out to be exactly right. Not all good deeds are unintentional. It does occasionally happen that humanity's victories are won not by accident but by people making the right choices for the right reasons.

Why is the Singularity worth doing? The Singularity Institute for Artificial Intelligence can't possibly speak for everyone who cares about the Singularity. We can't even presume to speak for the volunteers and donors of the Singularity Institute. But it seems like a good guess that many supporters of the Singularity have in common a sense of being present at a critical moment in history; of having the chance to win a victory for humanity by making the right choices for the right reasons. Like the spectator at the dawn of human intelligence, anyone trying to answer directly why superintelligence matters chokes on a dozen simultaneous replies; what matters is the entire future growing out of that beginning.

But it is still possible to be more specific about what kinds of problems we might expect to be solved. Some of the specific answers seem almost disrespectful to the potential bound up in superintelligence; human intelligence is more than an effective way for apes to obtain bananas. Nonetheless, modern-day agriculture is very effective at producing bananas, and if you had advanced nanotechnology at your disposal, energy and matter might be plentiful enough that you could produce a million tons of bananas on a whim. In a sense that's what nanotechnology is – good old-fashioned material technology pushed to the limit. This only raises the question "So what?", but the Singularity speaks to that question as well; if people can become smarter, this moves humanity forward in ways that transcend the faster and easier production of more and more bananas. For one thing, we may become smart enough to answer the question "So what?"

In one sense, asking what specific problems will be solved is like asking Benjamin Franklin in the 1700s to predict electronic circuitry, computers, Artificial Intelligence, and the Singularity on the basis of his experimentation with electricity. Setting an upper bound on the impact of superintelligence is impossible; any given upper bound could turn out to have a simple workaround that we are too young as a civilization, or insufficiently intelligent as a species, to see in advance. We can try to describe lower bounds; if we can see how to solve a problem using more or faster technological intelligence of the kind humans use, then at least that problem is probably solvable for genuinely smarter-than-human intelligence. The problem may not be solved using the particular method we were thinking of, or the problem may be solved as a special case of a more general challenge; but we can still point to the problem and say: "This is part of what's at stake in the Singularity."

If humans ever discover a cure for cancer, that discovery will ultimately be traceable to the rise of human intelligence, so it is not absurd to ask whether a superintelligence could deliver a cancer cure in short order. If anything, creating superintelligence only for the sake of curing cancer would be swatting a fly with a sledgehammer. In that sense it is probably unreasonable to visualize a significantly smarter-than-human intelligence as wearing a white lab coat and working at an ordinary medical institute, doing the same kind of research we do, only better, in order to solve cancer specifically as a problem. For example, cancer can be seen as a special case of the more general problem "The cells in the human body are not externally programmable." This general problem is very hard from our viewpoint – it requires full-scale nanotechnology to solve the general case – but if the general problem can be solved, it simultaneously solves cancer, spinal paralysis, regeneration of damaged organs, obesity, many aspects of aging, and so on. Or perhaps the real problem is that the human body is made out of cells, or that the human mind is implemented atop a specific chunk of vulnerable brain – although calling these things problems raises philosophical issues not discussed here.

Singling out "cancer" as the problem is part of our culture's particular outlook and technological level. But if cancer or any generalization of "cancer" is solved soon after the rise of smarter-than-human intelligence, then it makes sense to regard the quest for the Singularity as a continuation by other means of the quest to cure cancer. The same could be said of ending world hunger, curing Alzheimer's disease, or placing on a voluntary basis many things which at least some people would regard as undesirable: illness, destructive aging, human stupidity, short lifespans. Maybe death itself will turn out to be curable, though that would depend on whether the laws of physics permit true immortality. At the very least, the citizens of a post-Singularity civilization should have an enormously higher standard of living and enormously longer lifespans than we see today.

What kind of problems can we reasonably expect to be solved as a side effect of the rise of superintelligence; how long will it take to solve them after the Singularity; and how much will it cost the beneficiaries? A conservative version of the Singularity would start with the rise of smarter-than-human intelligence in the form of humans whose brains have been enhanced by purely biological means. This scenario is more "conservative" than a Singularity which takes place as a result of brain-computer interfaces or Artificial Intelligence, because all thinking would still take place on neurons with a characteristic limiting speed of around 200 operations per second; progress would still occur at a humanly comprehensible pace. In this case, the first benefits of the Singularity would probably resemble the benefits of ordinary human technological thinking, only more so. Any given scientific problem could benefit from having a few Einsteins or Edisons dumped into it, but it would still require time for research, manufacturing, commercialization, and distribution.

Human genius is not the only factor in human science, but it can and does speed things up where it is present. Even if intelligence enhancement were treated solely as a means to an end, for solving some very difficult scientific or technological problem, it would still be worthwhile for that reason alone. The solution might not be rapid, even after the problem of intelligence enhancement had been solved, but that assumes the conservative scenario, and the conservative scenario wouldn't last long. Some of the areas most likely to receive early attention would be technologies involved in more advanced forms of superintelligence: broadband brain-computer interfaces or full-fledged Artificial Intelligence. The positive feedback dynamic of the Singularity – smarter minds creating still smarter minds – doesn't need to wait for an AI that can rewrite its own source code; it would also apply to enhanced humans creating the next generation of Singularity technologies.

The Singularity creates speed for two reasons. First, positive feedback: intelligence gaining the ability to improve intelligence directly. Second, the shift of thinking from human neurons to more readily expandable and enormously faster substrates. A brain-computer interface would probably offer a limited but real version of both capabilities; the external brainpower would be both fast and programmable, although still yoked to an ordinary human brain. A true Artificial Intelligence, or a human scanned completely into a sufficiently advanced computer, would have total self-access. At this point one begins to deal with superintelligence as the successor to current scientific research, the global economy, and in fact the entire human condition, rather than as a superintelligence plugging into the current system as an improved component. At this point human nature sometimes creates an "Us vs. Them" view of the situation – the instinct that people who are different are therefore on a different side – but if humans and superintelligences are playing on the same team, it would be straightforward for the most advanced mind at any given time to offer a helping hand to anyone lagging behind; there is no technological reason why humans alive at the time of the Singularity could not participate in it directly. In our view this is the chief benefit of the Singularity to existing humans: not technologies handed down from above, but a chance to become smarter and participate directly in creating the future.

One idea that is often discussed along with the Singularity is the proposal that, in human history up until now, it has taken less and less time for major changes to occur. Life first arose around three and a half billion years ago; it was only eight hundred and fifty million years ago that multi-celled life arose; only sixty-five million years since the dinosaurs died out; only five million years since the hominid family split off within the primate order; and less than a hundred thousand years since the rise of Homo sapiens sapiens in its modern form. Agriculture was invented ten thousand years ago; Socrates lived two and a half thousand years ago; the printing press was invented five hundred years ago; the computer was invented around sixty years ago. You can't set a speed limit on the future by looking at the pace of past changes, even if it sounds reasonable at the time; history shows that this method produces very poor predictions. From an evolutionary perspective it is absurd to expect major changes to happen in a handful of centuries, but today's changes occur on a cultural timescale, which bypasses evolution's speed limits. We should be wary of confident predictions that transhumanity will still be limited by the need to seek venture capital from humans, or that Artificial Intelligences will be slowed to the rate of their human assistants (both of which we have heard firmly asserted on more than one occasion).

We can't see in advance the technological pathway the Singularity will follow, since if we were that smart ourselves we'd already have done it. But it's possible to toss out broad scenarios, such as "A smarter-than-human AI absorbs all unused computing power on the then-existent Internet in a matter of hours; uses this computing power and smarter-than-human design ability to crack the protein folding problem for artificial proteins in a few more hours; emails separate rush orders to a dozen online peptide synthesis labs, and in two days receives via FedEx a set of proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which can build more advanced nanotechnology." This is not a smarter-than-human solution; it is a human imagining how to throw a magnified, sped-up version of human design abilities at the problem. There are admittedly initial difficulties facing a superfast mind in a world of slow human technology. Even humans, though, could probably solve those difficulties, given hundreds of years to think about it. And we have no way of knowing that a smarter mind can't find even better ways.

If the Singularity involves not just a few smarter-than-usual researchers plugging into standard human organizations, but the transition of intelligent life on Earth to a smarter and rapidly improving civilization with an enormously higher standard of living, then it makes sense to regard the quest to create smarter minds as a means of directly solving such contemporary problems as cancer, AIDS, world hunger, poverty, et cetera. And not just the huge visible problems; the huge silent problems are also important. If modern-day society tends to drain the life force from its inhabitants, that's a problem. Aging and slowly losing neurons and vitality is a problem. In some ways the basic nature of our current world just doesn't seem very pleasant, due to cumulative minor annoyances almost as much as major disasters. This may usually be considered a philosophical problem, but becoming smarter is something that can actually address philosophical problems.

The transformation of civilization into a genuinely nice place to live could occur, not in some unthinkably distant million-year future, but within our own lifetimes. The next leap forward for civilization will happen not because of the slow accumulation of ordinary human technological ingenuity over centuries, but because at some point in the next few decades we will gain the technology to build smarter minds that build still smarter minds. We can create that future and we can be part of it.

If there's a Singularity effort that has a strong vision of this future and supports projects that explicitly focus on transhuman technologies such as brain-computer interfaces and self-improving Artificial Intelligence, then humanity may succeed in making the transition to this future a few years earlier, saving millions of people who would otherwise have died. The planetary death rate is around fifty-five million people per year (UN statistics) – roughly 150,000 lives per day, or 6,000 lives per hour. These deaths are not just premature but perhaps actually unnecessary. At the very least, the amount of lifespan being lost is far greater than modern statistics would suggest.
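As a quick sanity check on that arithmetic – a minimal sketch in Python, taking the fifty-five-million-per-year figure quoted above as given rather than independently verified:

    # Per-day and per-hour death rates implied by ~55 million deaths per year.
    # The annual figure is the one quoted above (attributed to UN statistics).
    deaths_per_year = 55_000_000
    deaths_per_day = deaths_per_year / 365     # about 150,700 per day
    deaths_per_hour = deaths_per_day / 24      # about 6,280 per hour
    print(f"{deaths_per_day:,.0f} per day, {deaths_per_hour:,.0f} per hour")
    # prints: 150,685 per day, 6,279 per hour

The rounded figures in the text (150,000 per day, 6,000 per hour) are consistent with the annual total.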

There are also dangers for the human species if we can't make the breakthrough to superintelligence reasonably soon. Albert Einstein is often quoted as saying: "The problems that exist in the world today cannot be solved by the level of thinking that created them." We agree with the sentiment, although Einstein may not have had this particular solution in mind. In pointing out that dangers exist it is not our intent to predict a dystopian future; so far, the doomsayers have repeatedly been proven wrong. Humanity has faced the future squarely, rather than running in the other direction as the doomsayers wished, and has thereby succeeded in avoiding the oft-predicted disasters and continuing on to higher standards of living. We avoided disaster by inventing technologies which enable us to cope with complex futures. Better, more sustainable farming technologies have enabled us to support the increased populations produced by modern medicine. The printing press, telegraph, telephone, and now the Internet enable humanity to apply its combined wisdom to problem-solving. If we'd been forced to move into the future without these technologies, disaster probably would have resulted. The technology humanity needs to cope with the coming decades may be the technology of smarter-than-human intelligence. If we have to face challenges like basement laboratories creating lethal viruses, or nanotechnological arms races, with just our human intelligence, we may be in trouble.

Finally, there is the integrity of the Singularity itself to safeguard. This is not necessarily the most difficult part of the challenge, compared to the problem of creating smarter-than-human intelligence in the first place, but it needs to be considered. It is possible that the integrity of the Singularity needs no safeguarding; that any human from Gandhi to Stalin, if enhanced sufficiently far beyond human intelligence, would end up being wiser and more moral than anyone alive today; and that the same holds true for all minds-in-general, from enhanced chimpanzees to arbitrarily constructed Artificial Intelligences. But this is not something we know in advance. Since we don't know how many moral errors persist in our own civilization, safeguarding the integrity of the Singularity – in our view – consists more of ensuring the will and ability to grow wiser with increased intelligence than of trying to find perfect candidates for human intelligence enhancement. An analogous problem exists for Artificial Intelligence, where the task is not enforcing servitude on the AI or coming up with a perfect moral code to "hardwire", but rather transferring over the features of human cognition that let us conceive of a morality improving over time (see the section on Friendly Artificial Intelligence for more information).

Safeguarding the integrity of the Singularity is another reason for facing the challenge of the Singularity squarely and deliberately. It may be that human intelligence enhancement will turn out well regardless, but there is still no point in taking unnecessary risks by driving the projects underground. If human intelligence enhancement is banned by the FDA, for example, this just means that the first experiments will take place outside the US, slightly later than they otherwise would have – increasing the possible risks and delaying the possible benefits. If human intelligence enhancement is banned by the UN, this means the experiments will take place offshore, out of the public eye, and perhaps sponsored by groups that we would prefer not be involved – although there is a significant chance it would turn out well regardless. In the case of Artificial Intelligence there are certain specific things that must be done to place the AI in the same moral "frame of reference" as humanity – to ensure the AI absorbs our virtues, corrects any inadvertently absorbed faults, and goes on to develop along much the same path as a recursively self-improving human altruist. Friendly Artificial Intelligence is not necessarily more difficult than the problem of AI itself, but it does need to be handled along with the creation of Artificial Intelligence. In both cases, we can best safeguard the integrity of the Singularity by confronting the Singularity intentionally and with full awareness of the responsibilities involved.

What does it mean to confront the Singularity? Despite the immensity of the Singularity, sparking the Singularity – creating the first smarter-than-human intelligence – is a problem of science and technology. The Singularity is something that we can actually go out and do, not a philosophical way of describing something that inevitably happens to humanity. It takes the sweep of human progress and a whole technological economy to create the potential for the Singularity, just as it takes the entire framework of science to create the potential for a cancer cure, but it also takes a deliberate effort to run the last mile and fulfill that potential. If someone asks you whether you're interested in donating to AIDS research, you might reply that you believe cancer research is relatively underfunded and that you are donating there instead; you would probably not say that by working as a stockbroker you support the world economy in general and thereby contribute as much to humanity's progress toward an AIDS cure as anyone. In that sense, sparking the Singularity is no different from any other grand challenge – someone has to do it.

At this moment in time, there is a tiny handful of people who realize what's going on and are trying to do something about it. It is not quite true that if you don't do it, no one will, but the pool of other people who will do it if you don't is smaller than you might think. If you're fortunate enough to be one of the few people who currently know what the Singularity is and would like to see it happen – even if you learned about the Singularity just now – we need your help because there aren't many people like you. This is the one place where your efforts can make the greatest possible difference – not just because of the tremendous stakes, though that would be far more than enough in itself, but because so few people are currently involved.

The Singularity Institute exists to carry out the mission of the Singularity-aware – to accelerate the arrival of the Singularity in order to hasten its human benefits; to close the window of vulnerability that exists while humanity cannot increase its intelligence along with its technology; and to protect the integrity of the Singularity by ensuring that those projects which finally implement the Singularity are carried out in full awareness of the implications and without distraction from the responsibilities involved. That's our dream. Whether it actually happens depends on whether enough people take the Singularity seriously enough to do something about it – whether humanity can scrape up the tiny fraction of its resources needed to face the future deliberately and firmly.

We can do better. The future doesn't have to be the dystopia promised by doomsayers. The future doesn't even have to be the flashy yet unimaginative chrome-and-computer world of traditional futurism. We can become smarter. We can step beyond the millennia-old messes created by human-level intelligence. Humanity can solve its problems – both the huge visible problems everyone talks about and the huge silent problems we've learned to take for granted. If the nature of the world we live in bothers you, there is something rational you can do about it. We can do better with your support.

Don't be a bystander at the Singularity. You can direct your effort at the point of greatest impact – the beginning.
