Monday, September 8, 2014


The first part of the famous article by Vernor Vinge.
We are on the verge of great change. Everything that was thought to require "thousands of centuries" will happen within the next hundred years. And for all my optimism, I would be more comfortable if these transcendental events were separated from us by a thousand years rather than twenty.

Technological Singularity
The original version of this article was presented by mathematician and author Vernor Vinge at the VISION-21 Symposium, held in 1993 by the NASA Lewis Research Center and the Ohio Aerospace Institute. In 2003 the author added comments to the article.

What is the "Singularity"?
The acceleration of technological progress is the main feature of the twentieth century. We are on the verge of change comparable to the appearance of man on Earth. The precise cause of this change is that the development of technology will inevitably lead to the creation of entities with intelligence greater than human.
Science may achieve the breakthrough by different routes (and this is one more argument that the breakthrough will happen):

1. Computers may "awaken", and superhuman intelligence will appear. (At the moment there is no consensus on whether we can create a machine equal to a man; but if we can, there is little doubt that more intelligent beings can be designed shortly thereafter.)

2. Large computer networks (and their combined users) may "wake up" as a superhumanly intelligent entity.

3. Human-machine interfaces may become so intimate that their users can reasonably be regarded as superhuman.

4. Biology may provide us with the means to improve the natural human intellect.

The first three possibilities depend directly on improvements in computer hardware. [In fact, the fourth possibility also depends on them, though indirectly.]
Progress in hardware has been strikingly steady over the past several decades. Based on this trend, I believe that intelligence surpassing man's will appear within the next thirty years.
(Charles Platt has noted that AI enthusiasts have been making similar claims for thirty years. So as not to be guilty of a relative-time ambiguity, let me be more specific: I will be surprised if this happens before 2005 or after 2030.) [Ten years have passed, but I still consider this window valid.]

What are the consequences of this event?
When progress is driven by an intellect superior to man's, that progress will be far more rapid. In fact, there is every reason to believe that such progress will produce ever more intelligent minds, ever more quickly.
The best analogy I can draw here is with our evolutionary past. Animals can adapt and invent, but no faster than natural selection does its work; in the case of natural selection, the world itself acts as its own simulator.
We humans have the ability to internalize the world and build cause-and-effect models in our heads, so we solve many problems thousands of times faster than the mechanism of natural selection. When it becomes possible to run those models at still higher speeds, we will enter a regime that differs from our human past no less radically than we humans differ from the lower animals.

Such an event would render the entire body of human rules obsolete, possibly in the blink of an eye. An uncontrollable chain reaction would develop exponentially, beyond any hope of regaining control of the situation. Changes that were thought to require "thousands of centuries" (if they happened at all) will likely occur within the next hundred years.

It is fair to call this event a singularity (even the Singularity, for the purposes of this essay). It is a point at which our old models must be discarded and a new reality rules.
It is a world whose outlines will grow clearer and clearer as it bears down on modern humanity, until the new reality overshadows the surrounding one and becomes commonplace.
And yet, when we finally reach such a point, the event will still be a great surprise and a greater unknown.
In the fifties, few foresaw it. Stan Ulam once retold the words of John von Neumann: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Von Neumann even uses the term "singularity", though it appears he was thinking of ordinary progress, not the creation of superhuman intellect. (In my own opinion, superhumanity is the essence of the Singularity; without it we would merely be glutted with technical riches, never properly absorbing them.)

In the sixties, there were already signs of recognition of what superhuman intelligence would mean for our lives. Irving John Good wrote: "Let us define an ultraintelligent machine as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of such machines is itself one of these intellectual activities, an ultraintelligent machine could design even better machines. There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind [three earlier works by Good are cited here]. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control ... and it is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make."

Good captured the essence of the runaway acceleration of progress, but did not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humanity's "tool", any more than humans are the tools of rabbits, robins, or chimpanzees.

Through the sixties, seventies, and eighties, awareness of the coming cataclysm spread and grew. Perhaps science fiction felt its impact first. After all, it is science fiction's business to speculate about what all these technologies will do to us. More and more, writers were haunted by the feeling that the future is hidden behind a wall. Once, imagination easily carried them millions of years ahead. Now they find that their most diligent extrapolations lead to the unknowable ... tomorrow. Once, galactic empires might have seemed a posthuman reality. Now, sadly, even interplanetary states look that way.

[In fact, now, at the beginning of the twenty-first century, novels of space adventure can be sorted into groups by how their authors cope with the likelihood of superhuman machines. Writers use a whole arsenal of tricks to rule such machines out or to keep them at a safe distance from the scene.]

So what awaits us in the two or three decades it will take us to slide toward the edge? How will the approach of the Singularity register in the human perception of the world?
For a while yet, the critics of machine intelligence will have a good press. After all, until there is hardware comparable in power to the human brain, it is naive to hope to create an intelligence comparable to the human mind (or better).
(There is the implausible possibility that human-level minds could be achieved on less powerful hardware, if we are willing to sacrifice speed and to accept an artificial creature that is slow in the most literal sense of the word. But most likely, composing the software for it would be a nontrivial undertaking, with many false starts and much experimentation.
And if so, then the era of machines endowed with self-awareness will not arrive until hardware has been designed that is substantially more powerful than man's natural equipment.)

However, as time passes we will see more and more symptoms. The dilemma felt by science-fiction writers will be perceived in creative efforts of other kinds. (I know that thoughtful comic-book writers already worry about how to create spectacular visual effects when everything visible can be produced with commonly available hardware.)
We will see automation taking over tasks of ever higher level. Already there are tools (symbolic-math programs, CAD) that release us from most of the tedious routine. There is another side to this: truly productive work is becoming the province of a steadily shrinking elite of mankind. With the advent of the Singularity, we will see the predictions of true technological unemployment finally come true.

Another sign of movement toward the Singularity: ideas themselves should spread ever faster, and even the most radical of them will quickly become common currency.

And what will the arrival of the Singularity itself be like? What can be said about the actual character of this event?
Since it involves an intellectual runaway, it will probably be the most rapid technological revolution of all those previously known to us. It will most likely come out of the blue, even to the scientists involved. ("But all our previous models were catatonic! We were just tweaking some parameters ...") If networking is widespread enough (ubiquitous embedded systems), it may seem as if our artifacts had suddenly awakened.

And what will happen a month or two (or a day or two) after that? There is only one analogy I can draw: the emergence of mankind. We will find ourselves in the posthuman era. And for all my technological optimism, I would be far more comfortable if these transcendental events were separated from us by a thousand years rather than twenty.

How to avoid the Singularity? (Technological Singularity, Part 2)

The second part of the famous article "Technological Singularity" by mathematician and author Vernor Vinge.

Well, perhaps the Singularity will not occur at all.
Sometimes I try to imagine the symptoms by which it would become clear to us that the Singularity is not to be expected.
There are the popular and well-regarded arguments of Penrose and Searle against the practicality of machine intelligence.
In August 1992, the Thinking Machines community arranged a brainstorming workshop to examine the question "How do we build a thinking machine?"
As you might guess from that premise, the participants were not especially sympathetic to the arguments against machine intelligence.
In fact, there was general agreement that minds can exist on a nonbiological substrate and that algorithms are an essential component of mind.
However, there was heated debate about the raw hardware power present in organic brains.
A minority held that the largest computers of 1992 were within three orders of magnitude of the power of the human brain. The majority of the participants agreed with the calculations of Hans Moravec, which suggested that hardware parity between us and our machines is another ten to forty years away.
And yet there was another minority who suggested that the computational power of individual neurons may be far greater than is generally assumed.
If so, then our modern computers lag by as much as ten orders of magnitude behind the equipment hidden in our craniums.
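The lag estimates above can be turned into rough arithmetic. A minimal sketch (the two-year doubling period is an illustrative assumption, not a figure from the essay):

```python
import math

def years_to_close_gap(orders_of_magnitude, doubling_period_years=2.0):
    """Years of steady hardware doubling needed to close a gap of
    the given number of base-10 orders of magnitude."""
    doublings = orders_of_magnitude * math.log2(10)
    return doublings * doubling_period_years

# Minority view of 1992: hardware ~3 orders of magnitude behind the brain
print(round(years_to_close_gap(3)))   # about 20 years
# Pessimistic view: ~10 orders of magnitude behind
print(round(years_to_close_gap(10)))  # about 66 years
```

On these assumptions a three-order gap closes in about twenty years, roughly consistent with Moravec's ten-to-forty-year estimate, while a ten-order gap pushes parity well past the middle of the century.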
If this thesis is true (or, for that matter, if the views of Penrose and Searle are justified), we may never live to see the Singularity. Instead, in the early twenty-first century we would find the steeply rising performance curves of our hardware beginning to flatten, because of our inability to automate the design work needed to develop further hardware improvements.
We would end up with some very powerful computers, but without the ability to push forward. Commercial digital signal processing might be astonishing, giving an analog appearance even to digital operations, but nothing would ever "wake up", and the intellectual runaway that is the very essence of the Singularity would never begin.
Such a state of affairs would likely be regarded as a Golden Age ... and the end of progress. It would be something very like the future predicted by Gunther Stent, who made it clear, speaking of the idea of creating a superhuman mind, that this alone would be enough to break his projections.

[The preceding paragraph omits what I consider the strongest argument against the possibility of a technological singularity: even if we can build computers with the raw hardware power, we may be unable to organize their components so that the machine thinks with superhuman intelligence. For the techno-mechanists this would appear, it seems, as something like "the failure of the software complexity problem". Attempts would be made to run ever larger software projects, but programming would not be up to the task, and we would never master the secrets of the biological models that could bring the "training" and "embryonic development" of machines to life. In the end we would be left with the following semi-serious counterpoint of Murphy to Moore's Law: "The maximum possible efficiency of a software system grows in proportion to the logarithm of the efficiency (i.e., speed, bandwidth, memory capacity) of the underlying hardware." In such a world without a Singularity, the future of programmers is sad and hopeless. (Imagine having to cope with centuries of accumulated legacy code!) So in the coming years, I believe, special attention should be paid to two important trends: progress in large software-development projects, and progress in applying biological paradigms to large-scale networks and massively parallel systems.]
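The "Murphy's counterpoint" quoted above is a semi-serious formula, but it can be illustrated with a toy calculation (the base-10 logarithm and the unit scale are my assumptions; the essay does not fix them):

```python
import math

def usable_software_efficiency(hardware_efficiency):
    """Murphy's counterpoint to Moore's Law: the maximum achievable
    efficiency of a software system grows only as the logarithm of
    the raw hardware efficiency (speed, bandwidth, memory) beneath it."""
    return math.log10(hardware_efficiency)

# A thousand-fold hardware improvement yields 3 units of usable
# software efficiency; a million-fold improvement yields only 6.
print(usable_software_efficiency(1e3))  # -> 3.0
print(usable_software_efficiency(1e6))  # -> 6.0
```

In other words, exponential hardware growth would buy only linear growth in what software can actually deliver, which is exactly why the programmers of this un-singular future are "sad and hopeless".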

But if the technological Singularity can happen, it will happen. Even if all the nations of the world were aware of the "threat" and scared to death, progress would not stop. The competitive advantage (economic, military, even in the arts) of every advance in automation is so compelling that prohibiting such technologies merely guarantees that someone else will master them first.

Eric Drexler has made impressive forecasts of how far technology may develop and improve. He agrees that superhuman intelligence may appear in the near future. But Drexler argues that we can maintain control over such superhuman devices, so that the results of their work can be examined and used safely.

I argue that maintaining such control is intrinsically impractical.
Imagine yourself locked in your own home with only a single channel of information to the outside, limited by certain masters of yours.
If those masters think at a rate, say, a million times slower than you, there is little doubt that over a period of a few years (your time) you would invent a way to escape.
I call this "fast thinking" form of superintelligence "weak superhumanity". A "weakly superhuman" entity would simply be the equivalent of a human mind run at higher speed. It is hard to say exactly what "strong superhumanity" would consist of, but the difference appears to be profound.
Imagine a dog whose mind runs at tremendously accelerated speed. Would a thousand years of doggy thinking give mankind anything? Many speculations about the Overmind seem to be based on the weakly superhuman model.
I believe that the soundest guesses about the post-Singularity world can be built on assumptions about the nature of "strong superhumanity". I shall return to this question.
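The confinement thought experiment above is easy to quantify. A small sketch (the million-fold speed ratio comes from the text; the two subjective years are an illustrative choice):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

def outside_seconds(subjective_years, speed_ratio=1_000_000):
    """Outside (masters') time that elapses while a mind running
    speed_ratio times faster experiences subjective_years."""
    return subjective_years * SECONDS_PER_YEAR / speed_ratio

# Two years of the prisoner's plotting pass for the million-times-
# slower masters in about a minute of their own time:
print(round(outside_seconds(2)))  # ~63 seconds
```

From the masters' perspective, the "years" the captive mind spends engineering its escape amount to about a minute, which is the whole point of why such confinement is impractical.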

The other approach to the problem of maintaining control is the idea of building artificial restrictions into the freedom of action of the designed superhuman entity [for example, Asimov's Laws of Robotics].
I believe that any rules strict enough to be effective would result in a device whose capabilities are clearly narrower than those of unconstrained versions (thus competition would favor the development of the more dangerous models).

If the Singularity cannot be prevented or confined, just how harsh could the posthuman era be?
Well ... pretty harsh. The physical extinction of the human race is one possible consequence. (Or, as Eric Drexler put it in speaking of nanotechnology: given all that such technologies can do, governments might simply decide that they no longer need ordinary citizens.)
Yet physical extinction may not be the most terrible consequence. Think of the various ways we relate to animals.
In the posthuman world there would still be many niches in which human-equivalent automation would be in demand: embedded systems in self-governing devices, autonomous daemons handling the lower functions of larger sentient beings. ("A strong superhumanity would apparently be a Society of Mind composed of some very intelligent components.")
Some of these human equivalents might be used purely for digital signal processing. Others might remain very humanlike, though specialized, with a narrowness of profile that today would land them in a psychiatric hospital. Though none of these creatures would be people of flesh and blood, they would remain the closest things to us, modern humans, in the new environment.

[I am sure Irving Good would have had something to say about this (though I have not found any such mention by him). Good proposed a Meta-Golden Rule, which can be paraphrased as follows: "Treat your inferiors as you would have your superiors treat you." It is a wonderful paradox (and most of my friends do not believe it), since the payoff, calculated game-theoretically, is so hard to articulate. Still, if we were able to follow this rule, in a sense that might say something about the prevalence of such good will in the universe.]

I have already argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of natural human competitiveness and of the possibilities inherent in technology. And yet we are the initiators. Even the greatest avalanche is triggered by the smallest actions. We are free to set the initial conditions so that everything happens with the least possible damage to us.

How much use foresight and thoughtful planning will be may depend on how the technological Singularity comes about: whether it is an "abrupt transition" or a "smooth transition". An abrupt transition is one in which the shift to superhuman control takes place within a few hundred hours (as in "Blood Music" by Greg Bear). It seems to me that planning anything around an abrupt transition would be extremely difficult. It would be like the avalanche I wrote about in this essay in 1993. The most dreadful form of abrupt transition might result from an arms race, with two powers pushing their separate "Manhattan Projects" toward superhuman power. The equivalent of decades of human-level espionage could be compressed into the last few hours of the race's existence, and all human control and judgment would give way to some extremely destructive purposes.
On the other hand, a "smooth transition" might take decades, perhaps a century. That situation seems more amenable to planning and thoughtful experimentation. Hans Moravec considers such a smooth transition in his book "Robot: Mere Machine to Transcendent Mind".

Of course (as in the case of avalanches), one cannot say with certainty what the guiding push will actually be.
