Vernor Vinge’s Latest Ideas About the Singularity in IEEE Spectrum
He notes that artificial systems can share data more directly than humans can, and predicts that this will give rise to a global network of superintelligence dwarfing human capability. Robinson also discusses how different the future could look after such an intelligence explosion. Solar energy is one example: the Earth receives far more solar energy than humanity currently captures, so capturing even a modestly larger fraction would open enormous room for civilizational growth. Ray Kurzweil writes that, thanks to successive paradigm shifts, the trend of exponential growth extends Moore’s law backward from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers.
However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different from the Moore’s-law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building software complex enough to bring about the singularity requires us first to have a detailed scientific understanding of how the human brain works, which we could use as an architectural guide, or else to create such an intelligence entirely de novo. That means knowing not just the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions give rise to human consciousness and original thought. Gaining this kind of comprehensive understanding of the brain is not impossible.
But my favorite part by far is an essay by Vinge, an SF author and computer scientist whose singularity scenarios in his novels are both compelling and realistic. He breaks the singularity down into the five most likely scenarios, any of which he thinks could happen by 2030. Moore’s law, based on the observation that the number of transistors in a dense integrated circuit doubles about every two years, implies that the cost of computing halves approximately every two years.
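The doubling-and-halving relationship can be made concrete with back-of-envelope arithmetic. A minimal sketch, assuming an idealized clean two-year doubling period (real process generations only approximate this, and the starting transistor count below is arbitrary):

```python
def transistor_count(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Transistors on a chip after `years` of idealized Moore's-law doubling."""
    return initial * 2 ** (years / doubling_period)

def relative_cost(years: float, doubling_period: float = 2.0) -> float:
    """Cost per unit of computing relative to today: halves each doubling period."""
    return 0.5 ** (years / doubling_period)

# After a decade (5 doublings): 32x the transistors at ~1/32 the relative cost.
print(transistor_count(1_000_000, 10))  # 32000000.0
print(relative_cost(10))                # 0.03125
```

The two functions are mirror images: doubling capability at fixed cost is the same exponential as halving cost at fixed capability.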
Of the respondents, 12% said it was “quite likely”, 17% said it was “likely”, 21% said it was “about even”, 24% said it was “unlikely”, and 26% said it was “quite unlikely”. There are also discussions about adding superintelligence capabilities to humans. Proposed routes include brain–computer interfaces, biological alteration of the brain, brain implants, and genetic engineering.
A speed superintelligence describes an AI that can do everything a human can do, where the only difference is that the machine runs faster. For example, with a million-fold increase in the speed of information processing relative to humans, a subjective year would pass in about 30 physical seconds. Such a difference in information-processing speed could itself drive the singularity.
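The “subjective year in about 30 seconds” figure is simple division. A minimal sketch (the million-fold factor is the illustrative figure from the text, not a prediction):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

def physical_time_for_subjective_year(speedup: float) -> float:
    """Physical seconds that elapse while the fast thinker experiences one year."""
    return SECONDS_PER_YEAR / speedup

# A million-fold speed-up: one subjective year passes in roughly 31.6
# physical seconds, matching the "about 30 seconds" figure in the text.
print(physical_time_for_subjective_year(1e6))
```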
Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate — say — one million times slower than you, there is little doubt that over a period of years you could come up with a way to escape. I call this “fast thinking” form of superintelligence “weak superhumanity.” Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time.
Some intelligence technologies, like “seed AI”, may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on. Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.
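The compounding dynamic described above — improvements that enable further improvements — can be illustrated with a toy numerical model. This is entirely illustrative: the growth rule and parameters are assumptions chosen to show compounding, not claims about how real AI systems behave:

```python
def recursive_improvement(capability: float, rate: float, steps: int) -> list[float]:
    """Toy model of recursive self-improvement: each step boosts capability,
    and the boost itself grows over time, so gains compound faster than a
    plain fixed-rate exponential would."""
    history = [capability]
    for _ in range(steps):
        capability *= (1 + rate)
        rate *= 1.1  # assumption: a better system improves itself more effectively
        history.append(capability)
    return history

trajectory = recursive_improvement(1.0, 0.05, 20)
# The endpoint exceeds plain exponential growth at the starting rate, (1.05)**20,
# because the improvement rate itself keeps rising.
print(trajectory[-1])
```

Because each factor in the product is at least (1 + 0.05) and later factors are strictly larger, the trajectory dominates fixed-rate exponential growth, which is the qualitative point singularity arguments rest on.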
Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.