Technological Singularity - History of the Idea

In the middle of the 19th century, Friedrich Engels wrote that science moves forward in proportion to the "mass of knowledge" inherited from previous generations. He proposed the more formal mathematical claim that, since the 16th century, the development of the sciences had been increasing in proportion to the square of the distance in time from its starting point.
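
Restated in modern notation (a reconstruction; Engels gave no explicit formula), the claim is that the accumulated body of science S(t) grows quadratically with elapsed time:

    S(t) \propto (t - t_0)^2

where t_0 marks the 16th-century starting point, so the rate of progress dS/dt grows linearly and without bound.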

In 1847, R. Thornton, the editor of The Expounder of Primitive Christianity, wrote about the recent invention of a four-function mechanical calculator:

...such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!

In 1951, Alan Turing spoke of machines outstripping humans intellectually:

once the machine thinking method has started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.

In the mid-1950s, Stanislaw Ulam had a conversation with John von Neumann in which von Neumann spoke of "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

In 1965, I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a singularity).
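
As a purely illustrative sketch, the shape of this feedback loop can be caricatured in a few lines of Python. The growth rule below is a hypothetical assumption chosen for demonstration, not Good's own mathematics: each design cycle's improvement is proportional to the machine's current intelligence, so gains compound faster and faster.

    # Toy model of Good's recursive self-improvement loop.
    # ASSUMPTION: each design cycle multiplies intelligence by a factor
    # that itself scales with current intelligence (hypothetical rule).
    intelligence = 1.0  # normalized so 1.0 = human level
    gain = 0.1          # fraction of current capability turned into improvement
    for generation in range(1, 11):
        intelligence *= 1 + gain * intelligence
        print(f"generation {generation:2d}: intelligence = {intelligence:6.2f}")

The output starts with small steps (1.10, 1.22, 1.37, ...) and reaches roughly six times the starting level after ten cycles, accelerating the whole way: each improvement makes the next improvement larger.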

In 1983, mathematician and author Vernor Vinge greatly popularized Good’s notion of an intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines, writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

In 1984, Samuel R. Delany used "cultural fugue" as a plot device in his science fiction novel Stars in My Pocket Like Grains of Sand; the terminal runaway of technological and cultural complexity in effect destroys all life on any world on which it transpires, a process poorly understood by the novel's characters, and against which they seek a stable defense.

In 1985, Ray Solomonoff introduced the notion of an "infinity point" in the time scale of artificial intelligence and analyzed the magnitude of the "future shock" that "we can expect from our AI expanded scientific community", along with its social effects. Estimates were made "for when these milestones would occur, followed by some suggestions for the more effective utilization of the extremely rapid technological growth that is expected."

Vinge also popularized the concept in SF novels such as Marooned in Realtime (1986) and A Fire Upon the Deep (1992). The former is set in a world of rapidly accelerating change leading to the emergence of more and more sophisticated technologies separated by shorter and shorter time intervals, until a point beyond human comprehension is reached. The latter starts with an imaginative description of the evolution of a superintelligence passing through exponentially accelerating developmental stages ending in a transcendent, almost omnipotent power unfathomable by mere humans. It is also implied that the development does not stop at this level.

In his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore's law to make predictions about the future of artificial life. Moravec outlines a timeline and a scenario in which robots will evolve into a new series of artificial species, starting around 2030–2040. In Robot: Mere Machine to Transcendent Mind, published in 1998, Moravec further considers the implications of evolving robot intelligence, generalizing Moore's law to technologies predating the integrated circuit and speculating about a coming "mind fire" of rapidly expanding superintelligence, similar to Vinge's ideas.

A 1993 article by Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era", was widely disseminated on the internet and helped to popularize the idea. This article contains the oft-quoted statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge refines his estimate of the time scales involved, adding, "I'll be surprised if this event occurs before 2005 or after 2030."

Vinge predicted four ways the singularity could occur:

  1. The development of computers that are "awake" and superhumanly intelligent.
  2. Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
  3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  4. Biological science may find ways to improve upon the natural human intellect.

Vinge continues by predicting that superhuman intelligences will be able to enhance their own minds faster than their human creators. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." He predicts that this feedback loop of self-improving intelligence will cause large amounts of technological progress within a short period, and that the creation of superhuman intelligence represents a breakdown in humans' ability to model their future. He argues that authors cannot write realistic characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. Vinge named this event "the Singularity".

Damien Broderick's popular science book The Spike (1997) was the first to investigate the technological singularity in detail.

In 2000, Bill Joy, a prominent technologist and co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity in his widely read Wired essay "Why the Future Doesn't Need Us".

In 2005, Ray Kurzweil published The Singularity Is Near, which brought the idea of the singularity to the popular media both through the book's accessibility and a publicity campaign that included an appearance on The Daily Show with Jon Stewart. The book stirred intense controversy, in part because Kurzweil's utopian predictions contrasted starkly with other, darker visions of the possibilities of the singularity. Kurzweil, his theories, and the controversies surrounding them were the subject of Barry Ptolemy's documentary Transcendent Man.

In 2007, Eliezer Yudkowsky suggested that many of the different definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.

In 2008, Robin Hanson (taking "singularity" to refer to sharp increases in the exponent of economic growth) lists the Agricultural and Industrial Revolutions as past singularities. Extrapolating from such past events, Hanson proposes that the next economic singularity should increase economic growth by a factor of between 60 and 250. An innovation that allowed the replacement of virtually all human labor could trigger this event.
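
A rough worked example of what such a multiplier means (the 15-year doubling time below is an illustrative approximation of recent world-economy behavior, not Hanson's exact figure): if the industrial economy doubles roughly every 15 years and the growth rate is multiplied by a factor k, the post-transition doubling time becomes

    T' = \frac{15\ \text{years}}{k}, \qquad k \in [60, 250] \;\Rightarrow\; T' \approx 3\ \text{weeks to}\ 3\ \text{months}

so the economy would double in weeks or months rather than decades.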

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, whose stated mission is "to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges." Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2010, Aubrey de Grey applied the term "Methuselarity" to the point at which medical technology improves so fast that expected human lifespan increases by more than one year per year. Also in 2010, in Apocalyptic AI – Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Robert Geraci offered an account of the developing "cyber-theology" inspired by singularity studies. Bruce Sterling's 1996 novel Holy Fire explores some of these themes, postulating a Methuselarity that becomes a gerontocracy.
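
De Grey's verbal definition of the Methuselarity translates into a simple condition (a reconstruction, not his notation): writing L(t) for remaining expected lifespan at calendar time t, the Methuselarity is the point after which

    \frac{dL}{dt} > 1

that is, each elapsed year of calendar time adds more than one year of expected life.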

In 2011, Kurzweil reviewed existing trends and concluded that the singularity was increasingly likely to occur around 2045. He told Time magazine: "We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence."
