The technological singularity (or simply the singularity) is a hypothetical future point at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles. Each new, more intelligent generation appears more and more rapidly, causing an "explosion" in intelligence and producing a powerful superintelligence that qualitatively far surpasses all human intelligence.
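The feedback loop in Good's model can be illustrated with a toy simulation. This is purely a mathematical sketch under assumed parameters (baseline capability, per-cycle gain, cycle time), not an empirical claim about real AI systems: each cycle multiplies capability, and smarter agents complete the next cycle faster, so total elapsed time converges toward a finite limit while capability diverges.

```python
# Toy model of an "intelligence explosion" feedback loop.
# All parameter values are illustrative assumptions, not measurements.

def intelligence_explosion(i0=1.0, gain=0.5, t0=1.0, generations=10):
    """Return (elapsed_time, capability) after each self-improvement cycle.

    i0:   starting capability (human baseline = 1.0)        [assumed]
    gain: fractional capability gain per cycle              [assumed]
    t0:   duration of the first cycle; a cycle started at
          capability c takes t0 / c                          [assumed]
    """
    capability, elapsed = i0, 0.0
    history = []
    for _ in range(generations):
        elapsed += t0 / capability   # smarter agents improve themselves faster
        capability *= (1.0 + gain)   # each cycle multiplies capability
        history.append((elapsed, capability))
    return history

for t, c in intelligence_explosion():
    print(f"t = {t:.3f}   capability = {c:.2f}")
```

With these parameters the cycle times form a geometric series (1, 1/1.5, 1/1.5², ...), so the total time converges to t0 / (1 - 1/(1+gain)) = 3.0 while capability grows without bound: a finite-time "singularity" in the model, which is exactly the intuition behind the runaway-reaction metaphor.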
The first person to use the concept of a "singularity" in the technological context was the 20th-century Hungarian-American mathematician John von Neumann. In 1958, Stanislaw Ulam reported an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint.
The concept and the term "singularity" were popularized by Vernor Vinge, first in a 1983 article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole", and later in his 1993 essay The Coming Technological Singularity. In the essay he wrote that the singularity would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate, and that he would be surprised if it occurred before 2005 or after 2030. Ray Kurzweil's 2005 book The Singularity Is Near, which predicts a singularity by 2045, was another significant contributor to the notion's wider circulation.
Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.
Raymond Kurzweil (/ˈkɜːrzwaɪl/; born February 12, 1948) is an American computer scientist, author, inventor, and futurist. He is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism.
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.