Summary
AI safety is an interdisciplinary field concerned with preventing accidents, misuse, or other harmful consequences that could result from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to make AI systems moral and beneficial, as well as technical problems such as monitoring systems for risks and making them highly reliable. Beyond AI research, it involves developing norms and policies that promote safety.

AI researchers hold widely differing opinions about the severity and primary sources of risk posed by AI technology, though surveys suggest that experts take high-consequence risks seriously. In two surveys of AI researchers, the median respondent was optimistic about AI overall but placed a 5% probability on an "extremely bad (e.g. human extinction)" outcome of advanced AI. In a 2022 survey of the natural language processing (NLP) community, 37% agreed or weakly agreed that it is plausible that AI decisions could lead to a catastrophe "at least as bad as an all-out nuclear war."

Scholars discuss current risks from critical systems failures, bias, and AI-enabled surveillance; emerging risks from technological unemployment, digital manipulation, and weaponization; and speculative risks from losing control of future artificial general intelligence (AGI) agents. Some have criticized concerns about AGI, such as Andrew Ng, who compared them in 2015 to "worrying about overpopulation on Mars when we have not even set foot on the planet yet." Stuart J. Russell, by contrast, urges caution, arguing that "it is better to anticipate human ingenuity than to underestimate it."

Risks from AI began to be seriously discussed at the start of the computer age. As Norbert Wiener wrote: "Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes."