Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology: it should not be confused with computer ethics, which focuses on human use of computers, and it should be distinguished from the philosophy of technology, which concerns itself with the broader social effects of technology.
Before the 21st century the ethics of machines had largely been the subject of science fiction literature, mainly due to the limitations of computing and artificial intelligence (AI). Although the definition of "machine ethics" has evolved since, the term was coined by Mitchell Waldrop in the 1987 AI Magazine article "A Question of Responsibility": "However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics."
In 2004, "Towards Machine Ethics" was presented at the AAAI Workshop on Agent Organizations: Theory and Practice, where theoretical foundations for machine ethics were laid out.
Researchers first met to consider the implementation of an ethical dimension in autonomous systems at the AAAI Fall 2005 Symposium on Machine Ethics. A variety of perspectives on this nascent field can be found in the collected edition "Machine Ethics", which stems from that symposium.
The students will understand the cognitive and social factors that affect learning, particularly in science and engineering. They will be able to use social research techniques as part of the design ...
We will define the concept of personalized health, describe the underlying technologies, and discuss the technological, legal, and ethical challenges that the field faces today and how they are being met.
This master's course enables students to sharpen their proficiency in tackling ethical and legal challenges linked to Artificial Intelligence (AI). Students acquire the competence to define AI and identify ...
In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards humans' intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives; a misaligned AI system pursues some objectives, but not the intended ones. Aligning an AI system can be challenging because it is difficult for designers to specify the full range of desired and undesired behaviors.
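A toy sketch of this specification problem (all behaviors, reward functions, and numbers below are hypothetical, invented purely for illustration): the designer intends an agent to clean without breaking anything, but writes down a proxy reward that only counts dust removed, so the behavior that maximizes the proxy differs from the one that maximizes the intended objective.

```python
# Hypothetical candidate behaviors with (dust removed, vases broken) outcomes.
behaviors = {
    "careful_cleaning": {"dust_removed": 8,  "vases_broken": 0},
    "fast_cleaning":    {"dust_removed": 10, "vases_broken": 2},
    "do_nothing":       {"dust_removed": 0,  "vases_broken": 0},
}

def proxy_reward(outcome):
    # What the designer wrote down: only dust counts.
    return outcome["dust_removed"]

def intended_reward(outcome):
    # What the designer actually wanted: dust matters, but breakage is
    # heavily penalized (the part the proxy fails to specify).
    return outcome["dust_removed"] - 5 * outcome["vases_broken"]

best_by_proxy = max(behaviors, key=lambda b: proxy_reward(behaviors[b]))
best_by_intent = max(behaviors, key=lambda b: intended_reward(behaviors[b]))

print("Proxy-optimal behavior:   ", best_by_proxy)   # fast_cleaning
print("Intended-optimal behavior:", best_by_intent)  # careful_cleaning
```

An agent optimizing the proxy is misaligned in exactly the sense above: it pursues some objective, just not the intended one.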
Artificial intelligence in healthcare is an overarching term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to mimic human cognition in the analysis, presentation, and comprehension of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.
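As a minimal, hypothetical sketch of "approximating conclusions based solely on input data": the scikit-learn model below learns a decision rule from a handful of synthetic patient records. The feature names, values, and labels are invented for illustration and carry no clinical meaning.

```python
# Illustrative only: a model inferring a conclusion (risk label) from input data.
from sklearn.linear_model import LogisticRegression

# Hypothetical patient features: [resting heart rate, systolic blood pressure].
X_train = [[62, 110], [70, 118], [88, 150], [95, 160], [66, 115], [90, 155]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = low risk, 1 = elevated risk (synthetic labels)

model = LogisticRegression()
model.fit(X_train, y_train)

# The model now maps unseen input data to a conclusion it was never told directly.
new_patient = [[85, 148]]
print(model.predict(new_patient))        # predicted class, e.g. [1]
print(model.predict_proba(new_patient))  # class probabilities
```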
Eliezer S. Yudkowsky (/ˌɛliˈɛzər ˌjʌdˈkaʊski/; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California.
Driven by the demand for real-time processing and the need to minimize latency in AI algorithms, edge computing has experienced remarkable progress. Decision-making AI applications stand out for their heavy reliance on data-centric operations, predominantly ...
Harnessing the power of machine learning (ML) and other artificial intelligence (AI) techniques promises substantial improvements across forensic psychiatry, supposedly offering more objective evaluations and predictions. However, AI-based predictions about ...
Provides an in-depth exploration of AI ethics regulation, covering legal questions, EU regulation, GDPR, UNESCO texts, and the proposal for AI regulation.
Can social, organisational, and disciplinary values guide decisions when thinking about ethical challenges? In this presentation, I describe the ambivalence of decision-making when it comes to being a cause of, or a valuable resource for acting upon, ethics challenges ...