The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE and the OECD. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage the associated risks. Beyond regulation, organizations that deploy AI need to play a central role in building and operating trustworthy systems in line with the principles of trustworthy AI, and to take accountability for mitigating the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.
According to Stanford's AI Index, the annual number of AI-related laws passed in the 127 surveyed countries jumped from one in 2016 to 37 in 2022 alone.
In 2017, Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight were too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization." In response, some politicians expressed skepticism about the wisdom of regulating a technology still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI was in its infancy and that it was too early to regulate the technology.
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules, and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning; for the subset of AI algorithms, the term regulation of artificial intelligence is used. Such regulation is considered necessary both to encourage AI and to manage the associated risks, but it is challenging to achieve.
Introduction to artificial intelligence techniques, complemented by programming exercises that demonstrate the algorithms and examples of their application to practical problems.
Artificial intelligence, big data, and advances in computing power have triggered a technological revolution that may have an enormous bearing on the workplace and the labor market.
This lecture presents ongoing work on how scientific questions can be tackled using machine learning, which enables extracting knowledge from data computationally and in an automated way.
Covers knowledge representation in AI, logical inference, and applications in various domains.
Delves into different ethical theories and thought experiments in the context of AI, exploring the challenges of defining AI.
Covers the foundations of artificial intelligence, machine learning, the data revolution, and research examples in AI and machine learning.