
Publication: A 3D indicator for guiding AI applications in the energy sector

Abstract

The use of Artificial Intelligence (AI) applications in the energy sector is gaining momentum, with an increasingly intensive search for suitable, high-quality and trustworthy solutions that have shown promising results in research. The growing interest comes from decision makers in both industry and policy, who are searching for applications that increase companies' profitability, raise efficiency and facilitate the energy transition. This paper provides a novel three-dimensional (3D) indicator for AI applications in the energy sector, based on their respective maturity level, regulatory risks and potential benefits. Case studies exemplify the application of the 3D indicator, showcasing how the developed framework can be used to filter promising AI applications eligible for governmental funding or business development. In addition, the 3D indicator is used to rank AI applications under different stakeholder preferences (risk-avoidance, profit-seeking, balanced). These results allow AI applications to be better categorised in the face of rapidly emerging national and intergovernmental AI strategies and regulations that constrain the use of AI in critical infrastructures.
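The ranking exercise described in the abstract can be sketched as a simple weighted scoring scheme. Only the three dimensions and the three stakeholder profiles come from the abstract; the application names, dimension scores, scoring scale and weight profiles below are illustrative assumptions for the sketch, not values from the paper.

```python
# Illustrative sketch of ranking AI applications on the 3D indicator.
# Scores are on an assumed 0-1 scale: maturity and benefit (higher is
# better) and regulatory risk (higher is riskier, so it enters inverted).
applications = {
    "predictive maintenance":  {"maturity": 0.8, "risk": 0.3, "benefit": 0.6},
    "autonomous grid control": {"maturity": 0.4, "risk": 0.9, "benefit": 0.9},
    "load forecasting":        {"maturity": 0.9, "risk": 0.2, "benefit": 0.5},
}

# Hypothetical stakeholder profiles as weights over the three dimensions.
profiles = {
    "risk-avoidance": {"maturity": 0.3, "risk": 0.5, "benefit": 0.2},
    "profit-seeking": {"maturity": 0.2, "risk": 0.1, "benefit": 0.7},
    "balanced":       {"maturity": 1 / 3, "risk": 1 / 3, "benefit": 1 / 3},
}

def score(app, w):
    """Weighted score; low regulatory risk contributes positively."""
    return (w["maturity"] * app["maturity"]
            + w["risk"] * (1.0 - app["risk"])
            + w["benefit"] * app["benefit"])

def rank(profile):
    """Applications ordered best-first under the given stakeholder profile."""
    w = profiles[profile]
    return sorted(applications, key=lambda a: score(applications[a], w),
                  reverse=True)

for p in profiles:
    print(p, "->", rank(p))
```

Note how the top-ranked application changes with the stakeholder profile, which is the point of the preference-dependent ranking.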

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date or relevant to your search. The same applies to all other pages on this site. Be sure to verify the information against official EPFL sources.


Related concepts (23)

Artificial intelligence

Intelligent personal assistants are one of the concrete applications of artificial intelligence in the 2010s. Artificial intelligence (AI) is a set…

Decision-making

When it comes to making a decision, it helps to know that different situations call for different approaches; there is no single way of thinking or acting. Most of the…

Case study

The case study is a method used in qualitative research in the humanities and social sciences, in psychology and psychoanalysis, but it can also be used in studies to examine…
Related publications (3)

In this dissertation, I develop theory and evidence to argue that new technologies are central to how firms organize to create and capture value. I use computational methods such as reinforcement learning and probabilistic topic modeling to investigate three topics: the automation of routines, the organization of artificial intelligence (AI), and the evaluation of technology risk. Overall, I argue that new technologies are not a panacea for the firm but require deliberate strategic planning to manage the potential downsides of myopic automation, AI interdependencies, and the disclosure of technology risks.

In the first essay, I argue that while automation can increase productivity by reducing the costs of coordinating individuals, the automation of routines can also incur an indirect opportunity cost due to slow adaptation to environmental change. I develop a reinforcement learning simulation to model the impact of automation on the returns from the division of labor in dynamic environments and to show how automation incurs opportunity costs through lost learning and slow adaptation. Moreover, automation can be suboptimal when it brings about myopic behavior, i.e., high returns from the division of labor in the short term but negative returns in the long term. Given the simulation results, I argue that firms need dynamic routines to simultaneously balance learning and automation. I open-source the simulation platform as OrgSim-RL on GitHub.

In the second essay, I argue that a data-driven culture - what I define as a Data Clan - can help to coordinate complex interdependencies between AI components within a firm. I analyze in-depth semi-structured interview data with a hierarchical stochastic block model (hSBM) and hand-coding to find that managers focus primarily on building a strong culture and establishing high-quality data assets when allocating resources to AI initiatives. Given the results, I inductively develop implications for theory and argue that the emergence of a Data Clan can be a governance mechanism to reduce coordination frictions and build a competitive advantage in the age of AI.

In the third essay, I argue that investors require a higher initial return to take on more technology risk disclosure during an IPO. I quantify the magnitude of disclosed risk and the risk disclosure topics based on a latent Dirichlet allocation (LDA) topic model of IPO prospectus text and find a return-for-risk association between text-based technology risk disclosure and underpricing. The study also finds evidence that owning granted patents is associated with a lower return-for-risk association, suggesting that intellectual property allows the disclosure of risk without losing the competitive advantage. I open-source the code for quantifying risk disclosure as RiskyData-LDA on GitHub.

In summary, this dissertation develops theory and finds evidence across three essays to argue that leveraging new technologies requires deliberate strategic planning to manage their potential downsides, such as the opportunity costs of automation, coordination costs, and costs associated with raising capital. The results suggest three mitigating solutions: dynamic routines to balance learning and automation, a Data Clan to improve coordination, and disclosure through patents to reduce underpricing.
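The first essay's core argument, that automating a learned routine saves exploration cost but loses adaptability when the environment shifts, can be illustrated with a toy bandit simulation. This is a minimal sketch in the spirit of that essay, not the OrgSim-RL platform itself; the payoff structure, step size, and the point at which the routine is "automated" are all assumptions.

```python
import random

random.seed(0)

ALPHA = 0.1   # constant step size so value estimates can track change

def run(automate_after=None, steps=400, shift=200, eps=0.1):
    """Two-armed bandit whose paying arm switches at `shift`.
    If automate_after is set, the agent freezes ("automates") its
    greedy choice at that step and stops learning."""
    q = [0.0, 0.0]          # estimated payoff of each routine variant
    total = 0.0
    frozen = None
    for t in range(steps):
        paying = 0 if t < shift else 1          # environmental change
        if frozen is not None:
            arm = frozen                        # automated routine
        elif random.random() < eps:
            arm = random.randrange(2)           # keep exploring
        else:
            arm = 0 if q[0] >= q[1] else 1      # exploit current estimate
        reward = 1.0 if arm == paying else 0.0
        total += reward
        if frozen is None:
            q[arm] += ALPHA * (reward - q[arm])  # incremental update
        if t == automate_after:
            frozen = 0 if q[0] >= q[1] else 1    # lock in the routine
    return total

adaptive = run()                      # learns throughout
automated = run(automate_after=100)   # automates before the shift
# The automated agent avoids exploration cost until the shift, then
# earns nothing; the adaptive agent recovers and ends up ahead.
print("adaptive:", adaptive, "automated:", automated)
```

The frozen agent's shortfall after the shift is the essay's indirect opportunity cost of automation: lost learning and slow adaptation.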

Machine Learning is a modern and actively developing field of computer science, devoted to extracting and estimating dependencies from empirical data. It combines such fields as statistics, optimization theory and artificial intelligence. In practical tasks, the general aim of Machine Learning is to construct algorithms able to generalize and predict in previously unseen situations based on some set of examples. Given some finite information, Machine Learning provides ways to extract knowledge, describe, explain and predict from data. Kernel Methods are one of the most successful branches of Machine Learning. They allow linear algorithms with well-founded properties, such as generalization ability, to be applied to non-linear real-life problems. The Support Vector Machine is a well-known example of a kernel method and has found a wide range of applications in data analysis. In many practical applications, some additional prior knowledge is often available. This can be knowledge about the data domain, invariant transformations, inner geometrical structures in data, some properties of the underlying process, etc. If used smartly, this information can provide significant improvement to any data processing algorithm. Thus, it is important to develop methods for incorporating prior knowledge into data-dependent models. The main objective of this thesis is to investigate approaches towards learning with kernel methods using prior knowledge. Invariant learning with kernel methods is considered in more detail.

In the first part of the thesis, kernels are developed which incorporate prior knowledge on invariant transformations. They apply when the desired transformations produce an object around every example, assuming that all points in the given object share the same class. Different types of objects, including hard geometrical objects and distributions, are considered. These kernels were then applied to image classification with Support Vector Machines.
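One standard way to build such transformation invariance into a kernel is to average a base kernel over a finite set of transformations of each input; the result remains a valid (positive semi-definite) kernel because it is an inner product of averaged feature maps. The sketch below is illustrative and not the thesis's construction: it assumes rotation invariance of 2-D points, an RBF base kernel, and an eight-element rotation set.

```python
import math

def rbf(x, y, gamma=1.0):
    """Base RBF kernel on 2-D points."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def rotations(x, angles):
    """Rotated copies of a 2-D point (the invariance assumed here)."""
    out = []
    for t in angles:
        c, s = math.cos(t), math.sin(t)
        out.append((c * x[0] - s * x[1], s * x[0] + c * x[1]))
    return out

def invariant_kernel(x, y, angles):
    """Average the base kernel over all pairs of transformed inputs."""
    xs, ys = rotations(x, angles), rotations(y, angles)
    return sum(rbf(a, b) for a in xs for b in ys) / (len(xs) * len(ys))

angles = [k * math.pi / 4 for k in range(8)]
x = (1.0, 0.0)
x_rot = (0.0, 1.0)                     # x rotated by 90 degrees
print(round(rbf(x, x_rot), 3))         # 0.135: the base kernel sees
                                       # rotated copies as dissimilar
print(invariant_kernel(x, x, angles))
print(invariant_kernel(x, x_rot, angles))  # matches the line above
                                           # (up to float error)
```

Because the rotation set forms a group containing the 90-degree rotation, the two invariant values agree: the invariant kernel cannot distinguish an example from its rotated copy.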
Next, algorithms which specifically include prior knowledge are considered. An algorithm which linearly classifies distributions by their domain was developed. It is constructed so that kernels can be applied to solve non-linear tasks. Thus, it combines the discriminative power of support vector machines and the well-developed framework of generative models. It can be applied to a number of real-life tasks in which data are represented as distributions.

In the last part of the thesis, the use of unlabelled data as a source of prior knowledge is considered. The technique of modelling the unlabelled data with a graph is taken as a baseline from semi-supervised manifold learning. For classification problems, we use this approach to build graph models of invariant manifolds. For regression problems, we use unlabelled data to take into account the inner geometry of the input space.

To conclude, in this thesis we developed a number of approaches for incorporating prior knowledge into kernel methods. We proposed invariant kernels for existing algorithms, developed new algorithms and adapted a technique from semi-supervised learning for invariant learning. In all these cases, links with related state-of-the-art approaches were investigated. Several illustrative experiments were carried out on real data on optical character recognition, face image classification, brain-computer interfaces, and a number of benchmark and synthetic datasets.
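The graph-based use of unlabelled data mentioned in the last part can be sketched as simple label propagation: labelled points are clamped, and every unlabelled point repeatedly takes the weighted average of its neighbours' current scores, so labels spread along the data's cluster structure. The data, edge weights, and iteration count below are illustrative assumptions, not the thesis's setup.

```python
import math

points = [0.0, 0.2, 0.4, 0.6, 2.0, 2.2, 2.4, 2.6]   # two clusters on a line
labels = {0: -1.0, 7: +1.0}                          # only two labelled points

def weight(a, b, gamma=4.0):
    """Gaussian edge weight: nearby points are strongly connected."""
    return math.exp(-gamma * (a - b) ** 2)

# Iterative propagation: unlabelled nodes average their neighbours'
# scores; labelled nodes stay clamped to their known labels.
f = [labels.get(i, 0.0) for i in range(len(points))]
for _ in range(200):
    new = []
    for i, x in enumerate(points):
        if i in labels:
            new.append(labels[i])
            continue
        num = sum(weight(x, points[j]) * f[j]
                  for j in range(len(points)) if j != i)
        den = sum(weight(x, points[j])
                  for j in range(len(points)) if j != i)
        new.append(num / den)
    f = new

predicted = [(-1 if s < 0 else +1) for s in f]
print(predicted)   # first cluster follows the -1 seed, second the +1 seed
```

Two labelled points suffice to classify all eight, because the graph's edge weights encode the inner geometry of the input space, which is exactly what the unlabelled data contribute.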
