Algorithmic efficiency

Summary

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.
For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort sorts the list in time proportional to the number of elements squared (O(n^2), see Big O notation), but only requires a small amount of extra memory that is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic in the list's length (O(n log n)), but requires an amount of extra memory linear in that length (O(n)). If large lists must be sorted at high speed, timsort is the better choice; if minimizing the memory footprint of the sort matters more, bubble sort is.
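The trade-off can be sketched in Python as a toy benchmark (exact timings depend on the machine; CPython's built-in sorting is an implementation of timsort):

```python
import random
import time

def bubble_sort(items):
    """Return a sorted copy in O(n^2) time, using O(1) extra memory beyond the copy."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.randrange(10_000) for _ in range(2_000)]

start = time.perf_counter()
bubble_sorted = bubble_sort(data)
bubble_time = time.perf_counter() - start

start = time.perf_counter()
tim_sorted = sorted(data)  # CPython's built-in sort (timsort), O(n log n) time
tim_time = time.perf_counter() - start

assert bubble_sorted == tim_sorted
print(f"bubble sort: {bubble_time:.3f}s, timsort: {tim_time:.3f}s")
```

On lists of a few thousand elements the quadratic algorithm is already noticeably slower, even though both produce the same result.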

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related publications (100)

Related people (26)

Related lectures (27)

Related concepts (34)

Self-modifying code

In computer science, self-modifying code (SMC or SMoC) is code that alters its own instructions while it is executing, usually to reduce the instruction path length and improve performance, or simply to reduce otherwise repetitively similar code, thus simplifying maintenance.
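Genuine self-modifying code rewrites its own machine instructions, which high-level languages rarely expose. A loose Python analogue (a hypothetical sketch, not machine-level SMC) is a function that rebinds its own name to a cheaper replacement after its first call, shortening the path taken by every later call:

```python
def expensive_setup():
    """On the first call, do one-time work, then replace itself with a fast path."""
    result = sum(range(1_000_000))  # stand-in for some costly initialization

    def fast_path():
        return result

    # The "self-modification": rebind the global name so subsequent
    # calls skip the setup entirely and jump straight to fast_path.
    globals()['expensive_setup'] = fast_path
    return result

first = expensive_setup()   # runs the slow path once
second = expensive_setup()  # now dispatches to fast_path
assert first == second
```

The same effect is usually achieved more idiomatically with memoization (e.g. `functools.lru_cache`); the rebinding version is shown only to illustrate the "code replacing itself" idea.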

Computer programming

Computer programming is the process of performing particular computations (or more generally, accomplishing specific computing results), usually by designing and building executable computer programs.

Optimizing compiler

In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption.
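A small illustration of one such optimization: CPython's bytecode compiler performs constant folding, evaluating constant expressions once at compile time rather than on every execution. This can be observed with the standard `compile` built-in:

```python
# CPython's bytecode compiler folds constant expressions:
# 2 * 3 * 7 is computed at compile time, so the resulting code
# object simply loads the single precomputed constant 42.
code = compile("2 * 3 * 7", "<expr>", "eval")
print(code.co_consts)  # the folded constant appears among the code's constants
assert 42 in code.co_consts
```

Production compilers such as GCC or LLVM apply far more aggressive versions of this idea (constant propagation, dead-code elimination, inlining), but the principle is the same: spend effort at compile time to save it at run time.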

Related courses (22)

CS-250: Algorithms

The students learn the theory and practice of basic concepts and techniques in algorithms. The course covers mathematical induction, techniques for analyzing algorithms, elementary data structures, major algorithmic paradigms such as dynamic programming, sorting and searching, and graph algorithms.

EE-530: Test of VLSI systems

Test of VLSI Systems covers theoretical knowledge of the major algorithms used in VLSI test and design-for-test techniques. Basic knowledge of computer-aided design-for-test techniques, and their integration into a design flow, is presented.

CS-233(a): Introduction to machine learning (BA3)

Machine learning and data analysis are becoming increasingly central in many sciences and applications. In this course, fundamental principles and methods of machine learning will be introduced, analyzed and practically implemented.

Related units (16)

With the increasing strength and frequency of climate change events, the urgency to mitigate climate change impacts is ever more pressing. Canada reported its Nationally Determined Contribution (NDC), submitting its ambition to reduce its greenhouse gas (GHG) emissions by 40-45% by 2030 compared to 2005 and to reach net-zero emissions by 2050. Interest in energy system modelling for planning future energy systems has been increasing, as a result of growing concern for sustainable development and the transition towards renewable energies. Nevertheless, planning the decarbonization of an energy system can lead to environmental burden shifting, increasing other environmental impacts such as those on ecosystem quality or human health. This project therefore aims to integrate Life Cycle Analysis (LCA), and more specifically Life Cycle Impact Assessment (LCIA), within energy systems modelling, in order to minimize not only climate change impacts but also impacts on human health and ecosystem quality. In addition, a dual spatial resolution was integrated into the model to increase data precision and computational efficiency. The results show that a low-carbon energy system, based on the optimization of GHG emissions, greatly reduces all the environmental impacts under analysis compared to a purely economic-based energy system, while a multi-objective optimization simultaneously reduces impacts on ecosystem quality and human health and GHG emissions, while maintaining good economic and technical performance. The results also show that it is theoretically possible for Canada to stay on track with its emission reduction ambition by 2030 by deploying more renewable energies. This project does not allow assessing Canada's potential to reach net-zero emissions, as environmental impacts from carbon capture technologies have yet to be characterized and integrated into the model.

2022

This thesis focuses on non-parametric covariance estimation for random surfaces, i.e. functional data on a two-dimensional domain. Non-parametric covariance estimation lies at the heart of functional data analysis, and considerations of statistical and computational efficiency often compel the use of separability of the covariance when working with random surfaces. We seek to provide efficient alternatives to this ambivalent assumption.

In Chapter 2, we study a setting where the covariance structure may fail to be separable locally, either due to noise contamination or due to the presence of a non-separable short-range dependent signal component. That is, the covariance is an additive perturbation of a separable component by a non-separable but banded component. We introduce non-parametric estimators hinging on shifted partial tracing, a novel concept enjoying strong denoising properties. We illustrate the usefulness of the proposed methodology on a data set of mortality surfaces.

In Chapter 3, we propose a distinctive decomposition of the covariance, which allows us to understand separability as an unconventional form of low-rankness. From this perspective, a separable covariance has rank one. Allowing for a higher rank suggests a structured class in which any covariance can be approximated up to an arbitrary precision. The key notion of the partial inner product allows us to generalize the power iteration method to general Hilbert spaces and estimate the aforementioned decomposition from data. Truncation and retention of the leading terms automatically induces a non-parametric estimator of the covariance, whose parsimony is dictated by the truncation level. Advantages of this approach, allowing for estimation beyond separability, are demonstrated on the task of classification of EEG signals.

While Chapters 2 and 3 propose several generalizations of separability in the densely sampled regime, Chapter 4 deals with the sparse regime, where the latent surfaces are observed only at a few irregular locations. Here, a separable covariance estimator based on local linear smoothers is proposed, which is the first non-parametric utilization of separability in the sparse regime. The assumption of separability reduces the intrinsically four-dimensional smoothing problem into several two-dimensional smoothers and allows the proposed estimator to retain the classical minimax-optimal convergence rate for two-dimensional smoothers. The proposed methodology is used for a qualitative analysis of implied volatility surfaces corresponding to call options, and for prediction of the latent surfaces based on information from the entire data set, allowing for uncertainty quantification. Our quantitative results show that the proposed methodology outperforms the common approach of pre-smoothing every implied volatility surface separately.

Throughout the thesis, we put emphasis on computational aspects, since those are the main reason behind the immense popularity of separability. We show that the covariance structures of Chapters 2 and 3 come with no (asymptotic) computational overhead relative to assuming separability. In fact, the proposed covariance structures can be estimated and manipulated with the same asymptotic costs as the separable model. In particular, we develop numerical algorithms that can be used for efficient inversion, as required e.g. for prediction. All the methods are implemented in R and available on GitHub.
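The "separability as low-rankness" idea can be illustrated with a toy NumPy sketch (an illustration of the general principle only, not of the thesis's estimators): a separable covariance is a Kronecker product of two factors, and a suitable block-wise rearrangement of it has rank one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a toy separable covariance C = A kron B from two SPD factors.
p, q = 3, 4
Fa = rng.standard_normal((p, p))
Fb = rng.standard_normal((q, q))
A = Fa @ Fa.T          # p x p covariance factor
B = Fb @ Fb.T          # q x q covariance factor
C = np.kron(A, B)      # (p*q) x (p*q) separable covariance

# Rearrange C: one row per (i, j) block, each row the vectorized block.
# Block (i, j) of C equals A[i, j] * B, so every row is proportional to
# vec(B) and the rearranged matrix has rank one.
R = np.vstack([C[i*q:(i+1)*q, j*q:(j+1)*q].reshape(1, -1)
               for i in range(p) for j in range(p)])
assert np.linalg.matrix_rank(R) == 1
```

A non-separable covariance gives a rearranged matrix of higher rank, which is why truncating its leading terms yields a structured class of approximations richer than the separable (rank-one) model.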

Zhe Chen, Mario Paolone, Zhaoyang Wang

The classical electromagnetic time reversal (EMTR) fault location method in power systems can be time-consuming, especially when high location accuracy is desired. To cope with this issue, the concept of EMTR in mismatched media has recently been introduced, substantially improving fault location efficiency. In this paper, we present a theoretical study and rigorous demonstration of the mismatched-media-based mirrored minimum energy property. First, we infer a direct-reversed-time transfer function and present a theorem according to which, at the fault switching frequency and its odd harmonics, the mirror-image point of the fault location with respect to the line center corresponds to a local minimum of the squared modulus of the transfer function. Next, it is proved that the mirrored minimum energy property is a corollary of this theorem. Based on these theoretical findings, we propose an algorithm that uses the reversed-time voltage energy as a fault location metric in the frequency domain, instead of the original time-domain approach. We further propose applying a data-driven strategy to maximize the computational efficiency of the algorithm. The applicability and robustness of the frequency-domain fault location metric, together with the computational efficiency of the accelerating algorithm, are numerically and experimentally validated.

2021