In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g., running time or memory) that an algorithm requires in the worst case over all inputs of a given size (commonly denoted n in asymptotic notation). It thus gives an upper bound on the resources required by the algorithm.
In the case of running time, the worst-case time complexity indicates the longest running time of the algorithm over all inputs of size n, and thus guarantees that the algorithm finishes within that time on any such input. The order of growth (e.g., linear or logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms.
The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.
Given a model of computation and an algorithm $\mathsf{A}$ that halts on each input $s$, the mapping $t_{\mathsf{A}} \colon \{0,1\}^{\star} \to \mathbb{N}$ is called the time complexity of $\mathsf{A}$ if, for every input string $s$, $\mathsf{A}$ halts after exactly $t_{\mathsf{A}}(s)$ steps.
Since we are usually interested in the dependence of the time complexity on the input length, abusing terminology, the time complexity is sometimes referred to as the mapping $t_{\mathsf{A}} \colon \mathbb{N} \to \mathbb{N}$, defined by the maximal complexity
\[ t_{\mathsf{A}}(n) := \max_{|s| \le n} t_{\mathsf{A}}(s) \]
of inputs $s$ with length or size $\le n$.
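This definition can be made concrete with a small brute-force sketch (our own illustration, not from the source; the instrumented count_steps function is a hypothetical stand-in for any algorithm whose steps can be counted). It computes $t_{\mathsf{A}}(n)$ by taking the maximum step count over all binary inputs of length at most n:

```python
from itertools import product

def count_steps(s: str) -> int:
    """Hypothetical instrumented algorithm: counts the 'steps' it performs
    on input string s. As a stand-in, we count one step per character
    examined while scanning for the first '1'."""
    steps = 0
    for ch in s:
        steps += 1
        if ch == "1":
            break
    return steps

def worst_case(n: int) -> int:
    """t_A(n) = max over all inputs s with |s| <= n of t_A(s)."""
    return max(
        count_steps("".join(bits))
        for length in range(n + 1)
        for bits in product("01", repeat=length)
    )

print(worst_case(4))  # 4: the all-zeros string of length 4 forces a full scan
```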
Similar definitions can be given for space complexity, randomness complexity, etc.
Very frequently, the complexity of an algorithm is given in asymptotic big-O notation, which gives its growth rate in the form $t_{\mathsf{A}} = O(g(n))$ with a certain real-valued comparison function $g(n)$ and the meaning: there exist a positive real number $M$ and a natural number $n_0$ such that
\[ t_{\mathsf{A}}(n) \le M \cdot g(n) \quad \text{for all } n \ge n_0. \]
Quite frequently, the wording is:
"Algorithm $\mathsf{A}$ has the worst-case complexity $O(g(n))$."
or even only:
"Algorithm $\mathsf{A}$ has complexity $O(g(n))$."
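As a worked illustration of this definition (an example we add here, not from the source), consider $t_{\mathsf{A}}(n) = 3n^2 + 5n$. Since $5n \le 5n^2$ for all $n \ge 1$,
\[ t_{\mathsf{A}}(n) = 3n^2 + 5n \le 8n^2 \quad \text{for all } n \ge 1, \]
so the definition is satisfied with $g(n) = n^2$, $M = 8$, and $n_0 = 1$, i.e. $t_{\mathsf{A}} = O(n^2)$.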
Consider performing insertion sort on $n$ numbers on a random-access machine. The best case for the algorithm is when the numbers are already sorted, in which case it takes $O(n)$ steps to perform the task. In contrast, the worst-case input for the algorithm is when the numbers are reverse sorted, and sorting them takes $O(n^2)$ steps; therefore the worst-case time complexity of insertion sort is $O(n^2)$.
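A minimal instrumented sketch (ours, not from the source) makes this gap visible by counting comparisons, used here as a rough proxy for running time:

```python
def insertion_sort_steps(a: list) -> int:
    """Sort a in place and return the number of comparison steps performed."""
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            steps += 1           # one comparison step
            if a[j] > key:
                a[j + 1] = a[j]  # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return steps

n = 8
print(insertion_sort_steps(list(range(n))))         # sorted: n-1 = 7 steps, O(n)
print(insertion_sort_steps(list(range(n, 0, -1))))  # reversed: n(n-1)/2 = 28 steps, O(n^2)
```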
A first graduate course in algorithms, this course assumes minimal background, but moves rapidly. The objective is to learn the main techniques of algorithm analysis and design, while building a reper ...
Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics a ...
The students learn the theory and practice of basic concepts and techniques in algorithms. The course covers mathematical induction, techniques for analyzing algorithms, elementary data structures, ma ...
In computer science, the best, worst, and average cases of a given algorithm express what the resource usage is at least, at most, and on average, respectively. Usually the resource being considered is running time, i.e. time complexity, but it could also be memory or some other resource. The best case is the function which performs the minimum number of steps on input data of n elements; the worst case is the function which performs the maximum number of steps on input data of size n.
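For a concrete instance of these three measures (our own illustration, not from the source), consider linear search for a target known to be in the list:

```python
import random

def linear_search_steps(a: list, target) -> int:
    """Return the number of elements examined before finding target."""
    for steps, x in enumerate(a, start=1):
        if x == target:
            return steps
    return len(a)

n = 1000
a = list(range(n))
print(linear_search_steps(a, 0))      # best case: 1 step
print(linear_search_steps(a, n - 1))  # worst case: n steps
# average case over uniformly random targets: about (n + 1) / 2 steps
trials = [linear_search_steps(a, random.randrange(n)) for _ in range(10_000)]
print(sum(trials) / len(trials))
```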
In algorithmics, time complexity is a measure of the time used by an algorithm, expressed as a function of the size of the input. The time counts the number of computation steps performed before a result is reached. Usually, the time associated with inputs of size n is the longest among the running times on inputs of that size; this is called worst-case complexity. Complexity studies mostly concern asymptotic behavior, as the size of the inputs tends to infinity, and Landau's big-O notation is commonly used.
Figure: a linear search (purple) compared with a binary search (green); the algorithmic complexity of the latter is logarithmic, while that of the former is linear. The analysis of an algorithm's complexity is the formal study of the amount of resources (for example, time or space) needed to execute that algorithm. It should not be confused with complexity theory, which studies the intrinsic difficulty of problems and does not focus on a particular algorithm.
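The contrast shown in the figure can be reproduced with a small step-counting sketch (ours, under the assumption that each loop iteration counts as one step; binary_search_steps is a helper we name here for illustration):

```python
def binary_search_steps(a: list, target) -> int:
    """Count iterations of binary search on a sorted list a.
    Each iteration halves the search range, so the count is about log2(len(a))."""
    lo, hi, steps = 0, len(a), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

for n in (1_000, 1_000_000):
    a = list(range(n))
    # worst case: a linear scan examines all n elements,
    # while binary search needs only about log2(n) iterations
    print(n, binary_search_steps(a, n - 1))
```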
Covers worst-case complexity, algorithms, and proofs, including mathematical induction and recursion.
Explores worst-case complexity, mathematical induction, and algorithms such as binary search and insertion sort.
Explores worst-case complexity in computer science and the importance of real-life complexity in algorithm selection.
In this thesis we will present and analyze randomized algorithms for numerical linear algebra problems. An important theme in this thesis is randomized low-rank approximation. In particular, we will study randomized low-rank approximation of matrix functio ...
EPFL, 2024
This thesis demonstrates that it is feasible for systems code to expose a latency interface that describes its latency and related side effects for all inputs, just like the code's semantic interface describes its functionality and related side effects. Sem ...
EPFL, 2023
We present saft, the first attempt of a static analyzer that extracts the asymptotic function complexity for the Polkadot/Substrate ecosystem, where the burden of accounting for computation resource consumption is put on the developer. saft is a tool meant ...