
# Time complexity

Summary

In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expressed as a function of the size of the input.
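As an illustrative sketch (not taken from the text above), counting elementary comparisons makes these definitions concrete: searching an array of n elements takes up to n comparisons with linear search in the worst case, but only about log2(n) with binary search on sorted input. The comparison counters below are assumptions introduced for the sketch, not part of any standard API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Linear search: counts element comparisons; the worst case (key absent)
// performs exactly n comparisons, i.e. O(n) time.
std::size_t linear_search(const std::vector<int>& v, int key,
                          std::size_t& comparisons) {
    comparisons = 0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        ++comparisons;
        if (v[i] == key) return i;
    }
    return v.size();  // not found
}

// Binary search on sorted input: the range halves each iteration, so the
// worst case performs about log2(n) counted comparisons, i.e. O(log n) time.
std::size_t binary_search_count(const std::vector<int>& v, int key,
                                std::size_t& comparisons) {
    comparisons = 0;
    std::size_t lo = 0, hi = v.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        ++comparisons;
        if (v[mid] == key) return mid;
        if (v[mid] < key) lo = mid + 1; else hi = mid;
    }
    return v.size();  // not found
}
```

For n = 1024 and an absent key, the linear search performs 1024 comparisons while the binary search performs at most 11 — the constant-factor assumption in the summary above is what lets these counts stand in for actual running time.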



Related publications (100)


Related people (73)

Related concepts (133)

Computational complexity theory

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other.

Algorithm

In mathematics and computer science, an algorithm (ˈælɡərɪðəm) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.

NP-completeness

In computational complexity theory, a problem is NP-complete when:

1. It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".

2. When the answer is "yes", this can be demonstrated through the existence of a short (polynomial-length) solution.
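As an illustrative sketch (the problem and helper below are chosen for illustration, not taken from the text): Subset-Sum is a classic NP-complete problem, and the "short solution" above corresponds to a certificate — here, the chosen indices — that can be checked in a single linear pass even though finding it may take exponential time.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Verifies a Subset-Sum certificate: does the subset of `nums` selected by
// `certificate` (a list of indices) sum to `target`? Verification is O(n)
// even though the search problem itself is NP-complete.
bool verify_subset_sum(const std::vector<int>& nums,
                       const std::vector<std::size_t>& certificate,
                       int target) {
    long long sum = 0;
    for (std::size_t i : certificate) {
        if (i >= nums.size()) return false;  // reject invalid index
        sum += nums[i];
    }
    return sum == target;
}
```

The asymmetry between checking a certificate (easy) and finding one (believed hard) is exactly what the two conditions above formalize.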

Related courses (126)

COM-401: Cryptography and security

This course introduces the basics of cryptography. We review several types of cryptographic primitives, when it is safe to use them and how to select the appropriate security parameters. We detail how they work and sketch how they can be implemented.

CS-101: Advanced information, computation, communication I

Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics as diverse as mathematical reasoning, combinatorics, discrete structures & algorithmic thinking.

MGT-418: Convex optimization

This course introduces the theory and application of modern convex optimization from an engineering perspective.

Related units (32)

In 1971, the first mass-produced microprocessor had 2,300 transistors and could compute 60,000 operations per second at 740 kHz. Nowadays, even a common gaming console uses a central unit containing 243 million transistors running at 4 GHz in a 9-core architecture. This exponential evolution, following the famous "Moore's Law", is possible thanks to innovations in every domain involved in chip design, and it allows the creation of chipsets targeting an ever wider range of applications. This variety of processors implies a great number of variations in hardware choices, to match the application (general-purpose processor, digital signal processor, application-specific processor…), performance requirements, and/or other operative constraints such as power consumption. The applications running on these powerful solutions are growing exponentially in complexity as well, relying on more and more sophisticated software. Consequently, the design team faces a real challenge when designing and implementing an optimal architectural solution. To ease this task, the trend in computer-aided design is to develop integrated frameworks for simulation and design, providing all the tools needed during the process in a top-down approach. These tools allow system specification by means of abstract concurrent entities, refined successively down to the final implementation of the different entities and of the communication among them. In parallel, these hardware solutions run applications of constantly increasing complexity, leading to the need for more and more intensive specification and validation tasks by means of abstract software descriptions, typically referred to as verification models.
These models implement all the functionalities of the application; thus, from the end of the specification phase, the abstract description of the application in the verification model becomes the most reliable reference for the corresponding application. All these reasons make an intuitive understanding of the processing complexity difficult and tend to create redesign loops in the conception process, when it is realized that the initial architecture choices were sub-optimal (or even wrong), preventing the system from meeting the desired performance constraints. In this case, a costly re-design iteration is mandatory, targeting a new platform choice but introducing delay and cost in the design and causing the project to arrive too late on the market. Obviously, to eliminate these re-design iterations, the preliminary complexity analysis should be performed as early as possible, before starting the design of the target architecture. Since the first step of the design cycle implies an algorithm description through a verification model, it is desirable to perform the complexity study on this model. Indeed, though considerable efforts are devoted to developing powerful co-design tools, by means of high-level abstract descriptions, fast prototyping, and flexible design reusability, no automatic tool is available to help the designer in the first analysis and comprehension phase. Moreover, the size of verification models grows together with the complexity of the applications, and it is not surprising to have to deal with verification models of tens of thousands of lines of source code; consequently, paper-and-pencil and manual-instrumentation analysis methods are no longer feasible or reliable. Furthermore, static complexity evaluations, which do not take into account the dynamic dependency of the complexity on the input data, are becoming less and less meaningful for the development of cost-effective solutions.
Worst-case analysis is typically suitable for real-time control applications, but it cannot be applied in all the cases in which quality-of-service/cost trade-offs are of concern; for instance, a multimedia or signal-processing architecture developed to support the worst-case scenario would simply be too expensive and under-exploited for most of the processing time. This dissertation presents a new approach to algorithmic complexity analysis, called the Software Instrumentation Tool (SIT), based on the previous observations and aiming at filling the gap in state-of-the-art tools. The main goal of the SIT is to provide a high-level analysis framework for both a faithful design-oriented evaluation of the computational complexity of verification models and a hardware simulation of the projected architecture. The breakthrough in this version of the SIT is to provide a full set of complexity results in a complete framework, allowing all these data to be used in a methodology able to fill the gap from verification model to hardware implementation or algorithm porting. The analysis is performed at the highest possible abstraction level, i.e. at the source-code language level, in order to adhere to the same degree of abstraction as verification models; that is, the analysis is not biased by the underlying architecture on which the verification model is tested or by the compilation process. The analysis strictly depends on the input data, in order to take into account the real algorithmic complexity under real working conditions, and it relies on a completely automatic process, performed directly on the source code of verification models, without forcing the designer to make any manual modification such as source-code rewriting or manual instrumentation. The SIT approach to complexity analysis is based on instrumentation by means of C++ operator overloading.
SIT analyzes C source code and can automatically instrument any legal C data type, operation, or statement, ultimately providing an exhaustive complexity analysis at the C language level. The tool is built on a core system (one core for base algorithmic complexity, a second for memory analysis) and supports module extensions (a module for critical-path evaluation and a module for data transfer, both of which can be plugged into the memory core). All the cores and modules report their results in a single visual interface, allowing a quick and efficient complexity overview as well as an in-depth exploration of the results. As SIT allows meaningful complexity measures to be extracted directly from the verification models in the preliminary analysis and comprehension phase, the designer benefits from a tool-assisted approach that steps through the later design phases via intermediate tools. The results provided by the first version of the SIT are computational data (what types of operation are performed? On which types of data? ...) and memory usage analysis (how many memory accesses? In which function?). This build of the SIT includes a customizable memory usage analysis, made possible by the simulation of (customizable) virtual memory architectures; this allows the analysis to focus on aspects of the algorithmic complexity that could not otherwise be estimated by means of static analysis methods, software profilers, or instruction-level profilers. Another novelty in the SIT is the data-transfer module, allowing both a quick and efficient overview of the main data flows inside the virtual architecture (thanks to a graphical representation of the main transfers) and a precise in-depth analysis of bandwidth and data transfers between functions (thanks to the detailed view included in SitView).
The latest result is the critical-path evaluation analysis, which detects the major functions of the execution tree and extracts information about parallelism at different levels, from the function level to the algorithm level. All this information is provided through the SIT graphical interface SitView, and a methodology is described for stepping from the high-level verification model in C to an actor-language partitioning based on the tool's analysis. This partitioning can be used with dedicated actor languages to create FPGA code or an optimized hardware solution, most probably better than an instinctively designed one.
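The operator-overloading instrumentation idea described above can be sketched as follows. This is a minimal illustrative example, not SIT's actual code: the type `CountedInt` and its counters are assumptions introduced for the sketch. The key point is that swapping the data type instruments an unmodified templated algorithm.

```cpp
#include <cassert>
#include <cstddef>

// A wrapper type that counts every arithmetic operation performed on it,
// so an algorithm's operation mix can be measured by changing the data
// type rather than rewriting or manually instrumenting the source.
struct CountedInt {
    static std::size_t additions;
    static std::size_t multiplications;
    int v;
    CountedInt(int x = 0) : v(x) {}
    CountedInt operator+(const CountedInt& o) const {
        ++additions;
        return CountedInt(v + o.v);
    }
    CountedInt operator*(const CountedInt& o) const {
        ++multiplications;
        return CountedInt(v * o.v);
    }
};
std::size_t CountedInt::additions = 0;
std::size_t CountedInt::multiplications = 0;

// Any templated algorithm is instrumented without modification: with
// T = CountedInt, a 3-element dot product records 3 multiplications
// and 2 additions.
template <typename T>
T dot3(const T a[3], const T b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```

This captures the abstract-level, input-dependent character of the analysis described above: the counts reflect the operations the algorithm actually performs on its input, independent of the underlying architecture or compiler.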

Policymaking is a complex process that has been studied using policy process theories almost exclusively. These theories have been built using a large number of qualitative cases. Such methods are useful for theory building but remain limited for theory exploration and policy advice. On a different front, socio-technical system (STS) simulations are often used to test the impact and effectiveness of policies. This is done by using policy scenarios. This manner of dealing with deep uncertainty disregards the dynamic aspect of the policy process and of the way STS interact with policies.
The objective of this dissertation is to present an approach that can be used to systematically model and simulate the policy process. This dissertation demonstrates how such a model can be used in these two widely different applications to explore the policy process theories and to better account for deep uncertainty in STS simulations.
In the first part of the dissertation, I present a common language. It considers four building blocks, using concepts from prominent theories, as requirements for any model of the policy process: time, the policy arena, actors and the environment, and the actor interactions. Additionally, in this part, I argue why agent-based modelling is best suited to simulate the policy process. Finally, I present the hybrid modelling approach that is used to incorporate the policy process model into already existing STS models.
The second part focuses on the simplest implementation model. This model is the first use of the common language and is called the simplest implementation because the common language is used to build the simplest possible model capable of emulating the policy process. It is created to demonstrate the potential of the common language and is tested with three different STSs. A predation model is used to present the possibilities stemming from this simplest implementation model. It is then added to a model of the Swiss electricity market to demonstrate what benefits the endogenisation of the policy process can have on the study of a complex STS. As a third example, the policy process model is added to a system dynamics model of flood safety in The Netherlands to demonstrate its versatility.
The third part of this dissertation consists of the advocacy coalition framework (ACF) implementation model. In this part, I present a model, also developed using the common language, that is aimed at emulating the ACF as closely as possible. The ACF being a complex framework, this implementation is done in steps of increasing complexity. The first step introduces policy learning to the model, the second introduces coalitions, the third and fourth steps introduce aspects of partial knowledge and imperfect information. This ACF implementation model is also demonstrated using first a predation model and second a model of the Swiss electricity market.
In conclusion, this dissertation shows that it is possible to model the policy process. However, it was shown that the policy process theories could benefit from additional and more focused studies, so that behaviours and mechanisms can be better understood. This work can be a first step towards such studies. It was also shown that introducing the policy process into an STS simulation can be useful to better understand said system. The approach allows for easy integration into already existing models. Finally, this work can be extended by looking at other theories or different applications.

Given a system with $n > 3t + 1$ processes, where $t$ is the tolerated number of faulty ones, we present a fast asynchronous Byzantine agreement protocol that can reach agreement in $O(t)$ expected running time. This improves the $O(n^2)$ expected running time of Abraham, Dolev, and Halpern [PODC 2008]. Furthermore, if $n = (3 + \varepsilon) t$ for any $\varepsilon > 0$, our protocol can reach agreement in $O (1 / \varepsilon)$ expected running time. This improves the result of Feldman and Micali [STOC 1988] (with constant expected running time when $n > 4 t$).

2015

Related lectures (302)