The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem, expressed as a function of characteristics of the input. It is the total memory an algorithm requires over the course of its execution. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as $O(n)$, $O(n \log n)$, $O(n^{\alpha})$, $O(2^{n})$, etc., where $n$ is a characteristic of the input influencing space complexity.
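As a concrete illustration of the input/auxiliary distinction (this example is an assumption for exposition, not taken from the original text), consider two Python functions that reverse an $n$-element list: both consume $O(n)$ input space, but the first allocates $O(n)$ auxiliary space while the second uses only $O(1)$.

```python
def reverse_copy(items):
    """Reversal with O(n) auxiliary space: builds an entirely new list."""
    return [items[i] for i in range(len(items) - 1, -1, -1)]


def reverse_in_place(items):
    """Reversal with O(1) auxiliary space: only two index variables."""
    lo, hi = 0, len(items) - 1
    while lo < hi:
        items[lo], items[hi] = items[hi], items[lo]  # swap the two ends
        lo += 1
        hi -= 1
    return items
```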
Analogously to the time complexity classes DTIME(f(n)) and NTIME(f(n)), the complexity classes DSPACE(f(n)) and NSPACE(f(n)) are the sets of languages that are decidable by deterministic (respectively, non-deterministic) Turing machines that use $O(f(n))$ space. The complexity classes PSPACE and NPSPACE allow $f$ to be any polynomial, analogously to P and NP. That is,
$$\mathsf{PSPACE} = \bigcup_{c \in \mathbb{Z}^{+}} \mathsf{DSPACE}(n^{c})$$
and
$$\mathsf{NPSPACE} = \bigcup_{c \in \mathbb{Z}^{+}} \mathsf{NSPACE}(n^{c}).$$
The space hierarchy theorem states that, for all space-constructible functions $f(n)$, there exists a problem that can be solved by a machine with $O(f(n))$ memory space, but cannot be solved by a machine with asymptotically less than $f(n)$ space.
The following containments between complexity classes hold:
$$\mathsf{DTIME}(f(n)) \subseteq \mathsf{DSPACE}(f(n)) \subseteq \mathsf{NSPACE}(f(n)) \subseteq \mathsf{DTIME}\left(2^{O(f(n))}\right).$$
Furthermore, Savitch's theorem gives the reverse containment that if $f \in \Omega(\log n)$, then
$$\mathsf{NSPACE}(f(n)) \subseteq \mathsf{DSPACE}\left((f(n))^{2}\right).$$
As a direct corollary, $\mathsf{PSPACE} = \mathsf{NPSPACE}$. This result is surprising because it suggests that non-determinism can reduce the space necessary to solve a problem only by a small amount. In contrast, the exponential time hypothesis conjectures that for time complexity, there can be an exponential gap between deterministic and non-deterministic complexity.
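The corollary follows from a short chain of containments; the derivation below is a standard argument and is not spelled out in the source. For any $c \in \mathbb{Z}^{+}$, Savitch's theorem gives $\mathsf{NSPACE}(n^{c}) \subseteq \mathsf{DSPACE}(n^{2c})$, and $n^{2c}$ is still a polynomial, so
$$\mathsf{NPSPACE} = \bigcup_{c \in \mathbb{Z}^{+}} \mathsf{NSPACE}(n^{c}) \subseteq \bigcup_{c \in \mathbb{Z}^{+}} \mathsf{DSPACE}(n^{2c}) \subseteq \mathsf{PSPACE},$$
while the reverse containment $\mathsf{PSPACE} \subseteq \mathsf{NPSPACE}$ holds trivially, since every deterministic machine is a special case of a non-deterministic one.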
The Immerman–Szelepcsényi theorem states that, again for $f \in \Omega(\log n)$, $\mathsf{NSPACE}(f(n))$ is closed under complementation. This shows another qualitative difference between time and space complexity classes, as nondeterministic time complexity classes are not believed to be closed under complementation; for instance, it is conjectured that NP ≠ co-NP.
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
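As a minimal sketch of how such operation counting works (an illustrative example assumed here, not taken from the source), the following Python function instruments a linear scan for the minimum of a nonempty list; the scan performs exactly $n - 1$ comparisons on an $n$-element input, matching a $\Theta(n)$ time estimate under the unit-cost model.

```python
def min_with_count(items):
    """Returns the minimum of a nonempty list and the number of
    elementary comparisons performed while finding it."""
    comparisons = 0
    smallest = items[0]
    for value in items[1:]:
        comparisons += 1          # one elementary comparison per element
        if value < smallest:
            smallest = value
    return smallest, comparisons


# Example: a 1000-element input yields exactly 999 comparisons,
# consistent with the Theta(n) estimate.
```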
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory. In general, a complexity class is defined in terms of a type of computational problem, a model of computation, and a bounded resource like time or memory. In particular, most complexity classes consist of decision problems that are solvable with a Turing machine, and are differentiated by their time or space (memory) requirements.
In computational complexity theory, L (also known as LSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
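As a concrete illustration of this pointer-based use of memory (an assumed example, not from the original text), deciding whether a string is a palindrome needs only two indices into the read-only input; each index takes $O(\log n)$ bits to address an $n$-character string, so the writable memory is logarithmic.

```python
def is_palindrome(s: str) -> bool:
    """Decides palindromes using two pointers into the read-only input.

    The input string itself is not counted against the space bound;
    the working memory is just the indices i and j, each needing
    O(log n) bits for an n-character input.
    """
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True
```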