
Category: Theoretical computer science

Summary

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.
It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides one widely cited description of the field's scope.
History of computer science
While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limits on which statements can be proved or disproved.
Information theory was added to the field with Claude Shannon's 1948 paper "A Mathematical Theory of Communication". In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. As mounting biological data supported this hypothesis, with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory.
With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, which took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time; if implemented, this would render some modern public-key cryptography algorithms, such as RSA, insecure.
Modern theoretical computer science research builds on these foundational developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below:
Algorithm
An algorithm is a step-by-step procedure for calculations.
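As an illustration, Euclid's algorithm for the greatest common divisor is a classic example of such a step-by-step procedure; the Python sketch below (function name ours) is one minimal rendering of it.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Each iteration strictly shrinks the second argument, so the procedure is guaranteed to terminate, which is part of what makes it an algorithm rather than an arbitrary process.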

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related concepts (650)

Related courses (189)

Related startups (5)

Related MOOCs (41)

Related people (55)

Related publications (210)

Related units (2)

Related categories (308)

Standard ML

Standard ML (SML) is a general-purpose, modular, functional programming language with compile-time type checking and type inference. It is popular among compiler writers and programming language researchers, as well as in the development of theorem provers. Standard ML is a modern dialect of ML, the language used in the Logic for Computable Functions (LCF) theorem-proving project. It is distinctive among widely used languages in that it has a formal specification, given as typing rules and operational semantics in The Definition of Standard ML.

A* search algorithm

A* (pronounced "A-star") is a graph traversal and path search algorithm, which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. One major practical drawback is its space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance, as well as memory-bounded approaches; however, A* is still the best solution in many cases.
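A minimal sketch of A* in Python, assuming an admissible heuristic (one that never overestimates the remaining cost); the toy graph and heuristic values below are invented for illustration.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand nodes in increasing order of f(n) = g(n) + h(n),
    where g is the cost so far and h is an admissible heuristic."""
    # frontier entries: (f, g, node, path-so-far)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}  # all generated nodes stay in memory (the space cost)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, w in graph[node]:
            ng = g + w
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h[nbr], ng, nbr, path + [nbr]))
    return None  # goal unreachable

# Invented example: edge weights and heuristic estimates toward "D".
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, h, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```

Note that `best_g` and the frontier retain every generated node, which is exactly the space-complexity drawback that motivates memory-bounded variants.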

Bell state

The Bell states, or EPR pairs, are specific quantum states of two qubits that represent the simplest examples of quantum entanglement; conceptually, they fall under the study of quantum information science. The Bell states form an entangled, normalized basis. This normalization implies that the overall probability of finding the particle in one of the mentioned states is 1: $\langle \Phi | \Phi \rangle = 1$. Entanglement is a basis-independent result of superposition.
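Explicitly, the four Bell states can be written in the computational basis as:

```latex
\begin{aligned}
|\Phi^{\pm}\rangle &= \frac{1}{\sqrt{2}}\left(|00\rangle \pm |11\rangle\right)\\
|\Psi^{\pm}\rangle &= \frac{1}{\sqrt{2}}\left(|01\rangle \pm |10\rangle\right)
\end{aligned}
```

The factor $1/\sqrt{2}$ supplies the normalization, and measuring either qubit immediately fixes the outcome of measuring the other, which is the entanglement these states exemplify.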

Related lectures (1,000)

The objective of this course is to introduce students to algorithmic thinking, to familiarize them with the fundamentals of computer science, and to develop a first competence in programming (

Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics a

We teach the fundamental aspects of analyzing and interpreting computer languages, including the techniques to build compilers. You will build a working compiler from an elegant functional language in

Cyberbotics

Active in robotics, simulation and modeling. Cyberbotics has provided Webots, an open-source robot simulator, since 1998, offering a comprehensive environment for modeling, programming, and simulating robots across industry, education, and research.

Didel

Active in education, arduino programming and microcontroller resources. Didel specializes in C Arduino programming education and tools, offering a range of products to enhance programming skills and understanding of microcontrollers.

Ligentec

Active in photonics, silicon nitride and integrated circuits. Ligentec is a B2B company manufacturing advanced Photonic Integrated Circuits with low loss and high efficiency, enabling applications in communication, Quantum technologies, LiDAR, and Biosensors.

Parallelism and Concurrency

(merge of parprog1, scala-reactive, scala-spark-big-data)

Functional Programming

In this course you will discover the elements of the functional programming style and learn how to apply them usefully in your daily programming tasks. You will also develop a solid foundation for rea

Functional Programming Principles in Scala [retired]

This advanced undergraduate programming course covers the principles of functional programming using Scala, including the use of functions as values, recursion, immutability, pattern matching, higher-

Code Generation Lab (CS-320: Computer language processing)

Covers generating code for a compiler, translating an Amy program to WebAssembly, including memory management and pattern matching compilation.

Recursive Algorithms: Induction & Sorting (CS-101: Advanced information, computation, communication I)

Explores induction, recursion, and sorting algorithms, including merge sort and the proof of correctness for recursive algorithms.

Class hierarchies: pattern matching (CS-214: Software construction)

Covers class hierarchies, pattern matching, function values, and function calls in Scala.

High-level programming languages

In computer science, a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate (or even hide entirely) significant areas of computing systems (e.g. memory management), making the process of developing a program simpler and more understandable than when using a lower-level language. The amount of abstraction provided defines how "high-level" a programming language is.

Data structures

In computer science, a data structure is a data organization, management, and storage format that is usually chosen for efficient access to data. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data, i.e., it is an algebraic structure about data. Data structures serve as the basis for abstract data types (ADT). The ADT defines the logical form of the data type. The data structure implements the physical form of the data type.
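The ADT/implementation distinction above can be made concrete with a small sketch (our own, in Python): a stack ADT defines the logical LIFO interface, while a list provides the physical storage.

```python
class Stack:
    """Stack ADT: the logical form is LIFO push/pop;
    the physical form here is a Python list."""

    def __init__(self):
        self._items = []  # physical storage backing the ADT

    def push(self, x):
        self._items.append(x)  # amortized O(1)

    def pop(self):
        return self._items.pop()  # removes and returns the most recent push

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```

The same ADT could equally be implemented on a linked list; clients depending only on `push` and `pop` would not notice the change, which is the point of separating logical from physical form.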

Quantum computing

A quantum computer is a computer that exploits quantum mechanical phenomena. At small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer.

The enormous advances in the ability to detect and manipulate single quantum states have led to the emerging field of quantum technologies. Among these, quantum computation is the most far-reaching and challenging, aiming to solve problems that classical computers could never address because of exponential scaling, while quantum sensing exploits the ability to address single quantum states to realize ultra-sensitive and precise detectors. Defect centers in semiconductors play a primary role in these fields. The possibility of storing information in the spin of their ground state, manipulating it through microwaves, and reading it optically allows them to be used as qubits. In addition, the very sharp dependence of their properties on temperature, strain, and magnetic fields makes them very promising quantum sensors.

In this thesis we aim to contribute to the progress of quantum technologies at both the hardware and the software level. From a hardware point of view, we study a key property of defect centers in semiconductors, phonon-assisted luminescence, which can be measured to read out the information stored in a quantum bit, or to detect temperature variations. We predict the luminescence and study the exciton-phonon couplings within a rigorous many-body perturbation theory framework, an analysis that has never before been performed for defect centers. In particular, we study the optical emission of the negatively charged boron vacancy in 2D hexagonal boron nitride, which currently stands out among defect centers in 2D materials thanks to its promise for applications in quantum information and quantum sensing. We show that phonons are responsible for the observed luminescence, which would otherwise be dark due to symmetry. We also show that the symmetry breaking induced by the static Jahn-Teller effect cannot account for the experimentally observed peak at 1.5 eV.

Knowledge of the coupling between electrons and phonons is fundamental for accurately predicting all the features of the photoluminescence spectrum. However, the large number of atoms in a defect supercell hinders the use of density functional perturbation theory to study this coupling. In this work we present a finite-differences technique to calculate the electron-phonon matrix elements, which exploits the symmetries of the defect so as to reuse the very same set of displacements needed for the phonon calculation. The computation of electron-phonon coupling thus becomes a simple post-processing step of the finite-differences phonon calculation.

On the quantum software side, we propose an improved quantum algorithm to calculate the Green's function through real-time propagation, and use it to compute the retarded Green's function for the 2-, 3-, and 4-site Hubbard models. This novel protocol significantly reduces the number of controlled operations compared to those previously suggested in the literature. The reduction is quite remarkable for the 2-site Hubbard model, for which we show that it is possible to obtain the exact time propagation of the $\ket{N\pm 1}$ states by exponentiating a single Pauli component of the Hamiltonian, allowing us to perform the calculations on an actual superconducting quantum processor.

The hardware complexity of modern machines makes the design of adequate programming models crucial for jointly ensuring performance, portability, and productivity in high-performance computing (HPC). Sequential task-based programming models paired with advanced runtime systems allow the programmer to write a sequential algorithm independently of the hardware architecture in a productive and portable manner, and let a third-party software layer (the runtime system) deal with the burden of scheduling a correct, parallel execution of that algorithm to ensure performance. Many HPC algorithms have been implemented successfully following this paradigm, a testament to its effectiveness. Developing algorithms that specifically require fine-grained tasks under this model is still considered prohibitive, however, due to per-task management overhead [1], forcing the programmer to resort to a less abstract, and hence more complex, "task+X" model. We thus investigate the possibility of offering a tailored execution model that trades dynamic mapping for efficiency by using a decentralized, conservative, in-order execution of the task flow, while preserving the benefits of the sequential task-based programming model. We propose a formal specification of the execution model as well as a prototype implementation, which we assess on a shared-memory multicore architecture with several synthetic workloads. The results show that, provided the programmer supplies a proper task mapping, the pressure on the runtime system is significantly reduced and the execution of fine-grained task flows is much more efficient.

As it has become easier and cheaper to collect big datasets over the last few decades, designing efficient and low-cost algorithms for these datasets has attracted unprecedented attention. However, in most applications, even storing datasets as acquired has become extremely costly and inefficient, which motivates the study of sublinear algorithms. This thesis studies two fundamental graph problems in the sublinear regime, and additionally presents a fast kernel density estimation algorithm and data structure.

The first part of this thesis focuses on graph spectral sparsification in dynamic streams. Our algorithm achieves almost optimal runtime and space simultaneously in a single pass. Our method is based on a novel bucketing scheme that enables us to recover high-effective-resistance edges faster. This contribution presents a novel approach to the effective resistance embedding of the graph, using locality-sensitive hash functions, with possible further applications.

The second part of this thesis presents spanner construction results in the dynamic streaming and simultaneous communication models. First, we show how one can construct an $\tilde{O}(n^{2/3})$-spanner using the above-mentioned almost-optimal single-pass spectral sparsifier, resulting in the first single-pass algorithm for non-trivial spanner construction in the literature. Then, we generalize this result to constructing $\tilde{O}(n^{2/3(1-\alpha)})$-spanners using $\tilde{O}(n^{1+\alpha})$ space for any $\alpha \in [0,1]$, providing a smooth trade-off between distortion and memory complexity. Moreover, we study the simultaneous communication model and propose a novel protocol with low per-player information. We also show how to leverage more rounds of communication in this setting to achieve better distortion guarantees.

Finally, in the third part of this thesis, we study the kernel density estimation problem. In this problem, given a kernel function, an input dataset imposes a kernel density on the space. The goal is to design fast and memory-efficient data structures that can output approximations to the kernel density at queried points. This thesis presents a data structure, based on classical near neighbor search and locality-sensitive hashing techniques, that improves or matches the query time and space complexity for the radial kernels considered in the literature. The approach implements (approximate) importance sampling for each distance range and then uses near neighbor search algorithms to recover points from these distance ranges. We then show how to improve the runtime for a class of radial kernels that includes the Gaussian kernel, using recent advances in data-dependent near neighbor search data structures.
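For context, the exact quantity that such data structures approximate is the naive kernel density estimate, which sums one kernel evaluation per data point. The one-dimensional Gaussian-kernel version below is a textbook sketch (names and bandwidth choice ours), not the thesis's sublinear data structure.

```python
import math

def kde(data, query, bandwidth=1.0):
    """Naive Gaussian kernel density estimate at a query point:
    O(n) kernel evaluations per query. Sublinear data structures
    trade this exactness for faster queries and less memory."""
    n = len(data)
    scale = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return scale * sum(
        math.exp(-0.5 * ((query - x) / bandwidth) ** 2) for x in data
    )

print(round(kde([0.0, 1.0, 2.0], 1.0), 4))  # 0.2943
```

The linear scan over `data` is exactly the cost that importance sampling over distance ranges, combined with near neighbor search, is designed to avoid.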