In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over a vector space (often $\mathbb{Q}^n$) or free modules (often $\mathbb{Z}^n$) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space $V$ and a norm $N$. The norm usually considered is the Euclidean norm $L^2$. However, other norms (such as $L^p$) are also considered and show up in a variety of results. Let $\lambda(L)$ denote the length of the shortest non-zero vector in the lattice $L$, that is,
$$\lambda(L) = \min_{v \in L \setminus \{0\}} \|v\|_N.$$
In the shortest vector problem (SVP), a basis of a vector space $V$ and a norm $N$ (often $L^2$) are given for a lattice $L$, and one must find the shortest non-zero vector in $L$, as measured by $N$. In other words, the algorithm should output a non-zero vector $v$ such that $\|v\|_N = \lambda(L)$.
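As a concrete (if naive) illustration of the definition, the Python sketch below finds a shortest non-zero vector of a small two-dimensional lattice by brute-force enumeration of integer coefficient vectors in a bounded box, using the Euclidean norm. The particular basis and the coefficient bound are arbitrary illustrative choices, and the approach is only feasible in very small dimension; the exponential blow-up of this enumeration is exactly what the algorithms discussed further below try to tame.

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(basis, coeff_bound=10):
    """Brute-force SVP for a tiny lattice: enumerate integer coefficient
    vectors in [-coeff_bound, coeff_bound]^n and keep the shortest non-zero
    lattice vector under the Euclidean (L2) norm. The bound is a heuristic,
    so this is only an illustration, not a general-purpose solver."""
    B = np.asarray(basis, dtype=float)
    n = B.shape[0]
    best_vec, best_len = None, np.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # the zero vector is excluded from lambda(L)
        v = np.array(coeffs) @ B          # integer combination of the basis rows
        length = np.linalg.norm(v)        # Euclidean norm ||v||_2
        if length < best_len:
            best_vec, best_len = v, length
    return best_vec, best_len

# Illustrative basis: both basis vectors are long (lengths ~102 and ~510),
# but the lattice contains the much shorter vector (2, 0).
B = [[101, 17],
     [503, 85]]
v, lam = shortest_vector_bruteforce(B)
print("shortest vector found:", v, "length:", lam)
```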
In the $\gamma$-approximation version SVP$_\gamma$, one must find a non-zero lattice vector of length at most $\gamma \cdot \lambda(L)$ for a given $\gamma \ge 1$.
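No polynomial-time algorithm is known for the exact problem, but SVP$_\gamma$ with an exponential approximation factor can be solved in polynomial time by the Lenstra–Lenstra–Lovász (LLL) basis-reduction algorithm, whose first output vector has length at most $2^{(n-1)/2}\lambda(L)$ for the usual parameter $\delta = 3/4$. The sketch below is a textbook, deliberately unoptimized LLL implementation in Python: it recomputes the Gram–Schmidt data after every update for clarity, and the example basis is the same illustrative one used above.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization (without normalization).
    Returns the orthogonal vectors B* and the mu coefficients."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu[i, j] * Bstar[j]
    return Bstar, mu

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction. The first vector of the reduced basis is an
    SVP_gamma solution with gamma = 2^((n-1)/2) when delta = 3/4."""
    B = np.array(B, dtype=float)
    n = B.shape[0]
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = int(round(float(mu[k, j])))
            if q != 0:
                B[k] -= q * B[j]
                Bstar, mu = gram_schmidt(B)
        # Lovász condition: keep going forward or swap and step back.
        if np.dot(Bstar[k], Bstar[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]   # swap b_k and b_{k-1}
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Usage: the first row of the reduced basis is a short lattice vector.
B = [[101, 17], [503, 85]]
print(lll_reduce(B))
```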
The exact version of the problem under the Euclidean norm is only known to be NP-hard under randomized reductions.
By contrast, the corresponding problem with respect to the uniform norm ($L^\infty$) is known to be NP-hard.
To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring super-exponential time ($2^{\omega(n)}$) and $\mathrm{poly}(n)$ memory, and algorithms requiring both exponential time and space ($2^{\Theta(n)}$) in the lattice dimension.
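As an illustration of the second class, the sketch below is a heavily simplified, heuristic take on the sieving idea (in the spirit of the Ajtai–Kumar–Sivakumar algorithm and later Gauss-sieve variants): sample many random lattice vectors and repeatedly shorten them by taking pairwise differences. The sample count, coefficient range, and stopping rule are arbitrary illustrative choices, and a toy sketch like this carries none of the guarantees of the actual algorithms.

```python
import numpy as np

def heuristic_sieve(basis, num_samples=200, coeff_range=5, seed=0):
    """Toy illustration of the sieving idea behind exponential-time SVP
    algorithms: keep a pool of lattice vectors and repeatedly replace a
    vector by its difference with another pool member whenever that
    difference is strictly shorter. Not a faithful AKS/GaussSieve
    implementation; purely for illustration."""
    rng = np.random.default_rng(seed)
    B = np.asarray(basis, dtype=float)
    n = B.shape[0]
    # Sample random lattice points as integer combinations of the basis rows.
    coeffs = rng.integers(-coeff_range, coeff_range + 1, size=(num_samples, n))
    P = coeffs @ B
    P = P[np.linalg.norm(P, axis=1) > 0]   # drop any zero vectors
    changed = True
    while changed:
        changed = False
        for i in range(len(P)):
            for j in range(len(P)):
                if i == j:
                    continue
                d = P[i] - P[j]
                # Replace P[i] by the difference if it is non-zero and
                # strictly shorter (with a small float tolerance).
                if 0 < np.linalg.norm(d) < np.linalg.norm(P[i]) - 1e-9:
                    P[i] = d
                    changed = True
    norms = np.linalg.norm(P, axis=1)
    return P[np.argmin(norms)], norms.min()

B = [[101, 17], [503, 85]]
v, length = heuristic_sieve(B)
print("short vector found by the sieving heuristic:", v, length)
```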
In cryptography, post-quantum cryptography (PQC) (sometimes referred to as quantum-proof, quantum-safe or quantum-resistant) refers to cryptographic algorithms (usually public-key algorithms) that are thought to be secure against a cryptanalytic attack by a quantum computer. The problem with currently popular algorithms is that their security relies on one of three hard mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem.
Lattice-based cryptography is the generic term for constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Lattice-based constructions are currently important candidates for post-quantum cryptography. Unlike more widely used and known public-key schemes such as the RSA, Diffie-Hellman or elliptic-curve cryptosystems — which could, theoretically, be defeated using Shor's algorithm on a quantum computer — some lattice-based constructions appear to be resistant to attack by both classical and quantum computers.
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where efficiently typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.