In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors; in other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, which is what makes it useful in cryptography.
More precisely, the LWE problem is defined as follows. Let $\mathbb{Z}_q$ denote the ring of integers modulo $q$ and let $\mathbb{Z}_q^n$ denote the set of $n$-vectors over $\mathbb{Z}_q$. There exists a certain unknown linear function $f : \mathbb{Z}_q^n \to \mathbb{Z}_q$, and the input to the LWE problem is a sample of pairs $(\mathbf{x}, y)$, where $\mathbf{x} \in \mathbb{Z}_q^n$ and $y \in \mathbb{Z}_q$, so that with high probability $y = f(\mathbf{x})$. Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function $f$, or some close approximation thereof, with high probability.
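As a concrete illustration, the sketch below generates samples of the commonly used discrete variant, where each sample has the form $b = \langle \mathbf{a}, \mathbf{s} \rangle + e \bmod q$. It is a minimal sketch, not a secure instantiation; the parameters $n$, $q$ and the noise width are toy values assumed purely for readability.

```python
# Minimal sketch of the discrete LWE variant over Z_q (toy parameters,
# purely illustrative -- not secure choices).
import numpy as np

n, q = 8, 97                    # dimension and modulus (assumed toy values)
rng = np.random.default_rng(0)

s = rng.integers(0, q, size=n)  # the secret vector in Z_q^n

def lwe_sample():
    """Return one noisy sample (a, b) with b = <a, s> + e (mod q)."""
    a = rng.integers(0, q, size=n)      # uniform vector in Z_q^n
    e = round(rng.normal(0, 2.0)) % q   # small rounded-Gaussian error
    return a, (int(a @ s) + e) % q

samples = [lwe_sample() for _ in range(4)]
# Without the error e, the secret s would follow from n samples by linear
# algebra mod q; the noise is what is conjectured to make recovery hard.
```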
The LWE problem was introduced by Oded Regev in 2005 (work for which he won the 2018 Gödel Prize); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems, such as the ring learning with errors key exchange by Peikert.
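To make the connection to public-key encryption concrete, the following is a toy sketch in the spirit of Regev's original scheme: the public key is a batch of LWE samples, encryption adds up a random subset of them and embeds the message bit near $q/2$, and decryption recovers the bit because the accumulated noise stays small. All names and parameters here are illustrative assumptions, not secure choices.

```python
# Toy sketch of a Regev-style public-key bit encryption built on LWE.
# Names and parameters (n, m, q, noise width) are illustrative assumptions.
import numpy as np

n, m, q = 16, 64, 3329           # dimension, number of samples, modulus
rng = np.random.default_rng(1)

def keygen():
    s = rng.integers(0, q, size=n)                        # secret key
    A = rng.integers(0, q, size=(m, n))                   # public matrix
    e = np.round(rng.normal(0, 1.0, size=m)).astype(int)  # small errors
    b = (A @ s + e) % q                                    # m LWE samples
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = rng.integers(0, 2, size=m)         # random 0/1 subset selector
    u = (r @ A) % q                        # sum of the selected rows of A
    v = (int(r @ b) + bit * (q // 2)) % q  # matching sum of b, bit at q/2
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - int(u @ sk)) % q              # leaves bit*(q//2) plus noise
    return int(q // 4 < d < 3 * q // 4)    # closer to q/2 means bit 1

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1))
```

The formal definition of the underlying problem, as introduced by Regev, is the following.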
Denote by $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ the additive group on reals modulo one.
Let $\mathbf{s} \in \mathbb{Z}_q^n$ be a fixed vector.
Let $\phi$ be a fixed probability distribution over $\mathbb{T}$.
Denote by $A_{\mathbf{s},\phi}$ the distribution on $\mathbb{Z}_q^n \times \mathbb{T}$ obtained as follows.
Pick a vector $\mathbf{a} \in \mathbb{Z}_q^n$ from the uniform distribution over $\mathbb{Z}_q^n$,
Pick a number $e \in \mathbb{T}$ from the distribution $\phi$,
Evaluate $t = \langle \mathbf{a}, \mathbf{s} \rangle / q + e$, where $\langle \mathbf{a}, \mathbf{s} \rangle = \sum_{i=1}^{n} a_i s_i$ is the standard inner product in $\mathbb{Z}_q^n$, the division is done in the field of reals (or more formally, this "division by $q$" is notation for the group homomorphism $\mathbb{Z}_q \to \mathbb{R}/\mathbb{Z}$ mapping $0, 1, \dots, q-1$ to $0, \tfrac{1}{q}, \dots, \tfrac{q-1}{q}$), and the final addition is in $\mathbb{T}$.
Output the pair $(\mathbf{a}, t)$.
The learning with errors problem $\mathrm{LWE}_{q,\phi}$ is to find $\mathbf{s} \in \mathbb{Z}_q^n$, given access to polynomially many samples of choice from $A_{\mathbf{s},\phi}$.
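This sampling procedure translates directly into code. The sketch below is an illustration under assumed parameters: the error distribution $\phi$ is taken to be a narrow Gaussian reduced modulo one, and the "division by $q$" is realized as the map into the reals modulo one, so each pair $(\mathbf{a}, t)$ lies in $\mathbb{Z}_q^n \times \mathbb{T}$ as in the definition.

```python
# Sketch of the sampling procedure above: pairs (a, t) in Z_q^n x T, with
# T the reals modulo one. The Gaussian phi and all parameters are assumptions.
import numpy as np

n, q, sigma = 8, 97, 0.01
rng = np.random.default_rng(2)

s = rng.integers(0, q, size=n)   # the fixed secret vector in Z_q^n

def sample_A_s_phi():
    """One draw from the distribution A_{s,phi} on Z_q^n x T."""
    a = rng.integers(0, q, size=n)           # uniform over Z_q^n
    e = rng.normal(0.0, sigma) % 1.0         # error from phi, taken in T
    t = ((int(a @ s) % q) / q + e) % 1.0     # <a, s>/q + e, addition in T
    return a, t

pairs = [sample_A_s_phi() for _ in range(5)]
# LWE_{q,phi} asks to recover s from polynomially many such pairs.
```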
The goal of the course is to introduce basic notions from public-key cryptography (PKC) as well as basic number-theoretic methods and algorithms for cryptanalysis of protocols and schemes based on PKC.
The course aims to introduce the basic concepts and results of integer optimization with special emphasis on algorithmic problems on lattices that have proved to be important in theoretical computer science ...
This advanced course will provide students with the knowledge to tackle the design of privacy-preserving ICT systems. Students will learn about existing technologies to protect privacy, and how to evaluate ...
In cryptography, post-quantum cryptography (PQC) (sometimes referred to as quantum-proof, quantum-safe or quantum-resistant) refers to cryptographic algorithms (usually public-key algorithms) that are thought to be secure against a cryptanalytic attack by a quantum computer. The problem with currently popular algorithms is that their security relies on one of three hard mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem.
Lattice-based cryptography is the generic term for constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Lattice-based constructions are currently important candidates for post-quantum cryptography. Unlike more widely used and known public-key schemes such as the RSA, Diffie-Hellman or elliptic-curve cryptosystems — which could, theoretically, be defeated using Shor's algorithm on a quantum computer — some lattice-based constructions appear to be resistant to attack by both classical and quantum computers.
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where efficiently typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
An oblivious linear function evaluation protocol, or OLE, is a two-party protocol for the function f(x) = ax + b, where a sender inputs the field elements a, b, and a receiver inputs x and learns f(x). OLE can be used to build secret-shared multiplication ...
We propose a 2-round blind signature protocol based on the random oracle heuristic and the hardness of standard lattice problems (Ring/Module-SIS/LWE and NTRU) with a signature size of 22 KB. The protocol is round-optimal and has a transcript size that can ...
We prove that Kilian's four-message succinct argument system is post-quantum secure in the standard model when instantiated with any probabilistically checkable proof and any collapsing hash function (which in turn exist based on the post-quantum hardness ...