
# Bringing Theory Closer to Practice in Post-quantum and Leakage-resilient Cryptography

## Abstract

Modern cryptography has pushed forward the need for provable security. Whereas ancient cryptography relied only on heuristic assumptions and on the secrecy of the designs, researchers nowadays try to base the security of schemes on mathematical problems that are believed hard to solve. When doing these proofs, the capabilities of potential adversaries are modeled formally. For instance, the black-box model assumes that an adversary does not learn anything from the inner state of a construction. While this assumption makes sense in some practical scenarios, it was shown that one can sometimes learn information by other means, e.g., by timing how long the computation takes. In this thesis, we focus on two different areas of cryptography. In both parts, we first take a theoretical point of view to obtain a result; we then try to adapt our results so that they are easily usable by implementers and by researchers working in practical cryptography.

In the first part of this thesis, we look at post-quantum cryptography, i.e., at cryptographic primitives that are believed secure even if (reasonably big) quantum computers are built. We introduce HELEN, a new public-key cryptosystem based on the hardness of the learning parity with noise (LPN) problem. To make our results more concrete, we suggest some practical instances which make the system easily implementable. As stated above, the design of cryptographic primitives usually relies on well-studied hard problems; however, to suggest concrete parameters for these primitives, one needs to know the precise complexity of the algorithms solving the underlying hard problem. In this thesis, we focus on two recent hard problems that became very popular in post-quantum cryptography: the learning with errors (LWE) problem and the learning with rounding (LWR) problem.
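The LPN problem underlying HELEN can be made concrete with a small sketch. The toy generator below (an illustration of the problem itself, not of HELEN; the function and parameter names are ours) produces LPN samples: random binary vectors together with their inner product with a secret, flipped with probability `tau`.

```python
import secrets
import random

def lpn_samples(secret, m, tau):
    """Generate m LPN samples (a, <a, s> + e mod 2) for a secret bit-vector s,
    where each noise bit e is 1 with probability tau."""
    n = len(secret)
    samples = []
    for _ in range(m):
        a = [secrets.randbelow(2) for _ in range(n)]
        noiseless = sum(ai * si for ai, si in zip(a, secret)) % 2
        e = 1 if random.random() < tau else 0
        samples.append((a, noiseless ^ e))
    return samples

# A tiny illustrative instance: 16-bit secret, 100 samples, noise rate 1/8.
# (Real parameters are far larger; these are for demonstration only.)
s = [secrets.randbelow(2) for _ in range(16)]
samples = lpn_samples(s, 100, 0.125)
```

Recovering `s` from such samples is exactly the problem believed to be hard, and that hardness is what a cryptosystem like HELEN builds on.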
We introduce a new algorithm that solves both problems, and we provide a careful complexity analysis so that these problems can be used to construct practical cryptographic primitives.

In the second part, we look at leakage-resilient cryptography, which studies adversaries able to obtain side-channel information from a cryptographic primitive. In the past, two main disjoint models were considered. The first one, the threshold probing model, assumes that the adversary can place a limited number of probes in a circuit and then learns all the values going through these probes. This model was used mostly by theoreticians, as it allows very elegant and convenient proofs. The second model, the noisy-leakage model, assumes that every component of the circuit leaks but that the observed signal is noisy; typically, some Gaussian noise is added to it. According to experiments, this model closely reflects the real behaviour of circuits, and it is therefore favoured by the practical cryptographic community. In this thesis, we show that a proof in the first model implies a proof in the second model, which unifies the two models and reconciles both communities. We then look at this result from a more practical point of view, and we show how it can help in evaluating the security of a chip based solely on the more standard mutual-information metric.
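The mutual-information metric mentioned above can be sketched numerically. The following toy estimator (our own illustrative choices throughout: a hypothetical device leaking the Hamming weight of an 8-bit value plus Gaussian noise, and a simple histogram estimator rather than a tuned one) shows how the metric shrinks as the noise grows, which is the intuition behind the noisy-leakage model.

```python
import math
import random

def mi_hamming_gaussian(sigma, bits=8, n=200_000, n_bins=80, seed=1):
    """Monte-Carlo estimate of I(X; L) for a uniform `bits`-bit value X and
    leakage L = HW(X) + N(0, sigma^2).  Since L depends on X only through
    its Hamming weight, I(X; L) = I(HW(X); L), which we estimate from a
    joint histogram over (HW, binned L)."""
    rng = random.Random(seed)
    lo, hi = -4 * sigma, bits + 4 * sigma       # range covering almost all of L
    width = (hi - lo) / n_bins
    joint = {}                                   # (hw, bin) -> count
    marg_l = [0] * n_bins
    marg_hw = [0] * (bits + 1)
    for _ in range(n):
        x = rng.getrandbits(bits)
        hw = bin(x).count("1")
        leak = hw + rng.gauss(0.0, sigma)
        b = min(n_bins - 1, max(0, int((leak - lo) / width)))
        joint[(hw, b)] = joint.get((hw, b), 0) + 1
        marg_l[b] += 1
        marg_hw[hw] += 1
    mi = 0.0
    for (hw, b), c in joint.items():
        p_joint = c / n
        p_prod = (marg_hw[hw] / n) * (marg_l[b] / n)
        mi += p_joint * math.log2(p_joint / p_prod)
    return mi
```

With little noise the estimate approaches the entropy of the Hamming weight (about 2.54 bits for 8-bit values); with heavy noise it drops towards zero, matching the intuition that a noisy circuit leaks little exploitable information.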



Related concepts (29)

Cryptography

Asymmetric cryptography

Safety (sûreté)

Related publications (102)


In 1984, C. H. Bennett and G. Brassard proposed a new protocol aimed at solving the problem of symmetric cryptographic key exchange, called BB84 after its authors. While a traditional method would rely on public-key cryptography (like RSA), the BB84 protocol takes advantage of the laws of quantum mechanics, for example the fact that any quantum measurement can perturb the system. The security of traditional public-key algorithms often relies on a typical hard mathematical problem: it is well known, for example, that the ability to factorize any number easily would make the use of RSA completely insecure. The security of quantum key exchange (QKE) protocols cannot be proved in a similar way. In this work, we try to give an overview of security proofs of quantum key exchange protocols, focusing on the BB84 protocol.
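The basis-sifting step of BB84 can be sketched classically. The toy simulation below (our own sketch: it models only the measurement statistics of an ideal, eavesdropper-free channel, not actual qubits) shows why Alice and Bob keep roughly half of the transmitted bits and why the kept bits agree.

```python
import secrets

def bb84_sift(n):
    """Toy BB84 sketch: Alice sends n qubits in random bases, Bob measures
    in random bases, and both keep only positions where the bases match.
    With no eavesdropper, the sifted keys match exactly."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            bob_bits.append(bit)                   # same basis: outcome is deterministic
        else:
            bob_bits.append(secrets.randbelow(2))  # wrong basis: outcome is uniform
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key   = [bob_bits[i] for i in keep]
    return alice_key, bob_key

a_key, b_key = bb84_sift(256)
```

An eavesdropper measuring in a random basis would disturb roughly a quarter of the sifted bits, which is exactly the error rate Alice and Bob test for before trusting the key.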

2003

The security of public-key cryptography relies on well-studied hard problems: problems for which we do not have efficient algorithms. Factorization and discrete logarithm are the two best-known and most used hard problems. Unfortunately, they can be easily solved on a quantum computer by Shor's algorithm. Moreover, cryptographic research demands crypto-diversity: we should offer a range of hard problems for public-key cryptography, so that if one hard problem proves to be easy, we can provide alternative solutions. Some of the candidate post-quantum hard problems, i.e., problems which are believed to be hard even on a quantum computer, are learning parity with noise (LPN), learning with errors (LWE) and the shortest vector problem (SVP). A thorough study of these problems is needed in order to assess their hardness. In this thesis we focus on the algorithmic study of LPN. LPN is an attractive hard problem, as it is believed to be post-quantum resistant and suitable for lightweight devices; in practice, it has been employed in several encryption schemes and authentication protocols. At the beginning of this thesis, we take a look at the existing LPN solving algorithms and provide the theoretical analysis that assesses their complexity. We compare the theoretical results with practice by implementing these algorithms. Studying the efficiency of all LPN solving algorithms allows us to provide secure parameters that can be used in practice. We push the state of the art further by improving the existing algorithms with the help of two new frameworks. In the first framework, we split an LPN solving algorithm into atomic steps; we study their complexity and how they impact the other steps, and we construct an algorithm that optimises their use.
Given an LPN instance characterized by its noise level and secret size, our algorithm provides the steps to follow in order to solve the instance with optimal complexity. In this way, we can assess whether an LPN instance provides the security we require, and we identify the secure instances for the applications that rely on LPN. The second framework handles problems that can be decomposed into steps of equal complexity. Here, we assume an adversary that has access to a finite or infinite number of instances of the same problem; the goal of the adversary is to succeed in just one instance as soon as possible. Our framework provides the strategy that achieves this. We characterize an LPN solving algorithm in this framework and show that we can improve its complexity in the scenario where the adversary is restricted. We show that other problems, like password guessing, can be modeled in the same framework.
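The simplest baseline among LPN solving algorithms can be sketched concretely: repeatedly draw n fresh samples, hope they are all noiseless, solve the resulting linear system over GF(2), and test the candidate against a pool of held-out samples. The code below is our own toy illustration of that Gauss-style baseline (the optimised algorithms analysed in the thesis are far more involved, and all parameters here are illustrative).

```python
import random

def gauss_gf2(rows, rhs):
    """Solve a square GF(2) system by Gaussian elimination.
    rows: n bit-vectors of length n; rhs: n bits.  Returns a solution
    vector, or None if the matrix is singular."""
    n = len(rows)
    m = [row[:] + [b] for row, b in zip(rows, rhs)]   # augmented matrix
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return None
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                m[r] = [x ^ y for x, y in zip(m[r], m[col])]
    return [m[i][n] for i in range(n)]

def solve_lpn(oracle, n, tau, pool_size=300, max_iter=2000):
    """Toy Gauss-style LPN solver: each iteration succeeds only if all n
    drawn samples happen to be noiseless, so the expected number of
    iterations grows like (1 - tau)^(-n)."""
    pool = [oracle() for _ in range(pool_size)]
    threshold = pool_size * (tau + 0.5) / 2   # between noise rate and 1/2
    for _ in range(max_iter):
        batch = [oracle() for _ in range(n)]
        cand = gauss_gf2([a for a, _ in batch], [b for _, b in batch])
        if cand is None:
            continue
        errs = sum((sum(x & y for x, y in zip(a, cand)) % 2) != b
                   for a, b in pool)
        if errs < threshold:                  # candidate explains the pool
            return cand
    return None

# Demo on a tiny instance (n = 12, noise rate 0.1):
rng = random.Random(7)
secret = [rng.randrange(2) for _ in range(12)]
def noisy_oracle():
    a = [rng.randrange(2) for _ in range(12)]
    noise = 1 if rng.random() < 0.1 else 0
    return a, (sum(x & y for x, y in zip(a, secret)) % 2) ^ noise
recovered = solve_lpn(noisy_oracle, 12, 0.1)
```

The exponential cost in the secret size is what makes this baseline useless for real parameters, and it is precisely the gap that the improved algorithms and frameworks described above aim to narrow.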

In this dissertation, we study the security of cryptographic protocols and cryptosystems, from the mathematical definition of the primitives up to their physical implementations in the real world. We propose a representation of the chronological design using six layers (cryptographic primitives, cryptographic protocols, implementation, computer insecurity, side-channel cryptanalysis, and computer-human interactions). We make the assumption that these layers should not be studied independently: indeed, many negligible security weaknesses coming from different layers can be correlated to provide devastating practical attacks on cryptosystems. However, the complexity of a complete security analysis becomes huge, and interdisciplinary knowledge is needed; these limitations are probably the reasons for the lack of complete security analyses in practice. We define a novel approach to combine and study the six layers simultaneously: we propose to follow the data flow of a system and to perform the security analysis across the six layers. This technique is applied in practice to the security analysis of computer keyboards, RC4, IEEE 802.11, and e-passports. Thanks to this method, we found 34 additional exploitable correlations in RC4, and we defined the best key-recovery attacks on WEP and WPA. We also identified weaknesses in the design and the implementation of e-passports. The security risk of every layer seems to be related to its level of complexity: the implementation layer, the computer-insecurity layer, the side-channel layer and the computer-human-interfaces layer are subject to cost-effective attacks in practice. Interestingly, these layers are not intensively studied in cryptography, where research usually stays focused on the first two layers (and some side-channel attacks). In this dissertation, we also propose frameworks for computer-aided cryptanalysis.
Indeed, when the complexity of a system is too great for manual analysis, tools may automatically find weaknesses: increasing complexity in systems adds new vulnerabilities, so straightforward but automated analysis becomes relevant. Two frameworks have been developed. The first one automatically highlights linear correlations in RC4. The second framework, called Autodafé, automatically detects buffer overflows in modern software, using a technique called Fuzzing by Weighting Attacks with Markers.
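The marker-based fuzzing idea can be illustrated with a minimal sketch (purely our own toy: Autodafé itself instruments real binaries and weights markers by how they reach dangerous calls; the marker list and the fragile `parse` target below are invented for illustration).

```python
def fuzz(target, template, markers=None):
    """Minimal fuzzing sketch: substitute each attack marker into each field
    of a template input and record which substitutions make the target raise."""
    if markers is None:
        # Classic attack markers: long runs, format strings, NULs, boundary ints.
        markers = ["A" * 10_000, "%s%s%s%s", "\x00" * 64, "-1", str(2**32)]
    failures = []
    for i, _ in enumerate(template):
        for m in markers:
            mutated = list(template)
            mutated[i] = m
            try:
                target(mutated)
            except Exception as exc:
                failures.append((i, m[:16], type(exc).__name__))
    return failures

# A deliberately fragile parser used as the fuzzing target.
def parse(fields):
    name, count = fields
    if len(name) > 64:
        raise ValueError("name too long")   # stand-in for a buffer overflow
    return name, int(count)                 # raises on non-numeric count

crashes = fuzz(parse, ["alice", "3"])
```

Each recorded failure pairs a field index with the marker that triggered it, which is the kind of signal a real fuzzer uses to prioritise further mutations.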