
Publication

# Automated Side-Channel Vulnerability Discovery and Hardening

Abstract

In traditional cryptography, an attacker tries to infer a mathematical relationship between the inputs and outputs of a cryptosystem to recover secret information. As the theoretical basis of cryptographic algorithms advanced, this task became harder and attackers started to seek different approaches. A family of attacks known as side-channel attacks focuses on using information leaked through the underlying device while the cryptographic algorithm is running. For instance, a power analysis attack can exploit the relationship between the inputs of a cryptosystem and the underlying device's power consumption while it performs cryptographic operations on these inputs. Such attacks have proven so successful and efficient in practice that prudent designers now insert countermeasures against them into their hardware and software systems. However, the insertion process is challenging for a non-expert in cryptography due to several factors, including the unnatural structure of the countermeasures (e.g., obfuscating the implementation), the use of non-standard elements in the design (e.g., non-CMOS logic styles), and conflicts with standard design parameters and the optimization passes of design tools (e.g., adding dummy operations, which design tools normally eliminate to increase performance). To facilitate a reliably secure design process, this thesis proposes automated methodologies that analyze a given hardware or software cryptosystem and insert appropriate side-channel countermeasures. We first propose one type of hardware countermeasure and show how it can easily be integrated into the standard electronic design automation flow to protect high-level hardware implementations. The countermeasure is based on adding random jitter to the clocks of sequential circuit elements, and incurs a modest area and energy overhead. Next, we propose a hardware extension unit, an instruction shuffler, for existing processors. The unit is very lightweight and does not require any architectural changes, and hence can be used with any processor, increasing the side-channel resistance of the overall system. We then present a compiler, which can easily be combined with off-the-shelf compilers, to automatically apply countermeasures to given software implementations. We show that the compiler can produce protected implementations that are as efficient as their manually optimized counterparts, eliminating the need for designer expertise and time. Finally, we present an automated security verification methodology, which checks certain properties to detect potential vulnerabilities in a (manually or automatically) protected implementation. Our experiments show that we can successfully detect common security problems in a flawed implementation of a countermeasure within a reasonable amount of time.
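As a rough illustration of the power-analysis setting described above, the following sketch simulates a correlation attack against a toy 4-bit S-box (PRESENT's): the measured "power" of each encryption is modeled as the Hamming weight of the S-box output plus Gaussian noise, and the key guess whose predicted leakage correlates best with the traces wins. The S-box choice, trace count, and noise level are illustrative assumptions, not parameters from the thesis.

```python
# Toy correlation power analysis (CPA) sketch -- illustrative, not from the thesis.
# Leakage model: one "power sample" per encryption, equal to the Hamming weight
# of the S-box output plus Gaussian noise.
import random, statistics

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT's 4-bit S-box

def hw(x):
    return bin(x).count("1")

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(1)
SECRET_KEY = 0xA
plaintexts = [random.randrange(16) for _ in range(2000)]
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 1.0) for p in plaintexts]

# For every key guess, correlate the predicted leakage with the measured traces;
# the correct guess produces the strongest correlation.
scores = {k: pearson([hw(SBOX[p ^ k]) for p in plaintexts], traces)
          for k in range(16)}
best = max(scores, key=scores.get)
print(hex(best))
```

With the noiseless signal at unit variance and noise σ = 1, the correct guess correlates at roughly 0.7 while wrong guesses stay far lower thanks to the S-box nonlinearity, which is why a few thousand traces suffice in this toy setting.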

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to every other page on this site. Be sure to verify the information against official EPFL sources.


Related concepts (12)

Information

Side-channel attack

In the field of computer security, a side-channel attack (SCA) is a computer attack which, without challenging the theoretical robustness of the methods and procedures of…

x86 instruction set

The x86 instruction set has undergone many changes over time. Most of them are merely additions to the initial instruction set that bring new functionality…

Related publications (3)

Recent advances in data processing and communication systems have led to a continuous increase in the amount of data communicated over today's networks. These large volumes of data pose new challenges to the current networking infrastructure, which offers only a best-effort mechanism for data delivery. The emergence of new distributed network architectures, such as peer-to-peer networks and wireless mesh networks, and the need for efficient data delivery mechanisms have motivated researchers to reconsider the way that information is communicated and processed in networks. This has given rise to a new research field called network coding. The network coding paradigm departs from the traditional routing principle, where information is simply relayed by the network nodes towards the destination, and introduces some intelligence into the network through coding at the intermediate nodes. This in-network data processing has been shown to substantially improve the performance of data delivery systems in terms of throughput and error resilience in networks with high path diversity. Motivated by the promising results in network coding research, we focus in this thesis on the design of network coding algorithms for the simultaneous transmission of multiple data sources in overlay networks. We investigate several problems that arise in the context of inter-session network coding, namely (i) decoding delay minimization in inter-session network coding, (ii) distributed rate allocation for inter-session network coding, and (iii) correlation-aware decoding of incomplete network coded data. We start by proposing a novel framework for data delivery from multiple sources to multiple clients in an overlay wireline network, where intermediate nodes employ randomized inter-session network coding. We consider networks with high resource diversity, which creates network coding opportunities with possibly large gains in terms of throughput, delay and error robustness.
However, the coding operations in the intermediate nodes must be carefully designed in order to enable efficient data delivery. We look at the problem from the decoding delay perspective and design solutions that lead to a small decoding delay at the clients through proper coding and rate allocation. We cast the optimization problem as a rate allocation problem, which seeks the coding operations that minimize the average decoding delay in the client population. We demonstrate the validity of our algorithm through simulations in representative network topologies. The results show that an effective combination of intra- and inter-session network coding based on randomized linear coding makes it possible to reach small decoding delays and to better exploit the available network resources even in challenging network settings. Next, we design a distributed rate allocation algorithm where the users decide locally how many intra- and inter-session network coded packets should be requested from the parent nodes in order to minimize the decoding delay. The ability to make coding decisions locally, with only partial knowledge of the network statistics, is of crucial importance for applications where users are organized in dynamic overlay networks. We propose a receiver-driven communication protocol that operates in two rounds. First, the users request and obtain information regarding the network conditions and packet availability in their local neighborhood. Then, every user independently optimizes the rate allocation among the different possible intra- and inter-session packet combinations to be requested from its parents. We also introduce the novel concept of equivalent flows, which makes it possible to efficiently estimate the expected number of packets that are necessary for decoding and hence simplifies the rate allocation process. Experimental results indicate that our algorithm is capable of eliminating bottlenecks and reducing the decoding delay of users with limited resources.
We further investigate the application of the proposed distributed rate allocation algorithm to the transmission of video sequences and validate the performance of our system using the NS-3 simulator. The simulation results show that the proposed rate allocation algorithm succeeds in improving the quality of the delivered video compared to solutions based on intra-session network coding. Finally, we investigate the problem of decoding the source information from an incomplete set of network coded data with the help of source priors in a finite algebraic field. The inability to form a complete decoding system can often be caused by transmission losses or by timing constraints imposed by the application. In this case, exact reconstruction of the source data by conventional algorithms such as Gaussian elimination is not feasible; however, partial recovery of the source data may still be possible, which can be useful in applications where approximate reconstruction is informative. We use the statistical characteristics of the source data in order to perform approximate decoding. We first analyze the performance of a hypothetical maximum a posteriori decoder, which recovers the source data from an incomplete set of network coded data given the joint statistics of the sources. We derive an upper bound on the probability of erroneous source sequence decoding as a function of the system parameters. We then propose a constructive solution to the approximate decoding problem and design an iterative decoding algorithm based on message passing, which jointly considers the network coding and the correlation constraints. We illustrate the performance of our decoding algorithm through extensive simulations on synthetic and real data sets. The results demonstrate that, even with a simple correlation model expressed as correlation noise between pairs of sources, the original source data can be partially decoded in practice from an incomplete set of network coded symbols.
Overall, this thesis addresses several important issues related to the design of efficient data delivery methods with inter-session network coding. Our novel framework for decoding delay minimization can impact the development of practical inter-session network coding algorithms that are appropriate for applications with low delay requirements. Our rate allocation algorithms are able to exploit the high resource diversity of modern networking systems and represent an effective alternative in the development of distributed communication systems. Finally, our algorithm for data recovery from incomplete network coded data using correlation priors can contribute significantly to the improvement of the delivered data quality and provide new insights towards the design of joint source and network coding algorithms.
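For readers unfamiliar with the basic mechanism, the following minimal sketch shows random linear network coding over GF(2) with the conventional Gaussian-elimination decoder that the abstract contrasts with approximate decoding: a hypothetical encoding node XORs random subsets of the source packets, and a client decodes once it has collected enough independent combinations. Packet sizes, the field choice, and the single-node setup are illustrative simplifications, not the algorithms of the thesis.

```python
# Minimal random linear network coding sketch over GF(2) -- illustrative only.
import random

random.seed(7)
K = 4
source = [random.randrange(256) for _ in range(K)]  # one payload byte per packet

def encode():
    """Draw a random nonzero GF(2) coefficient vector and XOR the chosen sources."""
    coeffs = [random.randrange(2) for _ in range(K)]
    while not any(coeffs):
        coeffs = [random.randrange(2) for _ in range(K)]
    payload = 0
    for c, s in zip(coeffs, source):
        if c:
            payload ^= s
    return coeffs, payload

def decode(packets):
    """Gauss-Jordan elimination over GF(2); returns the sources at full rank."""
    rows = [(list(c), p) for c, p in packets]
    for col in range(K):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # rank deficient: keep collecting packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(K)]

received, result = [], None
while result is None:
    received.append(encode())
    result = decode(received)
print(result == source)  # True once enough independent combinations arrive
```

When fewer than K independent packets arrive, `decode` returns `None`; this is exactly the situation where the approximate decoding studied in the thesis becomes useful, since partial recovery from the rank-deficient system may still be possible.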

Modern cryptography has pushed forward the need for provable security. Whereas ancient cryptography relied only on heuristic assumptions and the secrecy of the designs, researchers nowadays try to base the security of schemes on mathematical problems that are believed to be hard to solve. When constructing these proofs, the capabilities of potential adversaries are modeled formally. For instance, the black-box model assumes that an adversary does not learn anything from the inner state of a construction. While this assumption makes sense in some practical scenarios, it was shown that one can sometimes learn information by other means, e.g., by timing how long a computation takes. In this thesis, we focus on two different areas of cryptography. In both parts, we first take a theoretical point of view to obtain a result. We then try to adapt our results so that they are easily usable by implementers and by researchers working in practical cryptography. In the first part of this thesis, we look at post-quantum cryptography, i.e., at cryptographic primitives that are believed to remain secure even if (reasonably big) quantum computers are built. We introduce HELEN, a new public-key cryptosystem based on the hardness of the learning parity with noise (LPN) problem. To make our results more concrete, we suggest some practical instances which make the system easily implementable. As stated above, the design of cryptographic primitives usually relies on some well-studied hard problems. However, to suggest concrete parameters for these primitives, one needs to know the precise complexity of the algorithms solving the underlying hard problem. In this thesis, we focus on two recent hard problems that became very popular in post-quantum cryptography: the learning with errors (LWE) problem and the learning with rounding (LWR) problem.
We introduce a new algorithm that solves both problems and provide a careful complexity analysis so that these problems can be used to construct practical cryptographic primitives. In the second part, we look at leakage-resilient cryptography, which studies adversaries able to obtain side-channel information from a cryptographic primitive. In the past, two main disjoint models were considered. The first one, the threshold probing model, assumes that the adversary can place a limited number of probes in a circuit; the adversary then learns all the values going through these probes. This model was used mostly by theoreticians, as it allows very elegant and convenient proofs. The second model, the noisy-leakage model, assumes that every component of the circuit leaks but that the observed signal is noisy; typically, some Gaussian noise is added to it. According to experiments, this model closely depicts the real behaviour of circuits, and hence it is cherished by the practical cryptographic community. In this thesis, we show that a proof in the first model implies a proof in the second model, which unifies the two models and reconciles both communities. We then look at this result from a more practical point of view and show how it can help in the process of evaluating the security of a chip based solely on the more standard mutual information metric.
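The threshold probing model discussed above is usually countered by secret sharing (masking). The sketch below, an illustrative toy rather than a construction from the thesis, splits a secret bit into two shares so that any single probed wire is uniformly distributed and therefore reveals nothing on its own.

```python
# First-order Boolean masking sketch (threshold probing model intuition).
# A secret bit b is split into shares (m, b ^ m) with a fresh random mask m.
import random, collections

random.seed(0)

def share(bit):
    m = random.randrange(2)          # fresh random mask
    return (m, bit ^ m)

def masked_xor(x, y):
    """XOR computed share-wise, without ever recombining the secret."""
    return (x[0] ^ y[0], x[1] ^ y[1])

def unmask(x):
    return x[0] ^ x[1]

# Empirical check: probing one share yields the same distribution whether the
# secret is 0 or 1, so a single probe reveals nothing.
counts = {0: collections.Counter(), 1: collections.Counter()}
for secret in (0, 1):
    for _ in range(10000):
        counts[secret][share(secret)[1]] += 1
print(counts[0], counts[1])  # both roughly 50/50 over {0, 1}
```

Correctness is preserved: `unmask(masked_xor(share(a), share(b)))` equals `a ^ b` for any masks, while each individual wire stays uniform. An adversary probing two wires defeats this first-order scheme, which is where the threshold in the probing model comes from.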

Ali Galip Bayrak, Philip Brisk, Paolo Ienne, Francesco Regazzoni

In cryptography, side-channel attacks, such as power analysis, attempt to uncover secret information from the physical implementation of cryptosystems rather than exploiting weaknesses in the cryptographic algorithms themselves. The design and implementation of physically secure cryptosystems is a challenge for both hardware and software designers. Measuring and evaluating the security of a system is a manual and empirical process, which is costly and time consuming; this work demonstrates that it is possible to automate these processes. We introduce a systematic methodology for the automatic application of software countermeasures and demonstrate its effectiveness on an AES software implementation running on an 8-bit AVR microcontroller. The framework identifies the instructions of the implementation that are most vulnerable to power analysis attacks, and then transforms the software using a chosen countermeasure to protect them. Lastly, it evaluates the security of the system using an information-theoretic metric and a direct attack.
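The information-theoretic metric mentioned above can be pictured as estimating the mutual information between a sensitive value and the (discretised) leakage of each instruction. The toy sketch below, with an assumed Hamming-weight leakage model and 4-bit values rather than anything taken from the paper, shows how an unprotected intermediate scores high while a masked one scores near zero.

```python
# Plug-in mutual-information vulnerability metric -- illustrative toy only.
import random, math, collections

random.seed(2)

def hw(x):
    return bin(x).count("1")

def mutual_information(pairs):
    """Plug-in estimate of I(secret; leakage) from (secret, leakage) samples."""
    n = len(pairs)
    pxy = collections.Counter(pairs)
    px = collections.Counter(s for s, _ in pairs)
    py = collections.Counter(l for _, l in pairs)
    return sum(c / n * math.log2((c / n) / ((px[s] / n) * (py[l] / n)))
               for (s, l), c in pxy.items())

secrets = [random.randrange(16) for _ in range(20000)]
# Unprotected instruction: the leakage is the Hamming weight of the secret itself.
unprotected = [(s, hw(s)) for s in secrets]
# Masked instruction: only the randomized share s ^ m is ever on the bus.
masked = [(s, hw(s ^ random.randrange(16))) for s in secrets]

print(mutual_information(unprotected))  # ~2 bits: flagged as vulnerable
print(mutual_information(masked))       # ~0 bits: the mask hides the secret
```

A framework like the one described can rank instructions by such a score and apply a countermeasure only where the estimated leakage is significant; the small positive value for the masked case is finite-sample estimation bias, not real leakage.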