
# Computation of a 30750-Bit Binary Field Discrete Logarithm

Robert Granger, Thorsten Kleinjung, Arjen Lenstra, Benjamin Pierre Charles Wesolowski

*American Mathematical Society, 2021*

Journal paper


Abstract

This paper reports on the computation of a discrete logarithm in the finite field F_(2^30750), breaking by a large margin the previous record, which was set in January 2014 by a computation in F_(2^9234). The present computation made essential use of the elimination step of the quasi-polynomial algorithm due to Granger, Kleinjung and Zumbrägel, and is the first large-scale experiment to truly test and successfully demonstrate its potential when applied recursively, which is when it leads to the stated complexity. It required the equivalent of about 2900 core years on a single core of an Intel Xeon Ivy Bridge processor running at 2.6 GHz, which is comparable to the approximately 3100 core years expended for the discrete logarithm record for prime fields, set in a field of bit-length 795, and demonstrates just how much easier the problem is for this level of computational effort. In order to make the computation feasible we introduced several innovative techniques for the elimination of small degree irreducible elements, which meant that we avoided performing any costly Gröbner basis computations, in contrast to all previous records since early 2013. While such computations are crucial to the L(1/4 + o(1)) complexity algorithms, they were simply too slow for our purposes. Finally, this computation should serve as a serious deterrent to cryptographers who are still proposing to rely on the discrete logarithm security of such finite fields in applications, despite the existence of two quasi-polynomial algorithms and the prospect of even faster algorithms being developed.



Related concepts (14)

Discrete logarithm

In mathematics, for given real numbers a and b, the logarithm log_b(a) is a number x such that b^x = a. Analogously, in any group G, powers b^k can be defined for all integers k, and the discrete logarithm log_b(a) is an integer k such that b^k = a.
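To make the definition concrete, here is a minimal illustration (unrelated to the record computation's actual methods, which use far more sophisticated algorithms): a brute-force discrete logarithm search in the multiplicative group (Z/pZ)*. Its running time is exponential in the bit-length of p, which is exactly why cryptography relies on large, carefully chosen groups.

```python
# Brute-force discrete logarithm in (Z/pZ)*: find x with b^x ≡ a (mod p).
# This naive search is exponential in the bit-length of p; the record
# computations discussed on this page defeat much faster algorithms.

def discrete_log(b: int, a: int, p: int) -> int:
    """Return the least x >= 0 with b**x % p == a, or raise ValueError."""
    value = 1
    for x in range(p - 1):          # the group order divides p - 1
        if value == a % p:
            return x
        value = (value * b) % p
    raise ValueError("no discrete logarithm exists")

# Example: 3 generates (Z/101Z)*, so log_3(37) exists and the search finds it.
x = discrete_log(3, 37, 101)
assert pow(3, x, 101) == 37
```

Even doubling the bit-length of p squares the cost of this loop, which is the intuition behind "exponential" hardness.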

Finite field

In mathematics, a finite field or Galois field (so-named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules.

Finite field arithmetic

In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements), contrary to arithmetic in a field with an infinite number of elements, like the field of rational numbers.
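A small worked example may help: the record above concerns a binary field F_(2^30750), and arithmetic in any binary field F_(2^n) works the same way in miniature. The following sketch (an illustration only; the choice of GF(2^8) with the AES reduction polynomial is ours, not the paper's) represents field elements as bit patterns of polynomial coefficients over GF(2).

```python
# Arithmetic in the binary field GF(2^8): elements are polynomials over
# GF(2) reduced modulo x^8 + x^4 + x^3 + x + 1 (the AES polynomial),
# stored as 8-bit integers whose bits are the polynomial coefficients.

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_add(a: int, b: int) -> int:
    # In characteristic 2, addition is XOR: coefficients add mod 2.
    return a ^ b

def gf_mul(a: int, b: int) -> int:
    # Carry-less (polynomial) multiplication with reduction on overflow.
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # degree reached 8: reduce mod AES_POLY
            a ^= AES_POLY
    return result

def gf_inv(a: int) -> int:
    # The multiplicative group has order 2^8 - 1 = 255, so a^254 = a^(-1).
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

assert gf_add(0xB7, 0xB7) == 0                 # every element is its own negative
assert gf_mul(0x53, gf_inv(0x53)) == 1         # inverses exist for nonzero elements
```

The discrete logarithm problem in such a field asks, given a generator g of the multiplicative group and a target h, for the exponent x with g^x = h; the record computation solved this for n = 30750.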

Related publications (6)


The complexity of today's autonomous robots poses a major challenge for Artificial Intelligence. These robots are equipped with sophisticated sensors and mechanical abilities that allow them to enter our homes and interact with humans. For example, today's robots are almost all equipped with vision and several of them can move over rough terrain with wheels or legs. The methods developed so far in Artificial Intelligence, however, are not yet ready to cope with the complexity of the information gathered through the robot sensors and the need for rapid action in partially unknown and dynamic environments. In this thesis, I will argue that the apparent complexity of the environment and of the robot brain can be significantly simplified if perception, behavior, and learning are allowed to co-develop on the same time scale. In doing so, robots become sensitive to, and actively exploit, characteristics of the environment that they can tackle within their own computational and physical constraints. This line of work is grounded in philosophical and psychological research showing that perception is an active process mediated by behavior. However, computational models of active vision are very rare and often rely on architectures that are preprogrammed to detect certain characteristics of the environment. Previous work has shown that complex visual tasks, such as position- and size-invariant shape recognition as well as navigation, can be tackled with remarkably simple neural architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons were evolved while freely interacting with their environments.
I proceed further on this line of investigation and describe the application of this methodology in three situations, namely car driving with an omnidirectional camera, goal-oriented navigation of a humanoid robot, and cooperative tasks by two agents. I will show that these systems develop sensitivity to a number of oriented, retinotopic, visual features (oriented edges, corners, height) and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system that allow them to accomplish their goals. In a second set of experiments, I will show that active vision can be exploited by the robot to perform anticipatory exploration of the environment in a task that requires landmark-based navigation. Evolved robots exploit an internal expectation system that makes use of active exploration to check for expected events in the environment. I will describe a third set of experiments where, in addition to an evolutionary process, the visual system of the robot can develop receptive fields by means of unsupervised Hebbian learning, and show that these receptive fields are significantly affected by the behavior of the system and differ from those predicted by most computational models of visual cortex. Finally, I will show that these robots replicate the performance deficiencies observed in experiments of motor deprivation with kittens when they are exposed to the same type of motor deprivations. Furthermore, the analyses of our robot brains suggest an explanation for the deficiencies observed in kittens that have not yet been fully understood.

The discrete logarithm problem (DLP) in finite fields of small characteristic recently enjoyed a dramatic series of breakthrough results and computational records, with its (heuristic) complexity dropping from subexponential to quasi-polynomial. While these results asymptotically render any cryptosystem relying on the hardness of such DLPs unusable, a question remained over whether the new techniques can weaken or indeed break any of the parameters proposed in the literature for pairing-based cryptographic protocols at the industry-standard 128-bit security level. In this talk I will first describe the ideas underlying the recent developments and then introduce some techniques which allow one to answer this question affirmatively. This is joint work with Thorsten Kleinjung and Jens Zumbrägel.

2014

We consider several "provably secure" hash functions that compute simple sums in a well chosen group (G, *). Security properties of such functions provably translate in a natural way to computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists L_i of group elements, the k-sum problem asks for g_i ∊ L_i such that g_1 * g_2 * ... * g_k = 1_G. Hardness of the problem in the respective groups follows from some "standard" assumptions used in public-key cryptology such as hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems. Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of a security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included. The main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on bounds on security, we expose the limits of such an approach to provable security. A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two above functions have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard on its own, and possibly lead to provable bounds on security. Complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness. We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums, in contrast to the original reductions that related to factoring or discrete logarithms.
Although the original reductions remain valid, we measure security in a new, more aggressive way. This allows us to relax the parameters and hash faster. We obtain a function that is only three times slower compared to SHA-256 and is estimated to offer at least equivalent collision resistance. The speed can be doubled by the use of a special modulus; such a modified function is supported exclusively by the hardness of multiplicative k-sums modulo a power of two. Our efforts culminate in a new multiplicative k-sum function in finite fields that further generalizes the design of Very Smooth Hash. In contrast to the previous variants, the memory requirements of the new function are negligible. The fastest instance of the function expected to offer 128-bit collision resistance runs at 24 cycles per byte on an Intel Core i7 processor and approaches the 17.4 figure of SHA-256. The new functions proposed in this thesis do not provably achieve a usual security property such as preimage or collision resistance from a well-established assumption. They do however enjoy unconditional provable separation of inputs that collide. Changes in input that are small with respect to a well-defined measure never lead to identical output in the compression function.
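The k-sum problem defined above can be sketched in miniature. The snippet below (a toy illustration with a made-up instance, not the thesis's constructions) searches for a k-sum solution by brute force in the additive group Z/nZ; the naive search is exponential in k, which is why the abstract's tree algorithm matters for measuring hardness.

```python
# Brute-force k-sum search in the additive group Z/nZ: given k disjoint
# lists of group elements, find one element from each list whose group
# "product" (here: sum mod n) is the identity 0.

from itertools import product

def k_sum(lists, n):
    """Return a tuple (g_1, ..., g_k) with g_i from lists[i] and
    sum ≡ 0 (mod n), or None if no such tuple exists."""
    for combo in product(*lists):
        if sum(combo) % n == 0:
            return combo
    return None

# Hypothetical toy instance in Z/17Z with k = 3 lists:
lists = [[3, 5, 8], [2, 9, 11], [4, 12, 13]]
solution = k_sum(lists, 17)          # e.g. 3 + 2 + 12 = 17 ≡ 0 (mod 17)
assert solution is not None and sum(solution) % 17 == 0
```

The hash functions discussed above work in much larger (and multiplicative) groups, where this exhaustive search is utterly infeasible and security rests on the problem's conjectured hardness.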