
Concept: Set theory

Summary

Set theory is the branch of mathematical logic that studies sets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory, as a branch of mathematics, is mostly concerned with those that are relevant to mathematics as a whole.
The modern study of set theory was initiated by the German mathematicians Richard Dedekind and Georg Cantor in the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name of naive set theory. After the discovery of paradoxes within naive set theory (such as Russell's paradox, Cantor's paradox and the Burali-Forti paradox), various axiomatic systems were proposed in the early twentieth century, of which Zermelo–Fraenkel set theory (with or without the axiom of choice) is still the best-known and most studied.
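Russell's paradox, mentioned above, can be stated in one line. A sketch of the derivation from naive (unrestricted) comprehension:

```latex
% Unrestricted comprehension admits the set R of all sets
% that are not members of themselves:
R = \{\, x \mid x \notin x \,\}
% Asking whether R is a member of itself then yields a contradiction:
R \in R \iff R \notin R
```

Axiomatic systems such as Zermelo–Fraenkel set theory avoid this by restricting comprehension: a set may only be carved out of an already existing set.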
Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice.

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.



Related courses (38)

CS-452: Foundations of software

The course introduces the foundations on which programs and programming languages are built. It introduces syntax, types and semantics as building blocks that together define the properties of a program part or a language. Students will learn how to apply these concepts in their reasoning.

CS-101: Advanced information, computation, communication I

Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics as diverse as mathematical reasoning, combinatorics, discrete structures & algorithmic thinking.

CS-550: Formal verification

We introduce formal verification as an approach for developing highly reliable systems. Formal verification finds proofs that computer systems work under all relevant scenarios. We will learn how to use formal verification tools and explain the theory and the practice behind them.

Related concepts (157)

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science.

In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally, it says that given any collection of non-empty sets, it is possible to choose one element from each set, even when the collection is infinite.

A set is the mathematical model for a collection of different things; a set contains elements or members, which can be mathematical objects of any kind: numbers, symbols, points in space, lines, other geometrical shapes, variables, or even other sets.
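The last two entries fit together in a small runnable sketch (Python, purely illustrative): for a finite family of non-empty sets a choice function can be written down explicitly, which is precisely the case where the axiom of choice is not needed; the axiom matters only for arbitrary infinite families.

```python
# A finite family of non-empty sets (hypothetical example data).
family = [{2, 3, 5}, {"a", "b"}, {frozenset(), 42}]

def choice(sets):
    """Return a choice function for a finite family of non-empty sets.

    For finite families the existence of such a function is a theorem
    of ZF alone; the axiom of choice is only needed for arbitrary
    infinite families, where no explicit rule may be available.
    """
    # Pick a definite element from each set (here: minimal by repr).
    return {i: min(s, key=repr) for i, s in enumerate(sets)}

f = choice(family)
for i, s in enumerate(family):
    assert f[i] in s  # each chosen element belongs to its set
```

The `key=repr` trick only serves to make the choice deterministic for mixed-type sets; any explicit selection rule would do.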

Related people (3)

Garance Hélène Salomé Durr-Legoupil-Nicoud

Related publications (16)

The StatComp package is a Matlab statistical toolbox developed over the years by Dr. Testa and his students. It was inspired by M. R. Brown's paper Magnetohydrodynamic Turbulence: Observation and experiment [2]. It first performed the analysis of the edge magnetic turbulent field in the TCV. It was started in 2015 by A. Yantchenko and has been continually improved and extended since then. The most recent addition to the package was a set of separate functions for the "big data" analysis of the results, contributed by S. Ogier-Collin. The entire code is currently under review for release in the MHD analysis package within the SPC's General Analysis Toolkit. The present document reports the latest evolution of this package, with a view to using the characterisation of plasma turbulence to provide information useful for optimising real-time plasma control and the fusion performance of a tokamak. The mathematical theory behind the StatComp analyses and some example applications are presented in Section 2. Section 3 presents the evolution of the existing functions, the addition of a loading function for the electrostatic data from the edge of the plasma, and the multifractality and predictability analyses. These enhancements are discussed from the perspective of one particular use: characterising the turbulence in order to potentially optimise plasma control. Up-to-date running instructions and interpretation guidelines are then detailed in Section 4. The latter are based on the output figures from the analysis of a standard dataset consisting of a white-noise sample, three fractional Brownian motions with different known Hurst indices, a linear ramp, and a sample of the solar wind. Section 5 shows the results of tests on four actual shots performed on the TCV tokamak, in which the varying parameters are the signs of the poloidal magnetic field and of the plasma current.
The four shots each combine a positive or negative poloidal field with a positive or negative plasma current. The shape and position of the plasma in the vacuum vessel are the same for each shot, as are the amplitudes of the varied parameters, i.e. the magnetic field and plasma current. Emphasis is placed on the presentation and interpretation of the results obtained with the electrostatic data on the low-field side of the plasma. The results are discussed, together with the limits of the package and its possible improvements, in Section 6, before concluding in Section 7. The appendix details the structures necessary for using the package and presents examples of run commands. To give the reader a frame of reference, the main parameters and orders of magnitude related to the plasma shots in TCV are also given there. Some of the mathematical foundations of the statistical theory are elaborated as well, to complete the description of the package's different tools. Finally, the short bibliography of the sources explicitly cited in this report is complemented by a second bibliography presenting a wider selection of relevant sources, each accompanied by a brief description of its content and its link to the present study.
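The StatComp toolbox itself is Matlab code and is not reproduced here, but one of the analyses the abstract mentions, estimating the Hurst index of a signal, can be sketched in a few lines of pure Python. For fractional Brownian motion the standard deviation of increments at lag t scales like t**H, so a log-log fit of increment spread against lag recovers H; ordinary Brownian motion (the cumulative sum of white noise) should give H ≈ 0.5. This is a generic textbook estimator, not the toolbox's implementation.

```python
import math
import random

def hurst_from_increments(series, lags=range(2, 20)):
    """Estimate the Hurst index H from std(increments at lag t) ~ t**H.

    Least-squares fit of log(std of lag-t increments) against log(t);
    the slope of the fitted line is the Hurst estimate.
    """
    xs, ys = [], []
    for lag in lags:
        inc = [series[i + lag] - series[i] for i in range(len(series) - lag)]
        mean = sum(inc) / len(inc)
        std = math.sqrt(sum((v - mean) ** 2 for v in inc) / len(inc))
        xs.append(math.log(lag))
        ys.append(math.log(std))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
# Ordinary Brownian motion: cumulative sum of white noise, true H = 0.5.
noise = [random.gauss(0.0, 1.0) for _ in range(20000)]
walk = [0.0]
for v in noise:
    walk.append(walk[-1] + v)

H = hurst_from_increments(walk)  # close to 0.5
```

White noise itself, by contrast, has lag-independent increment spread and yields an estimate near 0.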

2022 · Daniel Harasim, Martin Alois Rohrmeier

Scales are a fundamental concept of musical practice around the world. They commonly exhibit symmetry properties that are formally studied using cyclic groups in the field of mathematical scale theory. This paper proposes an axiomatic framework for mathematical scale theory, embeds previous research, and presents the theory of maximally even scales and well-formed scales in a uniform and compact manner. All theorems and lemmata are completely proven in a modern and consistent notation. In particular, new simplified proofs of existing theorems such as the equivalence of non-degenerate well-formedness and Myhill's property are presented. This model of musical scales explicitly formalizes and utilizes the cyclic order relation of pitch classes.
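The maximally even scales the paper studies have a well-known closed form, the Clough–Douthett "J-function": the maximally even d-note scale in a c-note chromatic takes the pitch classes floor((i*c + m)/d), with the offset m selecting a translate of the scale. A minimal sketch (the function name is mine, not the paper's):

```python
def maximally_even(c, d, m=0):
    """Pitch classes of a maximally even d-note scale in a c-note chromatic.

    Clough-Douthett J-function: J(i) = floor((i*c + m) / d).
    Different offsets m yield translates of the same scale.
    """
    return [(i * c + m) // d for i in range(d)]

# The diatonic (major) scale is the maximally even 7-in-12 set:
major = maximally_even(12, 7, m=5)      # [0, 2, 4, 5, 7, 9, 11]

# The whole-tone scale is the maximally even 6-in-12 set:
whole_tone = maximally_even(12, 6)      # [0, 2, 4, 6, 8, 10]
```

The construction spreads the d chosen pitch classes as evenly as arithmetic allows around the cyclic group of c pitch classes, which is exactly the symmetry property the paper axiomatizes.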

A plethora of real-world problems consist of a number of agents that interact, learn, cooperate, coordinate, and compete with others in ever more complex environments. Examples include autonomous vehicles, robotic agents, intelligent infrastructure, IoT devices, and so on. As more and more autonomous agents are deployed in the real world, they will bring forth the need for novel algorithms, theory, and tools to enable coordination on a massive scale. In this thesis, we develop such tools to tackle two central challenges in multi-agent coordination research: solving allocation problems, and resource sharing, focusing on solutions that are scalable, practical, and applicable to real-world problems.

In the first part of the thesis we tackle the problem of allocating resources to agents, i.e., solving a weighted matching problem. Real-world matching problems may occur in massively large systems, they are distributed and information-restrictive, and individuals have to reveal their preferences over the possible matches in order to get a high-quality match, which brings forth significant privacy risks. As such, there are three main challenges: complexity, communication, and privacy. Our proposed approach, ALMA, is a practical heuristic designed for real-world, large-scale ($10^6$ agents) applications. It is based on a simple altruistic behavioral convention: agents have a higher probability of backing off from contesting a resource if they have good alternatives, potentially freeing the resource for some agent that does not. ALMA tackles all of the aforementioned challenges: it is decentralized, runs on-device, requires no inter-agent communication, converges in constant time -- under reasonable assumptions -- and provides strong, worst-case privacy guarantees. Moreover, by incorporating learning we can mitigate the loss in social welfare and increase fairness.
Finally, rational agents can use such simple conventions, along with an arbitrary signal from the environment, to learn a correlated equilibrium for accessing a set of resources under high congestion.

In the second part of the thesis we focus on a critical open problem: the question of cooperation in socio-ecological and socio-economical systems, and sustainability in the use of common-pool resources. In recent years, learning agents, especially deep reinforcement learning agents, have become ubiquitous in such systems. Yet, scaling to environments with a large number of agents and low observability continues to be a challenge. In our work, we focus on common-pool resources. Individuals face strong incentives to appropriate, which results in overuse and even the depletion of the resources. Our goal is to apply simple interventions to steer the population to desirable states. We propose a simple, yet powerful and robust technique: allow agents to observe an arbitrary common signal from the environment. The agents learn to couple their policies and avoid depletion in a wider range of settings, while achieving higher social welfare and convergence speed. Finally, we propose a practical approach to computing market prices and allocations via a deep reinforcement learning policymaker agent. Compared to the idealized market-equilibrium outcome -- which cannot always be efficiently computed -- our policymaker is much more flexible, allowing us to tune prices with regard to diverse objectives such as sustainability, resource wastefulness, fairness, and buyers' and sellers' welfare.
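The altruistic back-off convention described in the abstract can be illustrated with a toy simulation. All names, the back-off rule, and the example utilities below are mine, not the thesis's actual ALMA algorithm: on a collision, an agent yields with a probability that grows as its next-best alternative approaches the value of the contested resource.

```python
import random

random.seed(1)

def alma_match(utilities, rounds=500):
    """Toy decentralized matching with an altruistic back-off convention.

    On a collision, an agent is more likely to yield when its next-best
    alternative is almost as good as the contested resource.
    (Illustrative sketch only; not the thesis's exact ALMA rule.)
    """
    assigned = {}                       # agent -> resource
    free = set(range(len(utilities)))
    for _ in range(rounds):
        if not free:
            break
        taken = set(assigned.values())
        # Each free agent ranks the still-available resources.
        prefs = {}
        for a in free:
            options = sorted(
                (r for r in range(len(utilities[a])) if r not in taken),
                key=lambda r: -utilities[a][r],
            )
            if options:
                prefs[a] = options
        # Group agents by the resource they contest this round.
        contest = {}
        for a, opts in prefs.items():
            contest.setdefault(opts[0], []).append(a)
        for r, agents in contest.items():
            stay = []
            for a in agents:
                best = utilities[a][prefs[a][0]]
                alt = utilities[a][prefs[a][1]] if len(prefs[a]) > 1 else 0.0
                # Small loss -> good alternative -> back off more often.
                p_backoff = max(0.1, min(0.9, 1.0 - (best - alt)))
                if len(agents) == 1 or random.random() >= p_backoff:
                    stay.append(a)
            if len(stay) == 1:          # exactly one contender held on
                assigned[stay[0]] = r
                free.discard(stay[0])
    return assigned

utils = [[0.9, 0.4, 0.1],
         [0.8, 0.7, 0.2],
         [0.3, 0.6, 0.9]]
matching = alma_match(utils)
```

Note that the agents never exchange messages: each decision uses only the agent's own utilities and the observation that a resource is contested, which is the property that lets such conventions scale.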

Related lectures (48)