
Publication: Asymptotic scale invariance and its consequences

Abstract

Scale invariance supplemented by the requirement of the absence of new heavy particles may play an important role in addressing the hierarchy problem. We discuss how the Standard Model may become scale invariant at the quantum level above a certain value of the Higgs field value without addition of new degrees of freedom and analyze phenomenological and cosmological consequences of this setup, in particular, possible metastability of the electroweak vacuum and Higgs inflation.
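The metastability mentioned in the abstract traces back to the renormalization-group running of the Higgs self-coupling. As an illustration (this is the standard one-loop Standard Model result, not a formula taken from this publication), the beta function of the quartic coupling λ reads

$$
16\pi^2 \frac{d\lambda}{d\ln\mu} = 24\lambda^2 + 12\lambda y_t^2 - 6 y_t^4 - 3\lambda\left(3g^2 + g'^2\right) + \frac{3}{8}\left[2g^4 + \left(g^2 + g'^2\right)^2\right],
$$

where $y_t$ is the top Yukawa coupling and $g$, $g'$ are the electroweak gauge couplings. The negative $-6 y_t^4$ term dominates at high scales and, for the measured top and Higgs masses, drives λ negative around $10^{10}$–$10^{11}$ GeV, which is why the electroweak vacuum is generally believed to be metastable rather than absolutely stable.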


Related concepts (14)

Related publications (25)

Standard Model

The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks.

Hierarchy problem

In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. There is no scientific consensus on why, for example, the weak force is 10²⁴ times stronger than gravity. A hierarchy problem occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment.
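A back-of-envelope way to see the size of the hierarchy (standard textbook numbers, not values specific to this page): with the Planck mass $M_{\mathrm{Pl}} \approx 1.2 \times 10^{19}$ GeV setting the gravitational scale and the Higgs mass $m_H \approx 125$ GeV setting the weak scale,

$$
\frac{M_{\mathrm{Pl}}^2}{m_H^2} \approx \left(\frac{1.2\times 10^{19}\ \mathrm{GeV}}{125\ \mathrm{GeV}}\right)^2 \approx 10^{34},
$$

so a bare Higgs mass parameter must cancel against quantum corrections to roughly one part in 10³⁴ unless some mechanism protects it.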

Cosmological constant

In cosmology, the cosmological constant (usually denoted by the Greek capital letter lambda: Λ), alternatively called Einstein's cosmological constant, is the constant coefficient of a term that Albert Einstein temporarily added to his field equations of general relativity. He later removed it. Much later it was revived and reinterpreted as the energy density of space, or vacuum energy, that arises in quantum mechanics. It is closely associated with the concept of dark energy.

Effective Field Theories have changed our understanding of Quantum Field Theories. This thesis shows several applications of this powerful tool in the context of the Standard Model and in searches for New Physics.

The thesis starts with a review of the Standard Model and its open questions, followed by an updated and systematic study of models of flavor in the context of Partial Compositeness in Composite Higgs theories. It then addresses the question of how to measure the Wilson coefficients of the Standard Model effective operators at present and future experiments: first with modern Machine Learning techniques applied to angular distributions in diboson production, and then with a study of ElectroWeak radiation at a future Muon Collider and of how to use it to better probe the new-physics parameter space.

The fourth chapter applies Non-Relativistic Effective Theories to the study of exotic mesons in the Standard Model. The two competing interpretations, a molecule formed of two mesons or a compact tetraquark state, and their consequences are studied. In particular, this study is carried out for the X(3872) exotic charmonium, and the consequences of the two accidental tunings of this system are discussed.

The last chapter addresses the problem of baryogenesis from the ElectroWeak phase transition. A new scalar sector is introduced that decouples the physics responsible for the generation of the baryon asymmetry from the weak scale. This helps solve the main problems that ElectroWeak baryogenesis models face, namely large modifications to Higgs physics and the need for large new sources of CP violation.

Effective Field Theories (EFTs) allow a description of the low-energy effects of heavy new physics Beyond the Standard Model (BSM) in terms of higher-dimensional operators among the SM fields. EFTs are not only an elegant and consistent way to describe heavy new physics but also, at the same time, a valuable experimental tool for collider searches. The Standard Model Effective Field Theory naturally parametrizes the space of BSM models, and measuring its interactions is nowadays a substantial part of the theoretical and experimental program at the (HL-)LHC and at future colliders. In this thesis we address the theoretical challenges of this Beyond the Standard Model precision program, following three different paths.

Firstly, we present results towards the so-called high-$p_T$ program at the (HL-)LHC, which aims to measure energy-growing effects of higher-dimensional operators in the tails of kinematic distributions. Concretely, we focus on dilepton production and study the sensitivity to flavor-universal dimension-six operators that interfere with the SM and are enhanced by the energy. We produce theoretical predictions for the SM and the dim-6 EFT operators at NLO-QCD, including one-loop EW logs. Our predictions are based on event reweighting of SM Monte Carlo simulations and allow an easy scan of the multi-dimensional new-physics parameter space on data. Furthermore, we assess the impact of the various sources of theoretical uncertainty and study the projected sensitivity of the (HL-)LHC to the EFT interactions under consideration and to concrete BSM scenarios.

We then turn to future colliders, and in particular to very high energy lepton colliders. In this context we study the potential of such machines, with about 10 TeV center-of-mass energy, to probe Higgs, ElectroWeak and Top physics at 100 TeV via precise measurements of EFT interactions. A peculiar aspect of such energetic ElectroWeak processes is the prominent phenomenon of EW radiation. On one hand, we find that consistent and sufficiently accurate predictions require resummation, which we perform at double-logarithmic order. On the other hand, we show how the study of the radiation pattern can enhance the sensitivity to new physics. We assess our results in Composite Higgs and Top scenarios and in minimal Z' models.

Finally, we move to a top-down perspective and perform a phenomenological study of composite Higgs models with partially composite Standard Model quarks. Starting from maximally symmetric scenarios that realize minimal flavor violation, we test various assumptions for the flavor structure of the strong sector. Among the different models we consider, we find that there is an optimal amount of symmetry that protects against (chromo-)electric dipoles and reduces, at the same time, constraints from other flavor observables.
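The event-reweighting idea mentioned above can be sketched in a few lines. For a single dimension-six Wilson coefficient c, the squared matrix element is exactly quadratic, |M(c)|² = S + c·I + c²·Q, so an SM sample can be reused for any c by per-event weights. The function name and the toy S, I, Q values below are illustrative assumptions, not the thesis's actual NLO pipeline; only the quadratic structure is the generic feature being demonstrated.

```python
import numpy as np

def eft_weights(s_sm, i_int, q_bsm, c):
    """Per-event reweighting factors for one dimension-six Wilson coefficient c.

    The squared amplitude is quadratic in c:
        |M(c)|^2 = S + c*I + c^2*Q
    so events generated from the SM (rate proportional to S) can be reused
    for any value of c by multiplying each event by |M(c)|^2 / S.
    """
    return (s_sm + c * i_int + c**2 * q_bsm) / s_sm

# Toy example with hypothetical per-event matrix-element pieces:
rng = np.random.default_rng(0)
S = rng.uniform(1.0, 2.0, size=1000)  # SM |M|^2 per event
I = 0.1 * S                           # SM-EFT interference piece
Q = 0.01 * S                          # pure-EFT quadratic piece

w = eft_weights(S, I, Q, c=2.0)       # one weight per event

# c = 0 reproduces the SM sample exactly (all weights equal to 1):
assert np.allclose(eft_weights(S, I, Q, c=0.0), 1.0)
```

Because the c-dependence factors out analytically, a scan over many values of c (or over several coefficients, with one interference and one quadratic piece per pair) costs only reweighting, not regeneration of the Monte Carlo sample.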

Frédéric Courbin, Richard Irving Anderson, Julien Lesgourgues, Mikhail Ivanov, Fabio Finelli, Adriano Agnello, Jie Wang, Florian Niedermann, Emre Ozulker, Melissa Joseph, Suresh Kumar, Hsin-Yu Chen

The standard Lambda Cold Dark Matter (ΛCDM) cosmological model provides a good description of a wide range of astrophysical and cosmological data. However, there are a few big open questions that make the standard model look like an approximation to a more realistic scenario yet to be found. In this paper, we list a few important goals that need to be addressed in the next decade, taking into account the current discordances between the different cosmological probes, such as the disagreement in the value of the Hubble constant H₀, the σ₈–S₈ tension, and other less statistically significant anomalies. While these discordances can still be in part the result of systematic errors, their persistence after several years of accurate analysis strongly hints at cracks in the standard cosmological scenario and the necessity for new physics or generalisations beyond the standard model.

In this paper, we focus on the 5.0σ tension between the Planck CMB estimate of the Hubble constant H₀ and the SH0ES collaboration measurements. After showing the H₀ evaluations made by different teams using different methods and geometric calibrations, we list a few interesting new physics models that could alleviate this tension and discuss how the next decade's experiments will be crucial. Moreover, we focus on the tension of the Planck CMB data with weak lensing measurements and redshift surveys, about the value of the matter energy density Ω_m and the amplitude or rate of the growth of structure (σ₈, fσ₈). We list a few interesting models proposed for alleviating this tension, and we discuss the importance of trying to fit a full array of data with a single model and not just one parameter at a time.

Additionally, we present a wide range of other less discussed anomalies at a statistical significance level lower than the H₀ and S₈ tensions which may also constitute hints towards new physics, and we discuss possible generic theoretical approaches that can collectively explain the non-standard nature of these signals. Finally, we give an overview of upgraded experiments and next-generation space missions and facilities on Earth that will be of crucial importance to address all these open questions. © 2022 The Author(s). Published by Elsevier B.V.
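A rough illustration of where a "5σ tension" figure comes from: the naive Gaussian combination below is a sketch, not the collaborations' full analysis, using the published central values H₀ = 67.4 ± 0.5 km/s/Mpc (Planck 2018) and H₀ = 73.04 ± 1.04 km/s/Mpc (SH0ES 2022).

```python
import math

def gaussian_tension(x1, err1, x2, err2):
    """Number of standard deviations separating two independent
    Gaussian measurements, combining errors in quadrature."""
    return abs(x1 - x2) / math.hypot(err1, err2)

planck = (67.4, 0.5)    # CMB-inferred H0, km/s/Mpc (Planck 2018)
shoes = (73.04, 1.04)   # Cepheid-calibrated SNe Ia H0, km/s/Mpc (SH0ES 2022)

t = gaussian_tension(*planck, *shoes)
print(f"H0 tension: {t:.1f} sigma")  # roughly 5 sigma
```

The exact significance quoted in the literature varies slightly with the datasets and statistical treatment chosen, which is why the naive number here differs marginally from the 5.0σ quoted in the abstract.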