# Learning from radiation at a very high energy lepton collider

Siyu Chen, Alfredo Glioti, Riccardo Rattazzi, Lorenzo Ricci, Andrea Wulzer

*Springer, 2022*

Journal paper


Abstract

We study the potential of lepton collisions with about 10 TeV center of mass energy to probe Electroweak, Higgs and Top short-distance physics at the 100 TeV scale, pointing out the interplay with the long-distance (100 GeV) phenomenon of Electroweak radiation. On one hand, we find that sufficiently accurate theoretical predictions require the resummed inclusion of radiation effects, which we perform at the double logarithmic order. On the other hand, we notice that short-distance physics does influence the emission of Electroweak radiation. Therefore the investigation of the radiation pattern can enhance the sensitivity to new short-distance physical laws. We illustrate these aspects by studying Effective Field Theory contact interactions in di-fermion and di-boson production, and comparing cross-section measurements that require or that exclude the emission of massive Electroweak bosons. The combination of the two types of measurements is found to enhance the sensitivity to the new interactions. Based on these results, we perform sensitivity projections to Higgs and Top Compositeness and to minimal Z' new physics scenarios at future muon colliders.
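For orientation, the "double logarithmic order" resummation mentioned in the abstract refers to Sudakov-type electroweak logarithms. As a rough sketch (illustrative only, not the paper's actual formula), the exclusive cross section with no massive vector-boson emission from a hard leg is suppressed as:

```latex
% Schematic double-logarithmic (Sudakov) suppression of the exclusive
% (no W/Z emission) cross section; C is an order-one group-theory factor
% and \alpha_2 the SU(2) gauge coupling. Illustrative only.
\sigma_{\mathrm{excl}} \;\sim\; \sigma_{\mathrm{hard}}\,
\exp\!\left[ -\,\frac{\alpha_2\, C}{4\pi}\, \log^2\!\frac{s}{m_W^2} \right]
```

At √s ≈ 10 TeV, log²(s/m_W²) ≈ 90, so the exponent is an O(10%) effect or larger per leg, which is why fixed-order predictions become insufficient and resummation is needed.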



Related concepts (22)

Theoretical physics

Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which probes these phenomena with experimental tools.

Electroweak interaction

In particle physics, the electroweak interaction or electroweak force is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction.

Mass–energy equivalence

In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement.

Related publications (46)


The LHCb experiment (Large Hadron Collider beauty) is one of the four experiments under construction at the LHC (Large Hadron Collider) at CERN near Geneva. It is planned to start in 2007, and its goal is the study of b-quark physics. The LHC is a circular accelerator in which protons collide at a center-of-mass energy of √s = 14 TeV. This generates a large number of high-energy bb pairs, which are predominantly produced in the same forward cone. The LHCb detector is therefore a forward single-arm spectrometer, designed to exploit the large bb production cross section (σbb ~ 500 μb) and to perform precise measurements of CP violation in b-hadron decays.

One of the current greatest challenges in high-energy physics is the discovery of the Higgs boson, which is responsible for the generation of the Standard Model particle masses through the spontaneous symmetry breaking process. The Higgs mass is not known and cannot be predicted by the theory; however, recent results from LEP at CERN have shown that mH0 > 114 GeV/c2. Below ~ 150 GeV/c2 the Higgs decay into two b-quarks, H0 → bb, dominates. The two quarks, emitted back-to-back in the H0 rest frame, fragment and hadronize into jets containing b-hadrons. The aim of this thesis is to assess the feasibility of discovering an intermediate-mass Higgs boson at LHCb by exploiting the detector's sensitivity to b-hadrons in order to reconstruct these jets with jet reconstruction algorithms. The study focuses on the mechanisms in which the Higgs boson is produced in association with a gauge boson decaying leptonically, H0 + W± → bb + ℓνℓ and H0 + Z0 → bb + ℓ+ℓ-, for Higgs masses in the range 100 - 130 GeV/c2. The gauge boson decays produce hard leptons that are often isolated from the b-jets. Hence an isolated lepton with high transverse momentum is required in order to reject the large QCD background.
Several important background channels which also provide two b-quarks and an isolated lepton – such as tt → W+b W-b, Z0 + W± → bb + ℓνℓ, Z0 + Z0 → bb + ℓ+ℓ-, W± + b-jets, Z0 + b-jets and generic bb – are studied in parallel. The idea is to find observables which behave differently for the backgrounds and the Higgs signal, and to exploit these differences in the framework of a neural network in order to discriminate background from signal.

The LHCb experiment requires a high capability to identify b-hadrons despite their very short lifetime, τB ~ 1.5 · 10-12 s. The Vertex Locator (VeLo) is a sub-detector placed around the p-p interaction point which provides accurate measurements of the b-hadron production and decay points by reconstructing secondary vertices. The second part of this thesis is a technical contribution to the development of the VeLo analogue transmission line. It consists of testing several hardware and software methods to improve the analogue transmission between the on-detector part of the readout and the off-detector electronics. Because the ~ 60 m line introduces significant attenuation, several cable and line-driver configurations with frequency and gain compensation are studied in order to obtain the best results in terms of signal-to-noise ratio and channel crosstalk. The different contributions to the noise are also studied, and an estimate of the contribution due to the Beetle pipeline non-uniformity is given in order to determine whether a specific correction is needed or whether it can be suppressed by a standard common-mode noise correction procedure.
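The signal/background discrimination strategy described above can be sketched with a toy example. This is a minimal single-neuron "neural network" (logistic regression) trained on one discriminating observable in which signal events tend towards larger values than background; all names and numbers are illustrative assumptions, not the thesis's actual analysis.

```python
# Toy signal/background discriminant: a single-neuron network trained
# with stochastic gradient descent on the cross-entropy loss.
# Purely illustrative; not the thesis's actual neural network.
import math
import random

def _sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_discriminant(signal, background, epochs=200, lr=0.05):
    """Fit weight w and bias b so that sigmoid(w*x + b) -> 1 for signal
    events and -> 0 for background events."""
    data = [(x, 1.0) for x in signal] + [(x, 0.0) for x in background]
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            p = _sigmoid(w * x + b)   # predicted signal probability
            w -= lr * (p - y) * x     # gradient step on the weight
            b -= lr * (p - y)         # gradient step on the bias
    return w, b

def is_signal_like(x, w, b, cut=0.5):
    """Classify an event by cutting on the network output."""
    return _sigmoid(w * x + b) > cut
```

Selecting events above the cut, and varying the cut to trade signal efficiency against background rejection, crudely mimics the optimization of a multivariate discriminant in a real analysis.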

Effective Field Theories (EFTs) allow a description of the low-energy effects of heavy new physics Beyond the Standard Model (BSM) in terms of higher-dimensional operators among the SM fields. EFTs are not only an elegant and consistent way to describe heavy new physics; they also represent a valuable experimental tool for collider searches. The Standard Model Effective Field Theory naturally parametrizes the space of BSM models, and measuring its interactions is nowadays a substantial part of the theoretical and experimental program at the (HL-)LHC and at future colliders. In this thesis we address the theoretical challenges of this Beyond the Standard Model precision program, following three different paths.

Firstly, we present some results towards the so-called high-$p_T$ program at the (HL-)LHC, which aims to measure energy-growing effects of higher-dimensional operators in the tails of kinematic distributions. Concretely, we focus on dilepton production and study the sensitivity to flavor-universal dimension-six operators that interfere with the SM and are enhanced by the energy. We produce theoretical predictions for the SM and the dim-6 EFT operators at NLO-QCD, including 1-loop EW logs. Our predictions are based on event reweighting of SM Monte Carlo simulations and allow an easy scan of the multi-dimensional new-physics parameter space on data. Furthermore, we assess the impact of the various sources of theoretical uncertainty and study the projected sensitivity of the (HL-)LHC to the EFT interactions under consideration and to concrete BSM scenarios.

We then turn to future colliders, and in particular to very high energy lepton colliders. In this context we study the potential of such machines, with about 10 TeV center-of-mass energy, to probe Higgs, Electroweak and Top physics at 100 TeV via precise measurements of EFT interactions. A peculiar aspect of such energetic electroweak processes is the prominent phenomenon of EW radiation.
On one hand, we find that consistent and sufficiently accurate predictions require resummations, which we perform at double-logarithmic order. On the other hand, we show how the study of the radiation pattern can enhance the sensitivity to new physics. We assess our results in Composite Higgs and Top scenarios and in minimal Z' models.

Finally, we move to a top-down perspective and perform a phenomenological study of composite Higgs models with partially composite Standard Model quarks. Starting from maximally symmetric scenarios that realize minimal flavor violation, we test various assumptions for the flavor structure of the strong sector. Among the different models we consider, we find that there is an optimal amount of symmetry that protects against (chromo-)electric dipoles and, at the same time, reduces constraints from other flavor observables.

High-energy particle physics is going through a crucial moment of its history, one in which it can finally aspire to give a precise answer to some of the fundamental questions it was conceived for. On the one hand, the theoretical picture describing the elementary strong and electroweak interactions below the TeV scale, the Standard Model, has been well consolidated over the decades by the observation and precise characterization of its constituents. On the other hand, the enormous technological potential available nowadays, and the skills accumulated in decades of collider experiments of increasing complexity, make plausible for the first time the possibility of addressing such complicated and conceptually deep questions. The best incarnation of this level of sophistication is the CERN Large Hadron Collider (LHC), the most powerful experimental apparatus ever built, which is designed to shed light on the true nature of fundamental interactions at energies never attained before, and which has already started to open a new era in physics with the recent discovery of the long-sought Higgs boson, a true milestone for human knowledge as well as one of the most important discoveries of the modern epoch. The knowledge that has been and will be reached in these crucial years would of course not be conceivable without a deep interplay between theoretical and experimental efforts. In particular, on the theoretical side, not only are there wide groups of researchers devoted to building possible extensions of the Standard Model, which draw the guidelines of current and future experiments, but there is also a vast community whose research is aimed at the precise prediction of all the physical observables that could be measured at colliders, and at the systematic improvement of the approximations that currently constrain such predictions.
On top of representing the state of the art of our understanding of the properties that regulate elementary-particle interactions and of the formalisms that describe them, the developments of this line of research have an immediate and significant impact on experiments. Firstly, these detailed calculations are the very theoretical predictions against which experimental data are compared, so they are crucial in establishing whether the theories according to which they are performed are valid. Secondly, the signals one wants to extract from data at modern colliders are so tiny and difficult to single out that the experimental searches themselves need to be supplemented by detailed theoretical modelling and simulation. In this respect, high-precision computations play an essential role in all analysis strategies devised by experimental collaborations, and in many aspects of detector calibration. It is clear that, for theoretical computations to be useful in experimental analyses and simulations, the predictions they yield should be reliable for all possible configurations of the particles to be detected. Thus the key feature of present theoretical collider physics is not so much the computation of observables with high precision in a limited region of phase space, but the capability of combining ('matching') in a consistent way different approaches, each of which is reliable in a particular kinematic regime. From this perspective, matching techniques represent one of the most promising and successful theoretical frameworks currently available, and are considered eminently valuable tools on both the theoretical and the experimental sides.
Matched computations are based on a perturbation-theory approach for the description of configurations in which the scattering products are well separated and/or highly energetic: the precision currently attained for all but a few of the relevant processes within the Standard Model is next-to-leading order (NLO) in powers of the strong quantum-chromodynamics (QCD) coupling constant αS. For configurations in which the particles emerging from the collisions are close to each other and/or have low energy, the perturbation-theory expansion can be shown to break down, and a complementary method, such as the parton shower Monte Carlo (PSMC), must instead be employed. The task of matching is precisely that of giving a prediction that interpolates between the two approaches in a smooth and theoretically consistent way. This thesis is focused on MC@NLO, a high-energy physics formalism capable of matching computations performed at NLO in QCD to PSMC generators, in such a way as to retain the virtues of both approaches while discarding their mutual deficiencies. In particular, the thesis reports on the work successfully achieved in extending MC@NLO from its original numerical implementation, tailored to the HERWIG PSMC, to the other main PSMC programs currently employed by experimental collaborations, PYTHIA and Herwig++, confirming the advocated universality of the method. Differences among the various realizations are explained in detail, both at the formal level and through the simulation of various Standard-Model reactions.
Moreover, we describe how the MC@NLO framework has been developed so as to render its implementation automatic with respect to the physics process one is about to simulate. Beyond yielding an enormous increase in its potential for present and future collider phenomenology, and upgrading the standard of precision for high-energy computations to the NLO+PSMC level, this development allows for the first time the application of the MC@NLO formalism to a huge number of relevant and highly complicated reactions, through an implementation which is also easily usable by people well outside the community of experts in QCD calculations. As an example of this new version, called aMC@NLO, recent results are presented for complex scattering processes involving four or five final-state particles. Finally, possible extensions of the framework to theories beyond the Standard Model, such as the supersymmetric version of QCD, are briefly introduced.
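The matching strategy described above can be summarized very schematically. In the standard MC@NLO construction (written here in generic notation, as an illustration rather than the thesis's exact formulas), the generating cross section splits into "S" events, showered from Born kinematics, and "H" events, showered from real-emission kinematics:

```latex
% B, V, R: Born, virtual and real-emission matrix elements; MC is the
% shower approximation to R. I^{(S)} and I^{(H)} denote the shower
% acting on Born and real-emission kinematics, respectively.
d\sigma_{\mathrm{MC@NLO}}
  = d\Phi_B \Big[ B + V + \int d\Phi_r\, MC \Big]\, I^{(S)}_{\mathrm{MC}}
  + d\Phi_B\, d\Phi_r\, \big[ R - MC \big]\, I^{(H)}_{\mathrm{MC}}
```

Because MC reproduces R in the soft and collinear limits, the H-event integrand is finite, while the shower resums the low-energy region; events in regions where R < MC are generated with negative weight.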