Publication: Perturbative Techniques in Hadron Collider Physics

Abstract

High-energy particle physics is going through a crucial moment of its history, one in which it can finally aspire to give a precise answer to some of the fundamental questions it was conceived for. On the one hand, the theoretical picture describing the elementary strong and electroweak interactions below the TeV scale, the Standard Model, has been well consolidated over the decades by the observation and the precise characterization of its constituents. On the other hand, the enormous technological capabilities available today, and the skills accumulated over decades of collider experiments of increasing complexity, make it plausible for the first time to address such complicated and conceptually deep questions. The best incarnation of this level of sophistication is the CERN Large Hadron Collider (LHC), the most powerful experimental apparatus ever built, designed to shed light on the true nature of fundamental interactions at energies never attained before; it has already started to open a new era in physics with the recent discovery of the longed-for Higgs boson, a true milestone for human knowledge as well as one of the most important discoveries of the modern epoch.

The knowledge that has been and will be gained in these crucial years would of course not be conceivable without a deep interplay between theoretical and experimental efforts. In particular, on the theoretical side, not only are there large groups of researchers devoted to building possible extensions of the Standard Model, which set the guidelines of current and future experiments, but there is also a vast community whose research is aimed at the precise prediction of all the physical observables that could be measured at colliders, and at the systematic improvement of the approximations that currently limit such predictions. On top of representing the state of the art of our understanding of the properties that govern elementary-particle interactions and of the formalisms that describe them, the developments of this line of research have an immediate and significant impact on experiments. Firstly, these detailed calculations are the very theoretical predictions against which experimental data are compared, so they are crucial in establishing whether or not the theories according to which they are performed are valid. Secondly, the signals one wants to extract from data at modern colliders are so tiny and difficult to single out that the experimental searches themselves need to be supplemented by detailed theoretical modelling and simulation. In this respect, high-precision computations play an essential role in all analysis strategies devised by experimental collaborations, and in many aspects of detector calibration. It is clear that, for theoretical computations to be useful in experimental analyses and simulations, the predictions they yield should be reliable for all possible configurations of the particles to be detected. Thus the key requirement for present-day theoretical collider physics is not so much the computation of observables with high precision in a limited region of phase space, but rather the capability of combining (‘matching’) in a consistent way different approaches, each of which is reliable in a particular kinematic regime.
With this perspective, matching techniques represent one of the most promising and successful theoretical frameworks currently available, and are regarded as eminently valuable tools on both the theoretical and the experimental sides. Matched computations rely on a perturbation-theory approach for the description of configurations in which the scattering products are well separated and/or highly energetic: in particular, the precision currently attained for all but a few of the relevant processes within the Standard Model is the next-to-leading order (NLO) in powers of the strong quantum-chromodynamics (QCD) coupling constant αS. For the description of configurations in which the particles emerging from the collision are close to each other and/or have low energy, the perturbative expansion can be shown to break down, and a complementary method, such as the parton shower Monte Carlo (PSMC), has to be employed instead. The task of matching is precisely that of giving a prediction that interpolates between the two approaches in a smooth and theoretically consistent way.

This thesis is focused on MC@NLO, a high-energy-physics formalism capable of matching computations performed at the NLO in QCD to PSMC generators, in such a way as to retain the virtues of both approaches while discarding their mutual deficiencies. In particular, the thesis reports on the work successfully carried out to extend MC@NLO from its original numerical implementation, tailored to the HERWIG PSMC, to the other main PSMC programs currently employed by experimental collaborations, PYTHIA and Herwig++, confirming the advocated universality of the method. Differences among the various realizations are explained in detail, both at the formal level and through the simulation of various Standard-Model reactions. Moreover, we describe how the MC@NLO framework has been developed so as to make its implementation automatic with respect to the physics process being simulated: beyond yielding an enormous increase in its potential for present and future collider phenomenology, and upgrading the standard of precision for high-energy computations to the NLO+PSMC level, this development allows for the first time the application of the MC@NLO formalism to a large number of relevant and highly complicated reactions, through an implementation that is also easily usable by people well outside the community of experts in QCD calculations. As an example of this new version, called aMC@NLO, recent results are presented for complex scattering processes involving four or five final-state particles. Finally, possible extensions of the framework to theories beyond the Standard Model, such as the supersymmetric version of QCD, are briefly introduced.
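To make the structure described above more tangible, here is a deliberately simplified, one-dimensional numerical sketch of the MC@NLO-style separation into S-events and H-events. It is written for this page and is not taken from the thesis or from the MC@NLO/aMC@NLO code; the functions B, V, R and MC and all numbers are toy assumptions, chosen only so that the script runs and the cancellations are visible.

```python
# Toy illustration of the MC@NLO idea (NOT the actual MC@NLO/aMC@NLO code):
# events are split into S-events, whose weight collects the Born, virtual and
# integrated shower-subtraction pieces, and H-events, whose weight is the real
# emission minus the parton-shower approximation of it, so the hard tail is
# NLO-accurate while the soft region is left to the shower.
import numpy as np

rng = np.random.default_rng(1)

# Toy "matrix elements" as functions of a radiation variable x in (0, 1];
# x -> 0 mimics the soft/collinear region.  All numbers are made up.
B = 1.0                                    # Born contribution
V = 0.12                                   # finite part of the virtual correction
def R(x):  return 0.3 * (1.0 + x) / np.sqrt(x)   # real emission, peaked at small x
def MC(x): return 0.3 / np.sqrt(x)               # shower approximation of R near x -> 0

n = 200_000
x = 1.0 - rng.random(n)                    # uniform on (0, 1], avoids x = 0

# S contribution: Born-like kinematics (a single point in this toy), weight
# B + V + <MC>, where <MC> estimates the integrated shower-subtraction term.
sigma_S = B + V + MC(x).mean()

# H-events: real-emission kinematics, weight R - MC (finite as x -> 0).
w_H = R(x) - MC(x)
sigma_H = w_H.mean()

print(f"sigma_S = {sigma_S:.4f}, sigma_H = {sigma_H:.4f}")
print(f"total   = {sigma_S + sigma_H:.4f}  (equals B + V + <R> = {B + V + R(x).mean():.4f})")
```

In the real formalism the soft/collinear divergences of the real emission are cancelled between the virtual correction and the integrated shower-subtraction terms; in this toy everything is finite by construction, and the point is simply that the H-event weight R − MC stays finite as x → 0 while the sum of the S and H contributions reproduces the full NLO result B + V + ⟨R⟩.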

Related concepts (56)

Higgs boson

The Higgs boson, sometimes called the Higgs particle, is an elementary particle in the Standard Model of particle physics produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory.

Large Hadron Collider

The Large Hadron Collider (LHC) is the world's largest and highest-energy particle collider. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories.

Parton (particle physics)

In particle physics, the parton model is a model of hadrons, such as protons and neutrons, proposed by Richard Feynman. It is useful for interpreting the cascades of radiation (a parton shower) produced from quantum chromodynamics (QCD) processes and interactions in high-energy particle collisions.

Related publications (106)

The Large Hadron Collider (LHC) has been producing pp collisions at 7 and 8 TeV since 2010 and promises a new era of discoveries in particle physics. One of its experiments, the Large Hadron Collider beauty (LHCb) experiment, was constructed to study CP violation in the B-meson system. In addition to B physics, new physics beyond the Standard Model can also be searched for at this single-arm forward spectrometer. With its different sub-detectors and the high resolution of its tracking system, the LHCb detector has the ability to search for heavy, long-lived, charged particles, which are predicted by extensions of the Standard Model. One of these extensions, minimal Gauge Mediated Supersymmetry Breaking (mGMSB), proposes such a particle, named the stau (τ̃), the bosonic SUSY counterpart of the heavy lepton tau (τ). The theory proposes that staus may be pair-produced in pp collisions or in the decays of heavier particles, and that they interact with the atoms of the medium only electromagnetically, like muons. Therefore, we expect that at the energy of the LHC these particles can be produced if they exist, and that we have a chance to discover them at LHCb, as well as at the other LHC experiments.

This thesis is dedicated to the search for stau pairs produced in pp collisions at centre-of-mass energies √s = 7 and 8 TeV in the LHCb detector. For this purpose, we generated stau pairs with seven different particle masses ranging from 124 to 309 GeV/c² and simulated their path through the LHCb detector, as well as the muon background from the decays Z⁰, γ* → μ⁺μ⁻. Based on the results of the simulation, a set of cuts is then defined to select the stau pairs. Some muon pairs at high energies will also pass the selection cuts; thus, to separate stau pairs from muon pairs, a neural-network technique has been used. A first neural network (NN1) has been trained to distinguish stau tracks from muon tracks using the signals they leave in the sub-detectors: the VELO silicon detector, the electromagnetic calorimeter, the hadron calorimeter and the RICH detectors. Two methods to select stau pairs have then been developed: the first is based on the product of the two NN1 responses for the two tracks; the second employs a second neural network that separates stau pairs from muon pairs using this product of the two NN1 responses together with the invariant mass of the pair. Finally, a region favourable for finding staus has been defined, and the expected numbers of stau and muon pairs in this region have been evaluated. The training of the neural networks has been performed on Monte Carlo variables, and the trained networks have then been used to classify the data.

The data used in this work were collected by the LHCb experiment in 2011 and 2012 and correspond to integrated luminosities of 1 fb⁻¹ at √s = 7 TeV and 2 fb⁻¹ at √s = 8 TeV. No significant excess of signal has been observed. Upper limits at 95% CL on the cross section for stau pair production in pp collisions at √s = 7 and 8 TeV have been computed using the profile likelihood method, which is derived from the well-known Feldman-Cousins method.
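As a schematic illustration of the two-step neural-network selection described above (and not the analysis code used in the thesis), the following sketch uses synthetic data and scikit-learn multilayer perceptrons as stand-ins: a first network classifies single tracks from toy detector features, and a second network classifies track pairs from the product of the two first-network responses together with a toy invariant mass. All feature names, masses and sample sizes are illustrative assumptions.

```python
# Schematic reconstruction of the two-step selection (synthetic data only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_tracks(n, is_stau):
    """Toy per-track detector features (stand-ins for VELO, ECAL, HCAL, RICH)."""
    shift = 1.0 if is_stau else 0.0
    return rng.normal(loc=shift, scale=1.0, size=(n, 4))

# --- NN1: classify single tracks as stau-like vs muon-like -------------------
X1 = np.vstack([make_tracks(5000, True), make_tracks(5000, False)])
y1 = np.array([1] * 5000 + [0] * 5000)
nn1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X1, y1)

# --- NN2: classify track *pairs* from the product of the two NN1 responses ---
# together with the pair invariant mass (toy Gaussian mass values in GeV/c^2).
def make_pairs(n, is_stau):
    t1, t2 = make_tracks(n, is_stau), make_tracks(n, is_stau)
    prod = nn1.predict_proba(t1)[:, 1] * nn1.predict_proba(t2)[:, 1]
    mass = rng.normal(300.0 if is_stau else 91.0, 30.0, size=n)
    return np.column_stack([prod, mass])

X2 = np.vstack([make_pairs(2000, True), make_pairs(2000, False)])
y2 = np.array([1] * 2000 + [0] * 2000)
nn2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(X2, y2)

print("pair-level separation (training accuracy):", nn2.score(X2, y2))
```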

Currently, the best theoretical description of fundamental matter and its gravitational interaction is given by the Standard Model (SM) of particle physics and Einstein's theory of General Relativity (GR). These theories contain a number of seemingly unrelated scales. While Newton's gravitational constant and the mass of the Higgs boson are parameters in the classical action, the masses of the other elementary particles are due to electroweak symmetry breaking. Yet other scales, like ΛQCD associated with the strong interaction, only appear after the quantization of the theory. We re-evaluate the idea that the fundamental theory of nature may contain no fixed scales and that all observed scales could have a common origin in the spontaneous breakdown of exact scale invariance. To this end, we consider a few minimal scale-invariant extensions of GR and the SM, focusing especially on their cosmological phenomenology.

In the simplest model considered, scale invariance is achieved through the introduction of a dilaton field. We find that for a large class of potentials, scale invariance is spontaneously broken, leading to induced scales at the classical level. The dilaton is exactly massless and practically decouples from all SM fields. The dynamical breakdown of scale invariance automatically provides a mechanism for inflation. Despite exact scale invariance, the theory generally contains a cosmological constant, or, put in other words, flat spacetime need not be a solution. We next replace standard gravity by Unimodular Gravity (UG). This results in the appearance of an arbitrary integration constant in the equations of motion, inducing a run-away potential for the dilaton. As a consequence, the dilaton can play the role of a dynamical dark-energy component. The cosmological phenomenology of the model combining scale invariance and unimodular gravity is studied in detail. We find that the equation of state of the dilaton condensate has to be very close to that of a cosmological constant. If the spacetime symmetry group of the gravitational action is reduced from the group of all diffeomorphisms (Diff) to the subgroup of transverse diffeomorphisms (TDiff), the metric in general contains a propagating scalar degree of freedom. We show that the replacement of Diff by TDiff makes it possible to construct a scale-invariant theory of gravity and particle physics in which the dilaton appears as part of the metric. We find the conditions under which such a theory is a viable description of particle physics and, in particular, reproduces the SM phenomenology. The minimal theory with scale invariance and UG is found to be a particular case of a theory with scale and TDiff invariance. Moreover, cosmological solutions in models based on scale and TDiff invariance turn out to be generically similar to the solutions of the model with UG.

In usual quantum field theories, scale invariance is anomalous. This might suggest that results based on classical scale invariance are necessarily spoiled by quantum corrections. We show that this conclusion is not true. Namely, we propose a new renormalization scheme which makes it possible to construct a class of quantum field theories that are scale-invariant to all orders of perturbation theory and in which the scale symmetry is spontaneously broken. In this type of theory, all scales, including those related to dimensional transmutation, like ΛQCD, appear as a consequence of the spontaneous breakdown of the scale symmetry. The proposed theories are not renormalizable. Nonetheless, they are valid effective theories below a field-dependent cut-off scale. If the scale-invariant renormalization scheme is applied to the minimal scale-invariant extensions of GR and the SM presented here, the goal of deriving all scales from a common origin, the spontaneous breaking of scale invariance, is achieved.
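As a hedged sketch of what a scale-invariant renormalization prescription can look like (based on the general idea of scale-invariant dimensional regularization in the literature, not quoted from the thesis), the fixed 't Hooft scale μ of dimensional regularization is traded for a power of the dilaton χ:

```latex
% Illustrative sketch only, not the thesis' exact prescription.
% In d = 4 - 2*epsilon dimensions a scalar field has mass dimension 1 - epsilon,
% so a dilaton power can carry the dimension usually supplied by the fixed scale mu.
\begin{align}
  \text{standard scheme:} \qquad
    & \lambda\,\mu^{2\epsilon}\,\phi^4 , \\
  \text{scale-invariant scheme:} \qquad
    & \lambda\,\chi^{\frac{2\epsilon}{1-\epsilon}}\,\phi^4 .
\end{align}
```

Since [χ] = 1 − ε, the second operator has dimension d for any ε, so scale invariance is preserved at the quantum level once μ is replaced by the dilaton; the price is non-renormalizable, χ-suppressed counterterms, consistent with the statement above that the resulting theories are effective theories below a field-dependent cut-off.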

Effective Field Theories (EFTs) allow a description of the low-energy effects of heavy new physics Beyond the Standard Model (BSM) in terms of higher-dimensional operators among the SM fields. EFTs are not only an elegant and consistent way to describe heavy new physics; they also represent a valuable experimental tool for collider searches. The Standard Model Effective Field Theory naturally parametrizes the space of BSM models, and measuring its interactions is nowadays a substantial part of the theoretical and experimental program at the (HL-)LHC and at future colliders. In this thesis we address the theoretical challenges of this Beyond-the-Standard-Model precision program, following three different paths.

Firstly, we present results towards the so-called high-p_T program at the (HL-)LHC, which aims to measure the energy-growing effects of higher-dimensional operators in the tails of kinematic distributions. Concretely, we focus on dilepton production and study the sensitivity to flavor-universal dimension-six operators that interfere with the SM and are enhanced by the energy. We produce theoretical predictions for the SM and the dimension-six EFT operators at NLO QCD, including one-loop EW logarithms. Our predictions are based on event reweighting of SM Monte Carlo simulations and allow an easy scan of the multi-dimensional new-physics parameter space on data. Furthermore, we assess the impact of the various sources of theoretical uncertainty and study the projected sensitivity of the (HL-)LHC to the EFT interactions under consideration and to concrete BSM scenarios.

We then turn to future colliders, and in particular to very-high-energy lepton colliders. In this context we study the potential of such machines, with about 10 TeV center-of-mass energy, to probe Higgs, electroweak and top physics at the 100 TeV scale via precise measurements of EFT interactions. A peculiar aspect of such energetic electroweak processes is the prominent phenomenon of electroweak radiation. On the one hand, we find that consistent and sufficiently accurate predictions require resummations, which we perform at double-logarithmic order. On the other hand, we show how the study of the radiation pattern can enhance the sensitivity to new physics. We assess our results in composite Higgs and top scenarios and in minimal Z' models.

Finally, we move to a top-down perspective and perform a phenomenological study of composite Higgs models with partially composite Standard Model quarks. Starting from maximally symmetric scenarios that realize minimal flavor violation, we test various assumptions for the flavor structure of the strong sector. Among the different models we consider, we find that there is an optimal amount of symmetry that protects against (chromo-)electric dipoles and reduces, at the same time, the constraints from other flavor observables.
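To illustrate the event-reweighting strategy mentioned above, here is a minimal sketch with made-up toy amplitudes, not the actual generator-level code of the thesis: for each stored SM event one keeps the linear (interference) and quadratic dimension-six ratios, so that any value of a Wilson coefficient c can be scanned afterwards by reweighting instead of regenerating events.

```python
# Toy illustration of EFT event reweighting (all amplitudes and numbers made up).
import numpy as np

rng = np.random.default_rng(2)
n_events = 100_000

# Toy per-event dilepton invariant masses (TeV) and real "amplitudes": the
# contact-operator piece grows with energy relative to the SM one.
m_ll   = rng.uniform(0.3, 3.0, n_events)
A_sm   = 1.0 / m_ll**2                 # stand-in for the SM amplitude
A_dim6 = 0.05 * np.ones(n_events)      # stand-in for the dimension-six amplitude

w_sm   = A_sm**2                            # SM event weight
r_int  = 2.0 * A_dim6 * A_sm / A_sm**2      # linear (interference) ratio per event
r_quad = A_dim6**2 / A_sm**2                # quadratic ratio per event

def reweighted_tail(c, threshold=2.0):
    """Expected yield above an invariant-mass threshold for Wilson coefficient c."""
    w = w_sm * (1.0 + c * r_int + c**2 * r_quad)
    return w[m_ll > threshold].sum()

for c in (0.0, 0.05, 0.1):
    print(f"c = {c:>4}: tail yield = {reweighted_tail(c):.3f}")
```

The design point is that the stored ratios depend only on the SM sample, so scanning the Wilson-coefficient space is a cheap post-processing step rather than a new simulation for each parameter point.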