# Weak gravitational lensing

Summary

While the presence of any mass bends the path of light passing near it, this effect rarely produces the giant arcs and multiple images associated with strong gravitational lensing. Most lines of sight in the universe are thoroughly in the weak lensing regime, in which the deflection is impossible to detect in a single background source. However, even in these cases, the presence of the foreground mass can be detected, by way of a systematic alignment of background sources around the lensing mass. Weak gravitational lensing is thus an intrinsically statistical measurement, but it provides a way to measure the masses of astronomical objects without requiring assumptions about their composition or dynamical state.
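The statistical nature of the measurement can be illustrated with a short simulation (a hypothetical sketch with made-up numbers, not code from any EPFL project): intrinsic galaxy ellipticities are randomly oriented and average to zero, while a coherent shear adds to every galaxy and survives in the sample mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# To first order in the weak regime, the observed complex ellipticity is
# e_obs ≈ e_int + g: intrinsic shapes are random, the shear g is coherent.
n_gal = 100_000
g_true = 0.02 + 0.01j                 # assumed shear, |g| << 1

phi = rng.uniform(0.0, np.pi, n_gal)  # random position angles
e_int = 0.25 * np.exp(2j * phi)       # intrinsic ellipticity dispersion ~0.25

e_obs = e_int + g_true                # weak-lensing approximation
g_hat = e_obs.mean()                  # statistical shear estimate

# The noise on the mean scales as sigma_e / sqrt(N) ≈ 0.0008 here, so a
# single galaxy could never reveal g, but averaging 10^5 galaxies can.
print(abs(g_hat - g_true))
```

A single galaxy's ellipticity is dominated by its intrinsic shape, which is why the estimate only becomes meaningful after averaging over many background sources.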

Methodology

Gravitational lensing acts as a coordinate transformation that distorts the images of background objects (usually galaxies) near a foreground mass. The transformation can be split into two terms: the convergence and the shear. The convergence term magnifies the background objects by increasing their apparent size, while the shear term stretches them tangentially around the foreground mass.
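In the standard notation (convergence κ, shear components γ₁, γ₂), the first-order lens mapping between source and image coordinates is governed by the Jacobian

```latex
% Isotropic magnification (convergence) plus anisotropic stretching (shear):
A = (1-\kappa)
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
-
\begin{pmatrix} \gamma_1 & \gamma_2 \\ \gamma_2 & -\gamma_1 \end{pmatrix}
=
\begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix}
```

The decomposition makes the two effects explicit: the (1−κ) term rescales images isotropically, while the traceless shear matrix turns circular sources into ellipses.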

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related concepts (24)

Lambda-CDM model

The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parameterization of the Big Bang cosmological model in which the universe contains three major components: first, a cosmological constant, denoted by Λ, associated with dark energy; second, the postulated cold dark matter; and third, ordinary matter.

Dark matter

Dark matter is a hypothetical form of matter thought to account for approximately 85% of the matter in the universe. Dark matter is called "dark" because it does not appear to interact with the electromagnetic field.

Gravitational lens

A gravitational lens is a distribution of matter (such as a cluster of galaxies) or a point particle between a distant light source and an observer that is capable of bending the light from the source as the light travels toward the observer.

Related courses (4)

PHYS-402: Astrophysics IV : observational cosmology

Cosmology is the study of the structure and evolution of the universe as a whole. This course describes the principal themes of cosmology, as seen from the point of view of observations.

PHYS-401: Astrophysics III : stellar and galactic dynamics

The aim of this course is to acquire basic knowledge of specific dynamical phenomena related to the origin, equilibrium, and evolution of star clusters, galaxies, and galaxy clusters.

PHYS-439: Introduction to astroparticle physics

We present the role of particle physics in cosmology and in the description of astrophysical phenomena. We also present the methods and technologies for the observation of cosmic particles.


Related publications

The purpose of this thesis is to develop, test, and characterize different models that tackle the problem of measuring galaxy shapes in interferometric observations. Shape measurement is a tool for estimating the underlying shear due to weak gravitational lensing by large-scale foreground matter distributions. The enormous recent progress in radio interferometric imaging, with projects such as MeerKAT, ASKAP, and, in the future, the SKA, motivates the development of advanced algorithms that use radio observations for this task. The main disadvantage of most of these algorithms is that the parameters of the proposed models have to be estimated in advance, assuming prior knowledge of the form of the objects. Our motivation is to develop and evaluate frameworks that make accurate measurements without requiring significant prior information on this form.

To achieve this goal, we follow two implementation directions. The first builds on an existing approach that decomposes each object over a dictionary of shapelet functions. Our proposal goes further by estimating the key size parameter of this decomposition from a model built on advanced regularization. More specifically, we create an over-redundant dictionary formed as the concatenation of several groups of orthogonal shapelet basis functions with different parameters, and we use structured-sparsity penalties to estimate the size parameter of the best group. As an alternative strategy, we form an algorithm that employs multi-resolution least-squares analysis, which identifies the size value that minimizes the relative residual of the fit.

The second path, instead of estimating shapes directly from radio interferometric data, reconstructs images from the visibilities and measures the ellipticity of the objects in the resulting images. For this purpose, we adopt a previously developed sparsity-averaging analysis algorithm that restores the intensity image from the visibilities with high precision. We also implement a similar framework that uses the CLEAN algorithm for image reconstruction, which allows us to compare the two options and evaluate the quality of the resulting measurements.

All our models are tested on a collection of simulated data that include objects of different profile types, ellipticities, sizes, orientations, and positions in the field of view. The visibilities are generated using either a simulated Gaussian coverage or a realistic SKA-like one. Additionally, we present an initial study of the same algorithms on real objects from the COSMOS survey.

There is now very strong evidence that our Universe is undergoing a period of accelerated expansion, as if it were under the influence of a gravitationally repulsive "dark energy" component. Furthermore, most of the mass of the Universe seems to be in the form of non-luminous matter, the so-called "dark matter". Together, these "dark" components, whose nature remains unknown today, represent around 96% of the matter-energy budget of the Universe. Unraveling the true nature of dark energy and dark matter has thus become one of the primary goals of present-day cosmology.

Weak gravitational lensing, or weak lensing for short, is the effect whereby light emitted by distant galaxies is slightly deflected by the tidal gravitational fields of intervening foreground structures. Because it relies only on the physics of gravity, weak lensing has the unique ability to probe the distribution of mass in a direct and unbiased way. This technique is now routinely used to study dark matter, typical applications being the mass reconstruction of galaxy clusters and the study of the properties of the dark halos surrounding galaxies. Another, more recent application of weak lensing, on which we focus in this thesis, is the analysis of the cosmological lensing signal induced by large-scale structures, the so-called "cosmic shear". This signal can be used to measure the growth of structure and the expansion history of the Universe, which makes it particularly relevant to the study of dark energy. Of all weak lensing effects, the cosmic shear is the most subtle, and its detection requires the accurate analysis of the shapes of millions of distant, faint galaxies in the near infrared. So far, the main factor limiting the accuracy of cosmic shear measurements has been the relatively small sky areas covered. The next generation of wide-field, multicolor surveys will, however, overcome this hurdle by covering a much larger portion of the sky with improved image quality.
The resulting statistical errors will then become subdominant to systematic errors, which will instead become the main source of uncertainty. In fact, uncovering key properties of dark energy will only be achievable if these systematics are well understood and reduced to the required level. The major sources of uncertainty reside in the shape measurement algorithm used, the convolution of the original image by the instrumental and possibly atmospheric point spread function (PSF), the pixelation caused by the integration of light falling on the detector pixels, and the degradation caused by various sources of noise. Measuring the cosmic shear thus entails solving the difficult inverse problem of recovering the shear signal from blurred, pixelated, and noisy galaxy images while keeping errors within the limits demanded by future weak lensing surveys. Reaching this goal is not without challenges: the best available shear measurement methods would need a tenfold improvement in accuracy to match the requirements of a space mission like ESA's Euclid, scheduled for the end of this decade. Significant progress has nevertheless been made in the last few years, with substantial contributions from initiatives such as the GREAT (GRavitational lEnsing Accuracy Testing) challenges. The main objective of these open competitions is to foster the development of new and more accurate shear measurement methods.

We start this work with a quick overview of modern cosmology: its fundamental tenets, its achievements, and the challenges it faces today. We then review the theory of weak gravitational lensing and explain how cosmic shear observations can be used to place constraints on cosmology. The last part of this thesis focuses on the practical challenges associated with the accurate measurement of the cosmic shear.
After a review of the subject, we present our main contributions in this area: the development of the gfit shear measurement method and new algorithms for point spread function (PSF) interpolation and image denoising. The gfit method emerged as one of the top performers in the GREAT10 Galaxy Challenge. It essentially consists of fitting two-dimensional elliptical Sérsic light profiles to the observed galaxy images in order to produce estimates of the shear power spectrum. PSF correction is automatic, and an efficient shape-preserving denoising algorithm can optionally be applied prior to fitting the data. PSF interpolation is also an important issue in shear measurement, because the PSF is only known at star positions while PSF correction has to be performed at any position on the sky. We developed innovative PSF interpolation algorithms on the occasion of the GREAT10 Star Challenge, a competition dedicated to the PSF interpolation problem. Our participation was very successful: one of our interpolation methods won the Star Challenge, while the remaining four achieved the next highest scores in the competition. Finally, we participated in the development of a wavelet-based, shape-preserving denoising method particularly well suited to weak lensing analysis.

Aims. We introduce a novel approach to reconstructing dark matter mass maps from weak gravitational lensing measurements. The cornerstone of the proposed method lies in a new modelling of the matter density field in the Universe as a mixture of two components: (1) a sparsity-based component that captures the non-Gaussian structure of the field, such as peaks or halos at different spatial scales, and (2) a Gaussian random field, which is known to represent the linear characteristics of the field well.

Methods. We propose an algorithm called MCALens that jointly estimates these two components. MCALens is based on an alternating minimisation incorporating both sparse recovery and a proximal iterative Wiener filtering.

Results. Experimental results on simulated data show that the proposed method exhibits improved estimation accuracy compared to customised mass-map reconstruction methods.
