
# Development and evaluation of galaxy shape measurement algorithms for radio interferometric data

Abstract

The purpose of this thesis is to develop, test, and characterize models that tackle the problem of measuring galaxy shapes in interferometric observations. Shape measurement is a tool for estimating the underlying shear caused by weak gravitational lensing from large-scale foreground matter distributions. The enormous recent progress in radio interferometric imaging, with projects such as MeerKAT, ASKAP, and, in the future, the SKA, motivates the development of advanced algorithms that exploit radio observations for this task. The main disadvantage of most existing algorithms is that their model parameters must be estimated in advance, assuming prior knowledge of the form of the objects. Our motivation is to develop and evaluate frameworks that make accurate measurements without requiring significant prior information on this form.

To achieve this goal, we pursue two implementation directions. The first builds on an existing approach that decomposes each object over a dictionary of shapelet functions. Our proposal goes further by estimating the decomposition's key size parameter with a model built on advanced regularization: we create an over-redundant dictionary formed as the concatenation of several groups of orthogonal shapelet basis functions with different parameters, and we use structured sparsity penalties to select the size parameter of the best group. As an alternative strategy, we develop an algorithm based on multi-resolution least-squares analysis, which identifies the size value that minimizes the relative residual of the fit.

The second direction, instead of estimating shapes directly from radio interferometric data, reconstructs images from the visibilities and measures the ellipticity of the objects in the resulting images. For this purpose, we adopt a previously developed sparsity averaging analysis algorithm that restores the intensity image from the visibilities with high precision. We also implement a similar framework that uses the CLEAN algorithm for image reconstruction, which allows us to compare the two options and evaluate the quality of the measurements achieved.

All our models are tested on a collection of simulated data that includes objects of different profile types, ellipticities, sizes, orientations, and positions in the field of view. The visibilities are generated using either a simulated Gaussian-profile coverage or a realistic SKA-like one. Additionally, we present an initial study of the same algorithms on real objects from the COSMOS survey.
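The multi-resolution least-squares idea can be sketched in a few lines: for each candidate size parameter β, fit the object over an orthonormal shapelet (Gauss–Hermite) basis and keep the β with the smallest relative residual. This is an illustrative 1D sketch, not the thesis implementation; the basis order and the grid of candidate scales are arbitrary choices.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def shapelet_basis_1d(x, beta, n_max):
    """Columns are the orthonormal 1D shapelets B_n(x; beta), n = 0..n_max."""
    cols = []
    for n in range(n_max + 1):
        coef = np.zeros(n + 1)
        coef[n] = 1.0  # select the physicists' Hermite polynomial H_n
        norm = (beta * math.sqrt(math.pi) * 2.0**n * math.factorial(n)) ** -0.5
        cols.append(norm * hermval(x / beta, coef) * np.exp(-0.5 * (x / beta) ** 2))
    return np.stack(cols, axis=1)

def best_scale(signal, x, betas, n_max=4):
    """Return (beta, relative residual) minimizing the least-squares misfit."""
    best = (None, np.inf)
    for beta in betas:
        basis = shapelet_basis_1d(x, beta, n_max)
        coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
        rel = np.linalg.norm(signal - basis @ coeffs) / np.linalg.norm(signal)
        if rel < best[1]:
            best = (beta, rel)
    return best
```

A Gaussian of unit width is exactly the n = 0 shapelet at β = 1, so the search over a scale grid containing 1.0 should return it with a near-zero residual.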



Related concepts (11)

Measurement

Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events.

Galaxy

A galaxy is a system of stars, stellar remnants, interstellar gas, dust, and dark matter bound together by gravity.

Weak gravitational lensing

While the presence of any mass bends the path of light passing near it, this effect rarely produces the giant arcs and multiple images associated with strong gravitational lensing.

Related publications (16)


There is now very strong evidence that our Universe is undergoing a period of accelerated expansion, as if it were under the influence of a gravitationally repulsive “dark energy” component. Furthermore, most of the mass of the Universe seems to be in the form of non-luminous matter, the so-called “dark matter”. Together, these “dark” components, whose nature remains unknown today, represent around 96% of the matter-energy budget of the Universe. Unraveling the true nature of dark energy and dark matter has thus become one of the primary goals of present-day cosmology. Weak gravitational lensing, or weak lensing for short, is the effect whereby light emitted by distant galaxies is slightly deflected by the tidal gravitational fields of intervening foreground structures. Because it relies only on the physics of gravity, weak lensing has the unique ability to probe the distribution of mass in a direct and unbiased way. This technique is at present routinely used to study dark matter, typical applications being the mass reconstruction of galaxy clusters and the study of the properties of dark halos surrounding galaxies. Another, more recent application of weak lensing, on which we focus in this thesis, is the analysis of the cosmological lensing signal induced by large-scale structures, the so-called “cosmic shear”. This signal can be used to measure the growth of structures and the expansion history of the Universe, which makes it particularly relevant to the study of dark energy. Of all weak lensing effects, the cosmic shear is the most subtle, and its detection requires the accurate analysis of the shapes of millions of distant, faint galaxies in the near infrared. So far, the main factor limiting cosmic shear measurement accuracy has been the relatively small sky areas covered. The next generation of wide-field, multicolor surveys will, however, overcome this hurdle by covering a much larger portion of the sky with improved image quality.
The resulting statistical errors will then become subdominant compared to systematic errors, which will instead become the main source of uncertainty. In fact, uncovering key properties of dark energy will only be achievable if these systematics are well understood and reduced to the required level. The major sources of uncertainty reside in the shape measurement algorithm used, the convolution of the original image by the instrumental and possibly atmospheric point spread function (PSF), the pixelation effect caused by the integration of light falling on the detector pixels, and the degradation caused by various sources of noise. Measuring the cosmic shear thus entails solving the difficult inverse problem of recovering the shear signal from blurred, pixelated, and noisy galaxy images while keeping errors within the limits demanded by future weak lensing surveys. Reaching this goal is not without challenges. In fact, the best available shear measurement methods would need a tenfold improvement in accuracy to match the requirements of a space mission like ESA's Euclid, scheduled for the end of this decade. Significant progress has nevertheless been made in the last few years, with substantial contributions from initiatives such as the GREAT (GRavitational lEnsing Accuracy Testing) challenges. The main objective of these open competitions is to foster the development of new and more accurate shear measurement methods. We start this work with a quick overview of modern cosmology: its fundamental tenets, achievements, and the challenges it faces today. We then review the theory of weak gravitational lensing and explain how it can make use of cosmic shear observations to place constraints on cosmology. The last part of this thesis focuses on the practical challenges associated with the accurate measurement of the cosmic shear.
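The chain of degradations listed above (PSF convolution, pixelation, noise) is exactly what a shear pipeline must invert. A toy forward model is easy to write down; the first-order shear of the coordinates and the Gaussian profiles below are illustrative simplifications, not the simulations discussed in this thesis.

```python
import numpy as np

def sheared_gaussian(n, sigma, e1=0.0, e2=0.0):
    """Circular Gaussian distorted by a small ellipticity (e1, e2),
    applied as a first-order shear of the coordinates."""
    y, x = np.indices((n, n)) - (n - 1) / 2.0
    xs = (1.0 - e1) * x - e2 * y
    ys = -e2 * x + (1.0 + e1) * y
    return np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))

def observe(galaxy, psf, noise_sigma, rng):
    """Toy forward model: circular FFT convolution with the PSF, then pixel
    noise. The PSF must be centred on the grid and n odd so that ifftshift
    moves the centre pixel to the origin."""
    blurred = np.real(np.fft.ifft2(
        np.fft.fft2(galaxy) * np.fft.fft2(np.fft.ifftshift(psf))))
    return blurred + rng.normal(0.0, noise_sigma, galaxy.shape)
```

With a unit-sum PSF and no noise, the circular convolution preserves the total flux and leaves the peak of a centred source at the grid centre.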
After a review of the subject, we present the main contributions we have made in this area: the development of the gfit shear measurement method, and new algorithms for point spread function (PSF) interpolation and image denoising. The gfit method emerged as one of the top performers in the GREAT10 Galaxy Challenge. It essentially consists of fitting two-dimensional elliptical Sérsic light profiles to observed galaxy images in order to produce estimates of the shear power spectrum. PSF correction is automatic, and an efficient shape-preserving denoising algorithm can optionally be applied prior to fitting the data. PSF interpolation is also an important issue in shear measurement because the PSF is known only at star positions, while PSF correction has to be performed at any position on the sky. We developed innovative PSF interpolation algorithms on the occasion of the GREAT10 Star Challenge, a competition dedicated to the PSF interpolation problem. Our participation was very successful: one of our interpolation methods won the Star Challenge, while the remaining four achieved the next highest scores of the competition. Finally, we participated in the development of a wavelet-based, shape-preserving denoising method particularly well suited to weak lensing analysis.
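The model-fitting approach behind gfit can be illustrated with a stripped-down example: fit an elliptical profile to a galaxy image by nonlinear least squares and read the ellipticity off the best-fit parameters. The parametrization below (fixed exponential profile, first-order shear of the coordinates) is a hypothetical simplification, not the actual gfit Sérsic model.

```python
import numpy as np
from scipy.optimize import least_squares

def sersic_image(params, x, y, n_sersic=1.0):
    """Elliptical Sérsic profile; params = (amp, r_e, e1, e2), e1/e2 small."""
    amp, r_e, e1, e2 = params
    xs = (1.0 - e1) * x - e2 * y
    ys = -e2 * x + (1.0 + e1) * y
    r = np.hypot(xs, ys)
    return amp * np.exp(-((r / r_e) ** (1.0 / n_sersic)))

def fit_ellipticity(image, x, y, p0=(1.0, 3.0, 0.0, 0.0)):
    """Nonlinear least-squares fit; returns best-fit (amp, r_e, e1, e2)."""
    residual = lambda p: (sersic_image(p, x, y) - image).ravel()
    return least_squares(residual, p0).x
```

On a noiseless image drawn from the same model, the fit recovers the input ellipticity; the real difficulty, as the text stresses, is doing so after PSF blurring, pixelation, and noise.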

Jean-Paul Richard Kneib, Johan Richard

The existence of strong lensing systems with Einstein radii covering the full mass spectrum, from ∼1–2″ (produced by galaxy-scale dark matter haloes) to >10″ (produced by galaxy-cluster-scale haloes), has long been predicted. Many lenses with Einstein radii around 1–2″ and above 10″ have been reported, but very few in between. In this article, we present a sample of 13 strong lensing systems with Einstein radii in the range 3″–8″ (or image separations in the range 6″–16″), i.e. systems produced by galaxy-group-scale dark matter haloes. This group sample spans a redshift range from 0.3 to 0.8. It opens a new window of exploration in the mass spectrum, around 10¹³–10¹⁴ M⊙, a crucial range for understanding the transition between galaxies and galaxy clusters, and one that has not been extensively probed with lensing techniques. These systems constitute a subsample of the Strong Lensing Legacy Survey (SL2S), which aims to discover strong lensing systems in the Canada–France–Hawaii Telescope Legacy Survey (CFHTLS). The sample is based on a search over 100 square degrees, implying a number density of ∼0.13 groups per square degree. Our analysis is based on multi-colour CFHTLS images complemented with Hubble Space Telescope imaging and ground-based spectroscopy. Large-scale properties are derived both from the light distribution of the elliptical group-member galaxies and from weak lensing of the faint background galaxy population. On small scales, the strong lensing analysis yields Einstein radii between 2.5″ and 8″. On larger scales, strong lens centres coincide with peaks of the light distribution, suggesting that light traces mass. Most of the luminosity maps have complicated shapes, implying that these intermediate-mass structures may be dynamically young. A weak lensing signal is detected for 6 groups, and upper limits are provided for 6 others.
Fitting the reduced shear with a Singular Isothermal Sphere, we find σ_SIS ∼ 500 km s⁻¹ with large error bars and an upper limit of ∼900 km s⁻¹ for the whole sample (except for the highest-redshift structure, whose velocity dispersion is consistent with that of a galaxy cluster). The mass-to-light ratio for the sample is found to be M/L_i ∼ 250 (solar units, corrected for evolution), with an upper limit of 500. This compares with the mass-to-light ratios of small groups (with σ_SIS ∼ 300 km s⁻¹) and galaxy clusters (with σ_SIS > 1000 km s⁻¹), thus bridging the gap between these mass scales. The group sample released in this paper will be complemented with other observations, providing a unique sample for studying this important intermediate mass range in further detail.
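These numbers can be cross-checked with the standard Singular Isothermal Sphere lens relation θ_E = 4π(σ/c)² D_ls/D_s. The snippet below is a back-of-the-envelope sketch; the lens/source distance ratio of 0.5 is an assumed illustrative value, not one taken from the paper.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def sis_einstein_radius_arcsec(sigma_km_s, dls_over_ds):
    """Einstein radius of a Singular Isothermal Sphere:
    theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s, converted to arcseconds."""
    theta_rad = 4.0 * np.pi * (sigma_km_s / C_KM_S) ** 2 * dls_over_ds
    return np.degrees(theta_rad) * 3600.0

def sis_sigma_km_s(theta_e_arcsec, dls_over_ds):
    """Invert the relation: velocity dispersion from an Einstein radius."""
    theta_rad = np.radians(theta_e_arcsec / 3600.0)
    return C_KM_S * np.sqrt(theta_rad / (4.0 * np.pi * dls_over_ds))
```

For σ ∼ 500 km s⁻¹ and a distance ratio of 0.5, the relation gives θ_E ≈ 3.6″, consistent with the 3″–8″ group-scale range the paper targets.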

2009

Cosmic shear, that is, weak gravitational lensing by the large-scale matter structure of the Universe, is a primary cosmological probe for several present and upcoming surveys investigating dark matter and dark energy, such as Euclid or WFIRST. The probe requires an extremely accurate measurement of the shapes of millions of galaxies based on imaging data. Crucially, the shear measurement must address and compensate for a range of interwoven nuisance effects related to the instrument optics and detector, noise in the images, unknown galaxy morphologies, colors, blending of sources, and selection effects. This paper explores the use of supervised machine learning as a tool to solve this inverse problem. We present a simple architecture that learns to regress shear point estimates and weights via shallow artificial neural networks. The networks are trained on simulations of the forward observing process, and take combinations of moments of the galaxy images as inputs. A challenging peculiarity of the shear measurement task, in terms of machine learning applications, is the combination of the noisiness of the input features and the requirements on the statistical accuracy of the inverse regression. To address this issue, the proposed training algorithm minimizes bias over multiple realizations of individual source galaxies, reducing the sensitivity to properties of the overall sample of source galaxies. Importantly, an observational selection function of these source galaxies can be straightforwardly taken into account via the weights. We first introduce key aspects of our approach using toy-model simulations, and then demonstrate its potential on images mimicking Euclid data. Finally, we analyze images from the GREAT3 challenge, obtaining competitively low multiplicative and additive shear biases despite the use of a simple training set.
We conclude that the further development of suited machine learning approaches is of high interest to meet the stringent requirements on the shear measurement in current and future surveys. We make a demonstration implementation of our technique publicly available.
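The moment-based inputs mentioned above are straightforward to compute. A minimal unweighted-quadrupole version is sketched below; real pipelines use weighted moments to control noise, and the function name is illustrative.

```python
import numpy as np

def quadrupole_ellipticity(image):
    """Unweighted quadrupole moments -> ellipticity components (e1, e2).
    Moments of this kind are the features fed to the regression network."""
    y, x = np.indices(image.shape).astype(float)
    flux = image.sum()
    xc, yc = (image * x).sum() / flux, (image * y).sum() / flux
    qxx = (image * (x - xc) ** 2).sum() / flux
    qyy = (image * (y - yc) ** 2).sum() / flux
    qxy = (image * (x - xc) * (y - yc)).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom
```

A Gaussian elongated along the x axis yields e1 > 0 and e2 ≈ 0, as expected from the standard ellipticity convention.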