Photometric stereo, a computer vision technique for estimating the 3D shape of objects from images captured under varying illumination conditions, has been a topic of research for nearly four decades. In its general formulation, photometric stereo is an ...
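For context, the classical calibrated Lambertian formulation of photometric stereo reduces to a per-pixel least-squares problem; the sketch below illustrates that textbook setup (known distant point lights, Lambertian reflectance), not the general formulation the abstract is cut off before describing. Function and variable names are illustrative.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Recover per-pixel albedo and surface normals from J >= 3 grayscale
    images taken under known, distant point lights (classical setup).

    intensities : (J, H, W) array of images
    light_dirs  : (J, 3) array of unit lighting directions
    """
    J, H, W = intensities.shape
    I = intensities.reshape(J, -1)  # (J, H*W), one column per pixel
    # Lambertian model: I = L @ G, where each column of G is albedo * normal.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)  # unit normals, guarded division
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```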
Deep learning has revolutionized the field of computer vision, a success largely attributable to the growing scale of models, datasets, and computational resources.
Simultaneously, a critical pain point arises as several computer vision applications are deploye ...
Recent advancements in deep learning have revolutionized 3D computer vision, enabling the extraction of intricate 3D information from 2D images and video sequences. This thesis explores the application of deep learning in three crucial challenges of 3D com ...
Modern neuroscience research is generating increasingly large datasets, from recording thousands of neurons over long timescales to behavioral recordings of animals spanning weeks, months, or even years. Despite a great variety in recording setups and expe ...
We consider the problem of compressing an information source when a correlated one is available as side information only at the decoder side, which is a special case of the distributed source coding problem in information theory. In particular, we consider ...
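As a standard point of reference (the abstract is cut off before specifying the exact regime studied): in the lossless case this setup is Slepian–Wolf coding, with achievable rate $R \ge H(X \mid Y)$, and in the lossy case it is Wyner–Ziv coding, whose rate-distortion function can be stated as follows.

```latex
% Wyner–Ziv rate–distortion function: source X, decoder-only side
% information Y, auxiliary variable U with U -- X -- Y a Markov chain,
% and a decoder reconstruction function g.
R_{\mathrm{WZ}}(D)
  \;=\;
  \min_{\substack{p(u \mid x),\; g:\,\mathcal{U}\times\mathcal{Y}\to\hat{\mathcal{X}} \\
                  \mathbb{E}\left[d\bigl(X,\, g(U, Y)\bigr)\right] \,\le\, D}}
  \bigl(\, I(X; U) - I(Y; U) \,\bigr)
```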
To obtain a more complete understanding of material microstructures at the nanoscale and to gain deeper insight into their properties, there is a growing need for more efficient and precise methods that can streamline the process of 3D imaging using a tr ...
The Joint Photographic Experts Group (JPEG) AI learning-based image coding system is an ongoing joint standardization effort between the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Te ...
Reading out neuronal activity from three-dimensional (3D) functional imaging requires segmenting and tracking individual neurons. This is challenging in behaving animals if the brain moves and deforms. The traditional approach is to train a convolutional n ...
In this paper we propose a Monte Carlo maximum likelihood estimation strategy for discretely observed Wright–Fisher diffusions. Our approach provides an unbiased estimator of the likelihood function and is based on exact simulation techniques that are of s ...
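The paper's unbiased, exact-simulation estimator is cut off above; purely as a point of reference, the sketch below shows a much simpler (and, unlike the paper's method, biased) Pedersen-style simulated likelihood for the Wright–Fisher diffusion with mutation, $dX_t = \tfrac{1}{2}(\theta_1(1-X_t) - \theta_2 X_t)\,dt + \sqrt{X_t(1-X_t)}\,dW_t$, using Euler–Maruyama paths. All names are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def wf_drift(x, theta1, theta2):
    # Wright–Fisher drift with mutation rates theta1 and theta2.
    return 0.5 * (theta1 * (1.0 - x) - theta2 * x)

def simulate_wf(x0, t, theta1, theta2, n_steps=200):
    """Euler–Maruyama path of the Wright–Fisher diffusion on [0, 1]."""
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x += wf_drift(x, theta1, theta2) * dt \
             + np.sqrt(max(x * (1.0 - x), 0.0) * dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)  # clip numerical overshoots back into [0, 1]
    return x

def mc_transition_density(x0, x1, t, theta1, theta2, n_paths=2000, bw=0.02):
    """Crude Monte Carlo estimate of the transition density p_t(x1 | x0)
    by kernel smoothing of simulated endpoints (Pedersen-style)."""
    ends = np.array([simulate_wf(x0, t, theta1, theta2) for _ in range(n_paths)])
    kernel = np.exp(-0.5 * ((ends - x1) / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    return kernel.mean()

def mc_log_likelihood(obs, times, theta1, theta2):
    """Log-likelihood of discretely observed data: sum of log transition-
    density estimates over consecutive observation pairs."""
    ll = 0.0
    for (x0, x1), dt in zip(zip(obs[:-1], obs[1:]), np.diff(times)):
        ll += np.log(mc_transition_density(x0, x1, dt, theta1, theta2) + 1e-300)
    return ll
```

This stand-in inherits discretization and smoothing bias from the Euler scheme and the kernel bandwidth; removing exactly that bias is what the exact-simulation approach described in the abstract is for.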
In the current deep learning paradigm, the amount and quality of training data are as critical as the network architecture and its training details. However, collecting, processing, and annotating real data at scale is difficult, expensive, and time-consum ...