Advances in imaging technology have made commercially available cameras cheaper, more accessible, and capable of higher resolution and more sophisticated capture features than ever before. Today, it is estimated that more than one billion cameras are sold every year. However, progress in imaging technology is slowly approaching the limits imposed by the nature of the technology: diffraction limits the number of effective pixels on a sensor, pixels cannot be made arbitrarily small, and lens systems grow more complex as sensor technology advances. To extend the capabilities of traditional imaging systems, researchers are increasing the available computing power by combining imagers with FPGAs and GPUs. The enormous computational power provided by modern computing systems expands the imaging capabilities of current cameras. One approach is to combine multiple cameras with FPGAs, creating new possibilities for image capture systems.

This thesis focuses on FPGA-based camera systems and their applications. The multiple-camera systems introduced in this work aim to create real-time video with a wider field of view by distributing tasks among the camera nodes. This is achieved by carefully placing multiple cameras on a hemispherical dome and adding communication capabilities between them. The cameras create a 360-degree view by exploiting the smart features implemented on the FPGAs. The designed distributed algorithm shares the reconstruction load evenly among the nodes, thereby reducing the problems caused by adding cameras to a multiple-camera system. The presented Panoptic system achieves higher-performance omnidirectional video than its previous implementations. This thesis also introduces a head-mounted-display-based viewing system to render omnidirectional videos for the human visual system. Another important aspect of this thesis is real-time resolution enhancement.
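The even load sharing mentioned above can be illustrated abstractly: each camera node is assigned a near-equal slice of the omnidirectional output, so adding nodes does not concentrate work on any single one. This is a minimal toy sketch with assumed names, not the thesis's actual partitioning scheme:

```python
def partition_panorama(num_columns, num_nodes):
    """Split the omnidirectional image's columns into contiguous,
    near-equal ranges, one per camera node (hypothetical helper)."""
    base, extra = divmod(num_columns, num_nodes)
    ranges, start = [], 0
    for node in range(num_nodes):
        # Spread the remainder columns over the first `extra` nodes
        # so no node carries more than one extra column of work.
        width = base + (1 if node < extra else 0)
        ranges.append((start, start + width))
        start += width
    return ranges
```

For example, ten output columns shared by three nodes yield slices of widths 4, 3, and 3, so the per-node load stays balanced as cameras are added.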
The resolution of current imaging systems can be increased after fabrication by enhancement methods applied during post-processing. To this end, a real-time image registration algorithm and a real-time super-resolution algorithm are designed for implementation on FPGA-based camera systems. The real-time image registration algorithm calculates the optical flow between images to recover the motion among the observations. It allows images to be registered on a finer grid, which in turn allows the super-resolution algorithm to enhance the image. This thesis introduces hardware implementations of super-resolution algorithms previously proposed in the literature. Many super-resolution methods are complex and computationally expensive, so real-time implementation is a challenging problem. We discuss super-resolution algorithms and provide hardware implementations for two well-known super-resolution algorithms. Last but not least, combining the industry-standard MIPI communication scheme with FPGAs is discussed.
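As a hedged illustration of the register-then-enhance pipeline described above (a minimal software sketch under assumed names, not the thesis's FPGA implementation), classic shift-and-add super-resolution takes sub-pixel shifts from a registration step and accumulates each low-resolution frame onto a finer grid:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale):
    """Naive shift-and-add super-resolution (illustrative only).

    frames: list of (h, w) low-resolution images
    shifts: list of (dy, dx) sub-pixel shifts, in low-res pixels,
            assumed known from a registration step (e.g. optical flow)
    scale:  integer upsampling factor
    """
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its nearest cell of the finer grid.
        ry = np.round((np.arange(h) + dy) * scale).astype(int) % (h * scale)
        rx = np.round((np.arange(w) + dx) * scale).astype(int) % (w * scale)
        hi[np.ix_(ry, rx)] += frame
        weight[np.ix_(ry, rx)] += 1.0
    # Average where samples landed; empty cells stay zero (a real
    # implementation would interpolate or regularize them).
    mask = weight > 0
    hi[mask] /= weight[mask]
    return hi
```

Frames shifted by half a low-resolution pixel fill different cells of the fine grid, which is why sub-pixel registration accuracy directly determines how much resolution the enhancement step can recover.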