Advances in camera sensor technology and manufacturing processes now allow high-quality image acquisition with low-cost devices. Moreover, the recent significant increase in the computational capacity of processing units enables the incorporation of more complex machine learning and deep learning methods within vision systems, expanding the capabilities of a typical camera system. A potential limitation of such complex and highly accurate machine learning and data processing methods is their high cost in terms of power and area. This limitation becomes more critical when multiple and/or wireless camera systems come into question, since such systems need to operate with limited power, memory and processing resources. Even though custom hardware solutions could overcome this limitation, they lack flexibility and are hence less practical. An embedded vision system with extended capabilities needs to be designed with a good trade-off between quality, speed, power consumption and flexibility.
A good trade-off for an enhanced wireless multi-camera vision system may be achieved by optimizing the system design at different levels. A common system-level approach to high-complexity systems is to partition the computational load and distribute it across local nodes. In a vision system, this corresponds to embedding the computationally heavy operations into the camera units, which reduces the required bandwidth and the overall power consumption. A camera equipped with a processing unit and memory that locally processes image data is called a smart camera, and can help overcome power, memory and processing resource limitations.
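To make the bandwidth argument concrete, the following Python sketch contrasts what a conventional camera and a smart-camera node would transmit. It is a toy illustration only: the CameraNode class, the detect_objects helper and the frame-difference thresholding are assumptions made for this example, not the detection method implemented in the thesis.

import numpy as np

# Toy smart-camera node: process frames locally, transmit only metadata.
# All names and the thresholding scheme are illustrative assumptions.

def detect_objects(frame, background, threshold=30):
    # Difference against a static background and threshold to get a mask.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return []
    # One bounding box (x_min, y_min, x_max, y_max) as the metadata.
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]

class CameraNode:
    def __init__(self, background):
        self.background = background

    def process(self, frame):
        boxes = detect_objects(frame, self.background)
        raw_bytes = frame.nbytes        # what a conventional camera would stream
        meta_bytes = 16 * len(boxes)    # what the smart node sends instead
        return boxes, raw_bytes, meta_bytes

# A 640x480 8-bit frame (~307 kB) shrinks to a few bytes of metadata.
background = np.zeros((480, 640), dtype=np.uint8)
frame = background.copy()
frame[100:150, 200:260] = 200           # synthetic moving object
boxes, raw_bytes, meta_bytes = CameraNode(background).process(frame)
print(boxes, raw_bytes, meta_bytes)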
This thesis aims at designing a novel smart camera concept and presents hardware solutions for the proposed system design. Accordingly, this thesis proposes a flexible smart camera architecture that processes the pixel stream on-the-fly and produces metadata with low latency, while providing high power and area efficiency. In particular, three processing blocks, namely moving object detection, keypoint detection and description, and cellular neural networks, were implemented to illustrate the system design. In addition, the proposed blocks are used in several applications, such as omnidirectional image reconstruction, high-resolution surveillance, polarimetry and wireless smart camera networks, to show the flexibility of the proposed system in a wide range of applications.
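As a rough illustration of one of the named processing blocks, the sketch below runs discrete-time iterations of a standard (Chua-Yang style) cellular neural network in Python. The templates A and B, the bias z and the step size are placeholder values chosen for the example; they are not the templates or the hardware implementation developed in the thesis.

import numpy as np
from scipy.signal import convolve2d

def cnn_output(x):
    # Standard piecewise-linear CNN output nonlinearity.
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_step(x, u, A, B, z, dt=0.1):
    # One explicit Euler step of the CNN state equation:
    # dx/dt = -x + A * y + B * u + z   (* denotes 2-D template convolution)
    y = cnn_output(x)
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
    return x + dt * dx

# Placeholder feedback (A) and control (B) templates and bias z.
A = np.array([[0., 0., 0.], [0., 2., 0.], [0., 0., 0.]])
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
u = np.zeros((64, 64))
u[20:40, 20:40] = 1.0                   # synthetic binary input image
x = np.zeros_like(u)
for _ in range(50):                     # iterate until the state roughly settles
    x = cnn_step(x, u, A, B, z=-0.5)
result = cnn_output(x)                  # edge-like response around the square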