We present a low-cost system for the online, realistic representation of users with an array of depth cameras. The system comprises a cluster of 10 Microsoft Kinect 2 cameras, each paired with a compact NUC PC that streams live depth and color images to a master PC, which reconstructs the point cloud of the scene in real time and can, in particular, show the bodies of users standing in the capture area. A custom geometric calibration procedure enables accurate registration of the different 3D data streams. Despite the inherent limitations of depth cameras, in particular sensor noise, the system provides a convincing representation of the user's body, is unaffected by changes in clothing (even during immersion), and can capture complex poses, including interactions between two people or with physical objects. The advantage of depth cameras over conventional cameras is that little processing is required to dynamically reconstruct unknown shapes, enabling truly interactive applications. The resulting live 3D model can be inserted into any virtual environment (e.g. via a Unity 3D integration plugin) and subjected to all the usual 3D manipulations and transformations.
Diego Felipe Paez Granados, Chen Yang
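The core of such a pipeline is simple: each camera's depth image is back-projected into a camera-space point cloud, and the per-camera rigid transforms recovered by the geometric calibration map all clouds into one common frame. The sketch below illustrates this idea in Python/NumPy under stated assumptions (a pinhole intrinsic model with hypothetical parameters `fx, fy, cx, cy`, and 4x4 extrinsic matrices `extrinsics[i]`); it is not the project's actual implementation, which runs live on streamed Kinect 2 data.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points
    using a pinhole model; fx, fy, cx, cy are the camera intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def merge_clouds(clouds, extrinsics):
    """Transform each camera's cloud into the shared world frame and
    concatenate. extrinsics[i] is the 4x4 rigid transform from camera i
    to the world frame, as recovered by the calibration procedure."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        merged.append((homo @ T.T)[:, :3])
    return np.vstack(merged)
```

Because the heavy lifting is a fixed per-pixel back-projection plus one rigid transform per camera, no shape prior or model fitting is needed, which is why unknown, changing geometry (clothing, props, a second person) can be reconstructed at interactive rates.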