This paper introduces a pipeline to parametrically sample and render static multi-task vision datasets from comprehensive real-world 3D scans. In addition to enabling interesting lines of research, we show that the tooling and generated data suffice to train robust vision models. Familiar architectures trained on a generated starter dataset reached state-of-the-art performance on multiple common vision tasks and benchmarks, despite having seen no benchmark or non-pipeline data. The depth estimation network outperforms MiDaS, and the surface normal estimation network is the first to achieve human-level performance for in-the-wild surface normal estimation, at least according to one metric on the OASIS benchmark.
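As a concrete illustration of what "parametrically sample and render" could look like, the sketch below samples camera poses over a textured 3D scan and renders aligned RGB and depth labels. This is not the paper's implementation: the use of trimesh and pyrender, the scan filename, the pose distribution, and the view count are all hypothetical placeholders standing in for the pipeline's actual sampling parameters.

```python
import numpy as np
import trimesh
import pyrender

# Load a textured real-world scan; "scene_scan.glb" is a placeholder path.
loaded = trimesh.load("scene_scan.glb")
tm_scene = loaded if isinstance(loaded, trimesh.Scene) else trimesh.Scene(loaded)

scene = pyrender.Scene.from_trimesh_scene(tm_scene)
scene.ambient_light = np.ones(3)  # flat lighting so geometry is visible

camera = pyrender.PerspectiveCamera(yfov=np.deg2rad(60.0))
cam_node = scene.add(camera, pose=np.eye(4))
renderer = pyrender.OffscreenRenderer(viewport_width=640, viewport_height=480)

rng = np.random.default_rng(0)
for i in range(8):  # the number of views per scene is a free sampling parameter
    # Parametric camera pose: random yaw at a fixed eye height.
    yaw = rng.uniform(0.0, 2.0 * np.pi)
    pose = trimesh.transformations.euler_matrix(0.0, yaw, 0.0)
    pose[:3, 3] = [0.0, 1.5, 0.0]  # x, y (height), z in scene coordinates
    scene.set_pose(cam_node, pose)

    # One rendering pass yields an aligned (RGB, metric depth) training pair;
    # surface normals and other mid-level labels would need extra shaders or
    # depth-based derivation, omitted here.
    color, depth = renderer.render(scene)
    np.save(f"view_{i:03d}_depth.npy", depth)

renderer.delete()
```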