Parallel rendering of large polygonal models with transparency is challenging because it requires alpha-correct blending and compositing, which is costly for very large models with high depth complexity and spatial overlap. In this paper we compare the performance of raster-based rendering methods on mesh models of neurons using two applications: one specifically tailored to the neuroscience application domain, the other a general-purpose visualization tool with domain-specific additions. The first implements both sort-first and sort-last rendering, using a scene-graph-style traversal to cull objects and dual depth peeling for order-independent transparency, whilst the other uses a simpler brute-force data-parallel approach with sort-last compositing. The advantages and trade-offs of these approaches are discussed. We present the optimized algorithms needed to achieve interactive frame rates for a non-trivial, real-world parallel rendering scenario. We show that a generic data visualization application can provide competitive performance when its rendering pipeline is optimized, albeit with some loss of capability compared to an optimized domain-specific application.
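
The alpha-correct compositing referred to above ultimately comes down to blending the fragments at each pixel in strict depth order. The following C++ sketch is a minimal, hypothetical illustration of that operation (it is not code from either application described in the paper): it composites a per-pixel list of semi-transparent fragments back to front with the standard "over" operator, which is the result that depth peeling or sort-last compositing must reproduce in parallel. The fragment structure and colour representation are assumptions made only for this example.

    // Minimal CPU sketch of alpha-correct per-pixel compositing (hypothetical,
    // not taken from either application discussed in the paper).
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // A translucent fragment produced by rasterizing one surface at one pixel.
    // Colour is non-premultiplied RGBA in [0,1]; depth increases away from the eye.
    struct Fragment {
        float r, g, b, a;
        float depth;
    };

    struct Colour {
        float r = 0.0f, g = 0.0f, b = 0.0f, a = 0.0f;
    };

    // Composite the fragments of a single pixel with the "over" operator.
    // Correct transparency requires sorting by depth first; this is exactly
    // the ordering that depth peeling (or per-pixel fragment sorting) provides.
    Colour compositePixel(std::vector<Fragment> frags) {
        // Sort back to front (largest depth first) so "over" can be applied in order.
        std::sort(frags.begin(), frags.end(),
                  [](const Fragment& lhs, const Fragment& rhs) { return lhs.depth > rhs.depth; });

        Colour out;  // start from a fully transparent background
        for (const Fragment& f : frags) {
            // Standard back-to-front "over" blend: srcAlpha * src + (1 - srcAlpha) * dst.
            out.r = f.a * f.r + (1.0f - f.a) * out.r;
            out.g = f.a * f.g + (1.0f - f.a) * out.g;
            out.b = f.a * f.b + (1.0f - f.a) * out.b;
            out.a = f.a + (1.0f - f.a) * out.a;
        }
        return out;
    }

    int main() {
        // Two overlapping translucent surfaces at one pixel: a red surface in
        // front of a green one. The result depends on depth-ordered blending.
        std::vector<Fragment> frags = {
            {0.0f, 1.0f, 0.0f, 0.5f, 2.0f},  // green, farther from the eye
            {1.0f, 0.0f, 0.0f, 0.5f, 1.0f},  // red, nearer to the eye
        };
        Colour c = compositePixel(frags);
        std::printf("composited RGBA = %.3f %.3f %.3f %.3f\n", c.r, c.g, c.b, c.a);
        return 0;
    }

Because the "over" operator is order dependent, the expensive part of parallel transparent rendering is obtaining this per-pixel depth ordering, whether by peeling layers in multiple passes or by exchanging and merging depth-sorted images during sort-last compositing.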