Recent advances in computer technology have pushed the computing power of modern computer systems to unprecedented levels: modern processors have moved from conventional, scalar architectures to more sophisticated, parallel ones, combining fast clock speeds with multiple processing units. This jump in computing power has made possible real-time implementations of many 3D audio algorithms that were hard to imagine 10-15 years ago; on the other hand, the constant increase in customer demand for real-time manipulation of multimedia content leads to the development and implementation of more complex, and correspondingly more computationally demanding, algorithms. Delivering such sophisticated solutions to the market in a short time is already a difficult task; achieving efficient solutions within such short terms, extracting the maximum computing power from the currently available technologies, is even more challenging. In my thesis, I propose a solution to this problem by designing and implementing a framework that allows the developer to quickly design, implement and test custom 3D audio algorithms in an efficient way, with minimal effort required to port already available code to another platform.

The proposed approach started from a preliminary review and analysis of the available development tools, in order to identify and evaluate the state of the art from a 3D audio point of view. However, this analysis revealed some important drawbacks of the existing solutions, both in terms of fast reconfigurability for the implementation of custom 3D audio algorithms and in their capability for precise synchronization and management of multiple-stream media applications. First, it is in some cases necessary to develop completely new code to achieve the desired results, due to a lack of flexibility in the algorithm configurations or to unavailable functionality. Secondly, integrating different solutions and APIs running at different hardware and software levels causes synchronization problems between them, since high-level APIs often introduce high latencies. The result is, in some cases, system instability with consequent degraded quality, or even loss, of audio content in live and networked scenarios.

In the end, it was observed that APIs for 3D sound that are high-level and developer-friendly normally bring with them high latencies, sometimes rigid 3D model solutions, or high portability costs. If, on the other hand, they are lower-level solutions conceived for platform-independent, low-latency development and true real-time performance, the development of existing and, in particular, new 3D models may take a long time if good and robust source code is not available. To improve this situation, a new approach has been conceived: the development of an independent middle-to-low-level DSP library, which provides a mid-layer development tool to easily design and configure new 3D algorithms. This simplifies the design, implementation and testing of custom 3D audio algorithms.
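To make the idea of such a mid-layer development tool more concrete, the following minimal C++ sketch shows one possible shape it could take: configurable processing blocks that can be written once and chained together inside a low-level audio callback, independently of the host audio API. The names used here (AudioBlock, BlockChain, SimplePanner) are hypothetical illustrations under these assumptions and are not the actual library's interface.

```cpp
// Minimal sketch (hypothetical names): a mid-layer interface for chaining
// custom 3D audio processing blocks with no extra buffering or latency.
#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

// A block processes interleaved stereo samples in place.
class AudioBlock {
public:
    virtual ~AudioBlock() = default;
    virtual void process(float* interleavedStereo, std::size_t frames) = 0;
};

// A chain runs its blocks back to back, so it can live directly inside
// the platform's low-level audio callback.
class BlockChain : public AudioBlock {
public:
    void add(std::unique_ptr<AudioBlock> block) { blocks_.push_back(std::move(block)); }
    void process(float* buf, std::size_t frames) override {
        for (auto& b : blocks_) b->process(buf, frames);
    }
private:
    std::vector<std::unique_ptr<AudioBlock>> blocks_;
};

// Toy stand-in for a "custom 3D algorithm": a constant-power azimuth panner.
// A real spatializer (HRTF convolution, room model, ...) would replace this.
class SimplePanner : public AudioBlock {
public:
    explicit SimplePanner(float azimuthRadians) { setAzimuth(azimuthRadians); }
    void setAzimuth(float az) {
        // Map azimuth in [-pi/2, +pi/2] to left/right gains with constant power.
        gainL_ = std::cos(0.5f * (az + 1.5707963f));
        gainR_ = std::sin(0.5f * (az + 1.5707963f));
    }
    void process(float* buf, std::size_t frames) override {
        for (std::size_t i = 0; i < frames; ++i) {
            float mono = 0.5f * (buf[2 * i] + buf[2 * i + 1]);
            buf[2 * i]     = gainL_ * mono;
            buf[2 * i + 1] = gainR_ * mono;
        }
    }
private:
    float gainL_ = 1.0f, gainR_ = 1.0f;
};

int main() {
    BlockChain chain;
    chain.add(std::make_unique<SimplePanner>(0.7854f)); // source at +45 degrees

    // In a real application this buffer would come from the platform's
    // low-level audio callback; here it is just an impulse followed by silence.
    std::vector<float> buffer(2 * 256, 0.0f);
    buffer[0] = buffer[1] = 1.0f;
    chain.process(buffer.data(), 256);
    return 0;
}
```

In this kind of design, new 3D algorithms are added by implementing one small block interface, while the surrounding chain and the platform-specific audio I/O remain unchanged, which is the sense in which a mid-layer library can reduce both development and porting effort.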