<p>This documentation section offers a deeper insight into the principles of our approach, the design concept and the implementation details. It is intended for scientists who are looking for a profound description of VA, for professionals in virtual acoustics who are interested in the details of the concepts, and for software developers who are evaluating the deployment of VA.<br/>
For everyone else, the <a href="overview.html">overview page</a> and the <a href="start.html">getting started section</a> should be the first point of contact.</p>
<h3>The Virtual Acoustics framework</h3>
<p>Component overview of the Virtual Acoustics framework: the base interface library VABase, the network layer VANet, the processing core VACore, the stand-alone VAServer application, and the bindings VAMatlab, VAPython, VACS and VALua.</p>
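<p>These components interact in a client-server fashion: VACore performs the audio processing, VAServer makes it available over the network layer VANet, and the bindings act as clients. The following sketch illustrates this with the Python binding; the module name <code>va</code> and the call signatures are assumptions modeled on the VAPython binding and may differ in your installed version:</p>
<pre><code># Minimal sketch: connect a Python client to a running VAServer instance.
# Assumes the VAPython binding is installed and a server listens on port 12340.
import va

va.connect("localhost", 12340)  # establish the VANet connection to the server
print(va.get_version())         # query the core version to verify the link
va.reset()                      # clear any previously loaded scene
va.disconnect()                 # close the connection again
</code></pre>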
<h3>Virtual Acoustics core concept</h3>
<p>Virtual Acoustics consists of several components, each intended to perform a certain task. It is designed for maximum flexibility and compatibility. It can be divided into</p>
<ul>
<li>Virtual scene manager</li>
...
...
<p>Rendering modules are the heart of VA. They turn a virtual scene with all its inputs and datasets into audible sound. They pick up any modification - like a head rotation - and adapt the rendered audio accordingly. Different types of renderers are available, each dedicated to a particular rendering task.</p>
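<p>As an illustration of how a scene modification reaches a renderer, the following sketch updates a receiver pose; the connected rendering module reacts to the new orientation on the next processing block. The function names and vector conventions are assumptions modeled on the VAPython binding, not verified API:</p>
<pre><code># Minimal sketch: a head rotation expressed as a sound receiver update.
import va

va.connect("localhost", 12340)
receiver = va.create_sound_receiver("Listener")

# Place the listener 1.7 m above the origin, looking down the negative z axis
# (OpenGL convention) with the y axis as up vector.
va.set_sound_receiver_position(receiver, (0.0, 1.7, 0.0))
va.set_sound_receiver_orientation_view_up(receiver, (0.0, 0.0, -1.0), (0.0, 1.0, 0.0))

# Rotate the head 90 degrees to the right by changing the view vector; the
# active renderer adapts the auralization accordingly.
va.set_sound_receiver_orientation_view_up(receiver, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))

va.disconnect()
</code></pre>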
<p>Virtual Acoustics is a powerful tool for auralization and sound reproduction. It is created for professional applications and is mainly used by scientists. It is designed to offer the highest flexibility, which comes at a price: VA is far from easy to configure, especially if you want to use loudspeaker-based sound playback.</p>