Some modifications for a more fluent reading experience in overview page

<h4>Low latency, efficient real-time processing and flexible resource management</h4>
<p>Input-output latency is crucial for any interactive application. VA tries to achieve minimal latency wherever possible, because the latencies of subsequent components add up. As long as latency is kept low, a human listener will not notice small delays during scene updates, resulting in a convincing live system in which interaction directly leads to the expected effect (without waiting for the system to process).<br />
VA supports real-time capability by establishing flexible data management and processing modules that are lightweight and handle updates efficiently. For example, the FIR filtering modules use a partitioned block convolution, resulting in update latencies (at least for the early part of the filters) of a single audio block - which usually means a couple of milliseconds. Remotely updating long room impulse responses using Matlab can easily hit 1000 Hz update rates, which under normal circumstances is about three times more than a block-based streaming sound card provides - and far more than a dedicated graphics rendering processor achieves, which is often the driving part of scene modifications.<br />
However, this comes at a price: VA does not trade update rates for computational resources. Instead, it takes advantage of the general-purpose processing power available at present as well as more efficient software libraries. Limitations are imposed solely by the available processing capacity, not by the framework. Therefore, VA will plainly produce audio dropouts or complete silence if the computational power is not sufficient for rendering and reproducing the given scene with the configuration used. Simply put, if you request too much, VA will stop auralizing correctly. Usually, the number of paths between a sound source and a sound receiver can be reduced to an amount the system can process in real time. For example, a single binaural free field rendering can calculate roughly up to 20 paths in real-time on a modern PC, but for room acoustics with long reverberation times, a maximum of 6 sources and one listener is realistic (plus the necessity to simulate the sound propagation filters remotely). If reproduction of the rendered audio stream also requires intensive processing power, the numbers go further down.
</p>
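To put these numbers in perspective, the update interval of a block-based streaming sound card follows directly from block size and sample rate. A minimal sketch, assuming an illustrative block size of 128 samples at 44.1 kHz (plausible values, not VA defaults):

```python
# Latency contributed by one audio block of a block-based streaming sound card.
# Block size and sample rate are illustrative values, not VA defaults.
def block_latency_ms(block_size: int, sample_rate: int) -> float:
    """Duration of a single audio block in milliseconds."""
    return 1000.0 * block_size / sample_rate

def update_rate_hz(block_size: int, sample_rate: int) -> float:
    """How often per second the stream can pick up a filter update."""
    return sample_rate / block_size

print(block_latency_ms(128, 44100))  # roughly 2.9 ms - "a couple of milliseconds"
print(update_rate_hz(128, 44100))    # roughly 345 Hz - about a third of 1000 Hz
```

This illustrates why a 1000 Hz remote update rate comfortably outpaces what the streaming hardware itself can pick up per block.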
<h4>Why is VA a framework?</h4>
<td>BinauralAmbisonicsMixdown</td>
<td>Ambisonics any order</td>
<td>binaural two-channel</td>
<td>Calculates and applies gains for a loudspeaker setup using Higher Order Ambisonics methods, then spatialises the directions of the loudspeakers using HRTFs.</td>
</tr>
</tbody>
<tfoot>
<h4>Controlling VA</h4>
<p>
The first question is: what kind of software do you usually use? There are bindings that make the VA interface available in <b>Matlab</b>, <b>Python</b> and <b>Lua</b>, with rudimentary functionality in <b>C#</b>. While many in the acoustics research community prefer Matlab, Python - especially in combination with Jupyter notebooks - is the open-source way to conveniently use VA. C# is your choice if you are planning to use <a href="start.html#unity">VA for Unity environments</a>, which is probably the easiest entry for those who are not familiar with either Matlab or Python scripting.
<br /><br />
Let's create a simple example scene for a binaural rendering. It requires a running VA server application on the same PC (if you have <a href="download.html">extracted the Windows binaries</a>, double-click on the <code>run_VAServer.bat</code> file in the root folder of VA).<br />
</p>
<h5>Matlab</h5>
While the term <i>sound source</i> is self explanatory, VA uses the term <i>sound receiver</i> instead of listener.
The reason is that the term listener would reduce the receiving entity to living creatures, while in VA those <i>listeners</i> can also be virtual microphones or have a completely different meaning in other contexts. <br />
<i>Sound portals</i> are entities that pick up sound and transport, transform and/or propagate it to other portals or sound receivers. This concept is helpful for sound transmission handling in Geometrical Acoustics, for example when a door acts as a transmitting object between two rooms.<br />
It depends on the rendering module you use, but portals are mostly relevant in combination with geometry, say for room acoustics.
</p>
<h4>Auralization mode</h4>
<p>
Making acoustic effects audible is one of the central aspects of auralization. For research and demonstration purposes, it is helpful to switch certain acoustic phenomena on and off in a fraction of a second. This way, influences can be investigated intuitively.<br />
VA provides a set of phenomena that can be toggled, called auralization modes. Auralization modes can be controlled globally and for each sound source and sound receiver individually. If a renderer considers a given auralization mode, the corresponding processing is enabled or disabled based on the logical AND combination of the auralization modes (only if the auralization modes of source, receiver AND global settings are positive will the phenomenon be made audible).
<br />
Most of the auralization modes are only effective for certain rendering modules and are meaningless for others. For example, a free field renderer will only expose direct sound, source directivity and Doppler effect changes. All other phenomena are dismissed.
</p>
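The AND combination described above can be sketched in a few lines of Python. The mode names and the set-based representation are illustrative assumptions, not the actual VA API:

```python
# Sketch of the auralization mode logic: a phenomenon is audible only if it is
# enabled globally AND at the sound source AND at the sound receiver.
# Mode names and the set-based representation are illustrative, not VA's API.
def is_audible(phenomenon, global_modes, source_modes, receiver_modes):
    return (phenomenon in global_modes
            and phenomenon in source_modes
            and phenomenon in receiver_modes)

global_modes   = {"direct_sound", "doppler", "source_directivity"}
source_modes   = {"direct_sound", "doppler"}
receiver_modes = {"direct_sound", "source_directivity"}

print(is_audible("direct_sound", global_modes, source_modes, receiver_modes))  # True
print(is_audible("doppler", global_modes, source_modes, receiver_modes))       # False: disabled at the receiver
```

Toggling any one of the three flags is enough to silence the phenomenon, which is what makes quick A/B comparisons during demonstrations possible.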
<div class="table-wrapper">