Even more overview

<div class="container">
<header class="major">
<h2>Is Virtual Acoustics for me?</h2>
<p>Until today, Virtual Acoustics has been used for scientific research. It was designed for two main purposes: transparent, high-quality audio rendering for listening experiments and for Virtual Reality research. <br /><br />
From now on, VA is made available to the public and can be used for any given purpose.</p>
</header>
</div>
<h2>Overview</h2>
<p>Virtual Acoustics explained in a few words <br /> <span style="font-size: 0.6em">This content is available under <a href="http://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></span> </p>
</header>
<div class="row 150%">
<div class="4u 12u$(medium)">
<!-- Sidebar -->
<section id="sidebar">
<section>
<h3>Real-time auralization</h3>
<p>VA is a real-time auralization framework.</p>
<footer>
<ul class="actions">
<li><a href="#rtauralization" class="button">Learn More</a></li>
</ul>
</footer>
</section>
<hr />
<section>
<!--<a href="#redstart" class="image fit"><img src="images/pic06.jpg" alt="" /></a>-->
<h3>Audio rendering</h3>
<p>Audio rendering is the process of creating audio streams based on a virtual scene using DSP.</p>
<footer>
<ul class="actions">
<li><a href="#audiorendering" class="button">Learn More</a></li>
</ul>
</footer>
</section>
<section>
<h3>Audio reproduction</h3>
<p>Audio reproduction is the process of playing back audio streams using different techniques.</p>
<footer>
<ul class="actions">
<li><a href="#" class="button">Learn More</a></li>
</ul>
</footer>
</section>
<section>
<h3>Virtual Reality</h3>
<p>For true immersion, the acoustic modality cannot be neglected. VA can help.</p>
<footer>
<ul class="actions">
<li><a href="#" class="button">Learn More</a></li>
</ul>
</footer>
</section>
</section>
</div>
<div class="8u$ 12u$(medium) important(medium)">
<!-- Content -->
<section id="rtauralization">
<p>VA creates audible sound from a purely virtual situation. To do so, it uses digital input data that is pre-recorded, measured, modelled or simulated. Yet VA creates dynamic auditory worlds that can be explored interactively, because it accounts for modifications of the virtual situation. In the simplest case, this means that sound sources and listeners can move freely, and the sound changes accordingly. This real-time auralization approach can only be achieved if certain parts of the audio processing are updated continuously and fast. We call this audio rendering, and the output is an audio stream that represents the virtual situation. For more complex situations like rooms or outdoor worlds, the sound propagation becomes highly relevant and very complex. VA uses real-time simulation backends or simplified models to create a physics-based auditory impression.<br />
If update rates exceed certain perceptual thresholds, this method can readily be used in Virtual Reality applications.
</p>
<h4>Latency, real-time capability and resource management</h4>
<p>Input-output latency is crucial for any interactive application. VA tries to achieve minimal latency wherever possible, because the latencies of subsequent components add up. As long as latency is kept low, a human listener will not notice small delays during scene updates, resulting in a convincing live system where interaction directly leads to the expected effect (without waiting for the system to process).<br />
VA tries to achieve real-time capability by establishing data management and processing modules that are lightweight and handle updates efficiently. For example, the FIR filtering modules use a partitioned block convolution, resulting in update latencies (at least for the early part of the filters) within one single audio block - which usually means a couple of milliseconds. Remotely updating long room impulse responses using Matlab can easily hit update rates of 1000 Hz, which is about three times more than a block-based streaming sound card provides under normal circumstances (e.g. a block size of 128 samples at a sampling rate of 44100 Hz corresponds to roughly 345 blocks per second), and far more than a dedicated graphics rendering processor achieves, which is often the driving part of scene modifications.<br />
However, this comes at a price: VA does not trade update rates for lower computational load, and it will plainly produce audio dropouts or complete silence if the computational power is not sufficient for rendering and reproducing the given scene. Simply put, if you request too much, VA will stop auralizing. The number of paths between a sound source and a sound receiver that can effectively be processed is limited. For example, a single binaural free field rendering can calculate up to 20 paths in real time, but for room acoustics with long reverberation times, a maximum of 6 sources and one listener is realistic (requiring the sound propagation simulation to be processed remotely). If reproduction of the rendered audio stream also requires processing power, the numbers go down even further.
</p>
<h4>Why is VA a framework?</h4>
<table class="alt">
<thead>
<tr>
<th width="16%">Class name</th>
<th width="16%">Output stream</th>
<th>Description</th>
</tr>
</thead>
<table class="alt">
<thead>
<tr>
<th width="16%">Class name</th>
<th width="16%">Input stream</th>
<th width="16%">Output stream</th>
<th>Description</th>
</tr>
</thead>
</p>
<h5>Configuration using an INI file</h5>
<p>
You can do this by modifying the <code>*.ini</code> files in the <code>conf</code> folder and using the provided batch start scripts, which start the VA server with these configuration files. The <code>VACore.ini</code> controls the core parameters; the <code>VASetup.*.ini</code> files describe hardware devices and channel layouts and usually represent the static setup of a laboratory or the special setup of an experiment. They are included by a line in the <code>[Files]</code> section of the configuration file. Use <code>enabled = true</code> or <code>enabled = false</code> to activate or deactivate the instantiation of sections, i.e. rendering or reproduction modules and output groups.
</p>
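<p>
As a rough illustration, such a configuration file could contain sections like the following sketch. The section and key names here are made-up examples that only show the structure; please consult the configuration reference and the <code>VACore.ini</code> shipped with VA for the names your version actually expects.
</p>
<pre><code>[Files]
; include a hardware setup description (example file name)
MyLabSetup = VASetup.MyLab.ini

[Renderer:MyExampleRenderer]
; example section for a rendering module
enabled = true

[Reproduction:MyExampleHeadphones]
; example section for a reproduction module, currently deactivated
enabled = false</code></pre>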
<h5>Using search paths</h5>
<p>
In VA, we struggle a lot with this and it is a serious problem. Often, configurations and input data for scenes are created locally and are later transferred to a computer in the laboratory.
This computer is often not the computer that also controls the scene, because a remote network connection is used - which in consequence requires the files to be mirrored on the hard drive of that server PC. If no precaution is taken, this usually leads to a nerve-wracking trial-and-error process until all files are found - and mostly results in using absolute paths as the quick-and-dirty solution, because we are all very lazy and too busy to do it right.
<br />
<b>DO IT RIGHT</b> in this context means: <b>NEVER use absolute paths</b> in the first place. VA provides search path functionality, meaning it will find any relative file path with the smallest amount of help: you only have to provide one or more base paths in which to look for your input files.
<br />
<br />
<blockquote>
<b>Search path best practice:</b> <br /><br />
Put all your input data in one base folder, let's say <code>C:/Users/student54/Documents/BachelorThesis/3AFCTest/InputData</code>. In your <code>VACore.ini</code>, add a search path to this folder: <pre><code>[Paths]
studentlab_pc3_my_data = C:/Users/student54/Documents/BachelorThesis/3AFCTest/InputData</code></pre>
Let us assume you have some subfolders <code>trial1, trial2, ...</code> with WAV files and an HRIR dataset <code>Kemar_individualized.v17.ir.daff</code> in the root folder. You can load them using this pseudo code: <br />
<pre><code>HRIR_1 = va.CreateDirectivityFromFile( 'Kemar_individualized.v17.ir.daff' )
Sample_1_1 = va.CreateSignalSourceBufferFromFile( 'trial1/sample1.wav' )
Sample_1_2 = va.CreateSignalSourceBufferFromFile( 'trial1/sample2.wav' )
Sample_2_1 = va.CreateSignalSourceBufferFromFile( 'trial2/sample1.wav' )
...</code></pre>
When you now move to another computer in the laboratory (to conduct the listening experiment there), copy the entire <code>InputData</code> folder to the computer where the <u>VA server</u> will be running, for example to <code>D:/experiments/BA/student54/3AFCTest/InputData</code>. Now, all you have to do is add another search path to your <code>VACore.ini</code> configuration file, e.g. <br />
<pre><code>[Paths]
studentlab_pc3_my_data = C:/Users/student54/Documents/BachelorThesis/3AFCTest/InputData
hearingboth_pc_my_data = D:/experiments/BA/student54/3AFCTest/InputData</code></pre>
... and you will have no more trouble with paths. If applicable, you can also add search paths at runtime via the VA interface using the <code>AddSearchPath</code> function.
</blockquote>
</p>
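<p>
With the Matlab binding used in the example below, adding such a search path at runtime could look like this one-liner (the folder is the hypothetical laboratory path from above):
</p>
<pre><code>va.addSearchPath( 'D:/experiments/BA/student54/3AFCTest/InputData' )</code></pre>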
<h4>Controlling VA</h4>
<pre><code>
va.connect()
va.reset()
va.addSearchPath( pwd )
X = va.createSignalSourceBufferFromFile( 'ita_demosound.wav' )
va.setSignalSourceBufferPlaybackAction( X, 'play' )
va.setSignalSourceBufferLooping( X, true );
S = va.createSoundSource( 'itaVA_Source' )
va.setSoundSourcePose( S, [ -2 1.7 -2 ], [ 0 0 0 1 ] )
va.setSoundSourceSignalSource( S, X )
H = va.CreateDirectivityFromFile( 'NeumannKU100.v17.ir.daff' )
L = va.createSoundReceiver( 'itaVA_Listener' )
va.setSoundReceiverPose( L, [0 1.7 0], [ 0 0 0 1 ] )
va.setSoundReceiverDirectivity( L, H )
va.setActiveListener( L )
va.disconnect()</code></pre>
<h4>Sound sources, sound receivers and sound portals</h4>
<p>
In VA, you will find three different virtual entities that represent sound objects.<br />
While the term <i>sound source</i> is self-explanatory, VA uses the term <i>sound receiver</i> instead of listener.
The reason is that a listener would reduce the receiving entity to a living creature, while in VA such <i>receivers</i> can also be virtual microphones or have a completely different meaning in other contexts. <br />
<i>Sound portals</i> are entities that pick up sound and transport, transform and/or propagate it to other portals or sound receivers. This concept is helpful for sound transmission handling in Geometrical Acoustics, for example if a door acts as a transmitting object between two rooms.<br />
It depends on the rendering module you use, but portals are mostly relevant in combination with geometry, say for room acoustics.
</p>
<h4>Auralization mode</h4>
<p>
Making acoustic effects audible is one of the central aspects of auralization. For research and demonstration purposes, it is helpful to switch certain acoustic phenomena on and off in a fraction of a second. This way, influences can be investigated intuitively.<br />
VA provides a set of phenomena that can be toggled; they are called auralization modes. Auralization modes can be controlled globally and for each sound source and sound receiver individually. If a renderer considers a given auralization mode, the corresponding processing is enabled or disabled based on the logical AND combination of the auralization modes (only if the auralization modes of the source, the receiver AND the global settings are positive will the phenomenon be made audible).
<br />
Most of the auralization modes are only effective for certain rendering modules and are meaningless for others. For example, a free field renderer will only expose direct sound, source directivity and Doppler effect changes. All other phenomena are ignored.
</p>
<div class="table-wrapper">
<table class="alt">
<thead>
<tr>
<th width="16%">Name</th>
<th width="8%">Acronym</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Direct sound</td>
<td>DS</td>
<td>Direct sound path between a sound source and a sound receiver</td>
</tr>
<tr>
<td>Early reflections</td>
<td>ER</td>
<td>Specular reflections off walls that correspond to the early arrival time of a complex source-receiver pair.</td>
</tr>
<tr>
<td>Diffuse decay</td>
<td>DD</td>
<td>Diffuse decay part of the arrival time of a complex source-receiver pair. Mostly used in the context of room acoustics.</td>
</tr>
<tr>
<td>Source directivity</td>
<td>SD</td>
<td>Sound source directivity function, the angle-dependent radiation pattern of an emitter.</td>
</tr>
<tr>
<td>Medium absorption</td>
<td>MA</td>
<td>Acoustic energy attenuation due to the absorbing capability of the medium.</td>
</tr>
<tr>
<td>Temporal variation</td>
<td>TV</td>
<td>Statistics-driven fluctuation of sound resulting from turbulence and time-variance of the medium (the atmosphere).</td>
</tr>
<tr>
<td>Scattering</td>
<td>SC</td>
<td>Diffuse scattering off non-planar surfaces.</td>
</tr>
<tr>
<td>Diffraction</td>
<td>DF</td>
<td>Diffraction off and around obstacles.</td>
</tr>
<tr>
<td>Near field</td>
<td>NF</td>
<td>Acoustic phenomena caused by near field effects (in contrast to far field assumptions).</td>
</tr>
<tr>
<td>Doppler</td>
<td>DP</td>
<td>Doppler frequency shifts based on relative distance changes.</td>
</tr>
<tr>
<td>Spreading loss</td>
<td>SL</td>
<td>Distance-dependent spreading loss, e.g. for spherical waves. Also called the 1/r law or (inverse) distance law.</td>
</tr>
<tr>
<td>Transmission</td>
<td>TR</td>
<td>Transmission of sound energy through solid structures like walls and flanking paths.</td>
</tr>
<tr>
<td>Absorption</td>
<td>AB</td>
<td>Sound absorption by material.</td>
</tr>
</tbody>
<tfoot>
<tr>
<td colspan="3">Table 3: currently recognized auralization modes</td>
</tr>
</tfoot>
</table>
</div>
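<p>
In the pseudo code style of the example above, toggling auralization modes could look like the following sketch. The function names and the format of the mode string are assumptions for illustration and may differ in your VA binding; here, the acronyms from Table 3 are passed as a comma-separated string, and <code>S</code> and <code>L</code> are the sound source and sound receiver identifiers from the example above.
</p>
<pre><code>% hypothetical calls, names may differ in your VA binding
va.setGlobalAuralizationMode( 'DS, ER, DD, SD, DP, SL' )      % global switchboard
va.setSoundSourceAuralizationMode( S, 'DS, SD, DP, SL' )      % per-source setting
va.setSoundReceiverAuralizationMode( L, 'DS, ER, DD' )        % per-receiver setting
% a phenomenon is only audible if it is enabled globally AND for the source AND for the receiver</code></pre>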
<h4>Signal sources</h4>
<p>
VA differentiates between a sound source and a signal source. On the one hand, we have a sound source, which is an acoustic entity that emits sound. On the other hand, we speak of a <i>sound signal</i>, which is the acoustic information emitted by such a sound source. Hence, a sound source is always connected to a signal source. For example, a piano is a sound source; the music played on its keys is called the <i>source signal</i> and can vary depending on the piece or the interpretation of the artist. <br />
VA provides a set of different signal source types. Most of the time, buffers are used, which are populated with pre-recorded audio samples by loading WAV files from the hard drive. These buffer signal sources can be started, paused and stopped, and they can be set to loop mode.<br />
Apart from buffers, there is also the possibility to connect a microphone input channel of the audio device. More specialized signal sources are speech synthesized from text input and machines with a start, idle and stop sound. Finally, you can connect your own signal source by providing a network client that feeds audio samples, or register a signal source using the local interface directly (both are in an experimental stage, though).
</p>
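<p>
Using the buffer functions from the example above, controlling playback of such a signal source could look like this sketch. The action strings <code>'pause'</code> and <code>'stop'</code> are assumed here in analogy to <code>'play'</code> and may be named differently in your VA version.
</p>
<pre><code>X = va.createSignalSourceBufferFromFile( 'trial1/sample1.wav' )
va.setSignalSourceBufferLooping( X, true )            % repeat the sample until stopped
va.setSignalSourceBufferPlaybackAction( X, 'play' )
% ... later, e.g. between two trials of a listening experiment
va.setSignalSourceBufferPlaybackAction( X, 'pause' )
va.setSignalSourceBufferPlaybackAction( X, 'stop' )</code></pre>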
<h4>Directivities (including HRTFs and HRIRs)</h4>
<p>
The sound radiation pattern of a sound source is usually described by a directional function that depends on wavelength or frequency. This function is generally called <i>directivity</i> and is commonly used in the context of musical instruments. The underlying functions can be of various types, ranging from sampled directions on a regular or irregular spherical grid to sets of fundamental functions that are weighted by coefficients, such as spherical harmonics. VA supports individual implementations of those directivities, and it is up to the rendering modules to account for the different types (and for near field effects or distance dependencies).<br />
To maintain a general approach to this topic, sound receivers can be assigned directivities, too. Due to the reciprocal nature of acoustic propagation and the fact that one can model sound transmission by means of linear time-invariant systems for the majority of applications, this approach is equally valid for sound receivers. In the context of binaural technology, a sound receiver translates to a <i>listener</i> and the assigned directivity is called a <i>head-related transfer function</i> or <i>head-related impulse response</i>. The HRTF or HRIR is applied to the sound receiver in the same way a sound source directivity would be used.
</p>
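<p>
In the pseudo code style of the example above, assigning a directivity to a source and an HRIR dataset to a receiver could look like the following sketch. The directivity file name is hypothetical, and the source-side setter is an assumption in analogy to the receiver-side call shown earlier; <code>S</code> and <code>L</code> are again the identifiers from the example above.
</p>
<pre><code>D = va.CreateDirectivityFromFile( 'trumpet_directivity.v17.ms.daff' )   % hypothetical directivity file
H = va.CreateDirectivityFromFile( 'NeumannKU100.v17.ir.daff' )
va.setSoundSourceDirectivity( S, D )      % assumed setter, analogous to the receiver call
va.setSoundReceiverDirectivity( L, H )</code></pre>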
<h4>Geometry meshes and acoustic materials</h4>
<p>
Geometry-aware audio rendering is the holy grail of physics-based real-time auralization using geometrical acoustics simulation. It requires sophisticated algorithms and powerful backend processing to achieve real-time capability.
VA tries to support this by providing a simple geometry mesh class and interfaces to load and transmit geometry data. However, it is up to the implementation of the rendering modules what to do with the data.
Faces of meshes are assigned acoustic materials, such as absorption, scattering and transmission coefficients. These are, for example, used (or transformed and forwarded) by special rendering instances, like the binaural room acoustics audio renderer.
</p>
</p>
</section>
</div>
</div>
</div>
</div>