Elaboration on offline simulation configuration

parent 074e8fff
DefaultRelativeHumidity = 20.0 # [Percent]
DefaultShiftSpeed = 0.0, 0.0, 0.0 # 3D vector in m/s</code></pre>
<h4 id="configuration_rendering">Rendering module configuration</h4>
<p>
To instantiate a rendering module, a section whose name starts with the <code>Renderer:</code> prefix has to be included. The statement following the colon is the unique identifier of this rendering instance; if you want to change parameters during execution, this identifier is required to address the instance. All renderers share some obligatory definitions, but each rendering class comes with its own specific parameter set that is described in detail separately. For typical renderers, some examples are given below.
</p>
<h5>Required rendering module parameters</h5>
<p>
<pre><code>Class = RENDERING_CLASS
Reproductions = REPRODUCTION_INSTANCE(S)</code></pre>
The rendering class refers to the type of renderer which can be taken from the tables in the <a href="overview.html#rendering">overview</a> section.<br />
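A complete section combining these parameters might look like the following sketch (the class name <code>BinauralFreeField</code> and the instance identifiers are assumptions chosen for illustration; the available rendering classes are listed in the <a href="overview.html#rendering">overview</a> section):
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Reproductions = MyTalkthroughHeadphones</code></pre>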
Reproductions = MyTalkthroughHeadphones</code></pre>
<h4 id="configuration_reproduction">Reproduction module configuration</h4>
<p>
To instantiate a reproduction module, a section whose name starts with the <code>Reproduction:</code> prefix has to be included. The statement following the colon is the unique identifier of this reproduction instance; if you want to change parameters during execution, this identifier is required to address the instance. All reproduction modules share some obligatory definitions, but each reproduction class comes with its own specific parameter set that is described in detail separately. For typical reproduction modules, some examples are given below.
</p>
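By analogy with the rendering modules, a minimal reproduction section might look like the following sketch (the class name <code>Talkthrough</code> and the identifier are assumptions chosen for illustration):
<pre><code>[Reproduction:MyTalkthroughHeadphones]
Class = Talkthrough</code></pre>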
<h5>Required reproduction module parameters</h5>
<p>
<h3>Simulation and recording</h3>
<p>
As already pointed out, VA can be used to record simulated acoustic environments. The only requirement is to activate the output recording flag in the configuration and add a target file path where the recordings should be stored, as described in the <a href="#configuration_rendering">rendering</a> and <a href="#configuration_reproduction">reproduction</a> module setup sections. Outputs from the rendering modules can be used to record spatial audio samples (like binaural clips or Ambisonics B-format / HOA tracks). Outputs from reproductions can be used for offline playback over the given loudspeaker setup for (audio-visual) demonstrations or for non-interactive listening experiments.<br /><br />
Two different approaches can be used:
<ul>
<li><strong>capturing the real-time audio streams</strong></li>
<li><strong>emulating a sound card</strong> and <strong>processing the audio stream offline</strong></li>
</ul>
</p>
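<p>
A hedged sketch of such a recording flag inside a renderer section is shown below; the key names <code>RecordOutputEnabled</code> and <code>RecordOutputFilePath</code>, the class, the identifiers and the file name are assumptions for illustration, and the authoritative parameter names are given in the rendering and reproduction setup sections:
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Reproductions = MyTalkthroughHeadphones
RecordOutputEnabled = true
RecordOutputFilePath = recording.wav</code></pre>
</p>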
<h4>Capturing the output of rendering and reproduction modules in real-time</h4>
<p>If you want to capture the real-time output of rendering and reproduction modules, VA is driven by the sound card, and live updates, like tracking device data or HMD movements, are effective. The scene is updated according to the user interaction and is bound to the update rate of the tracking device or the control loop timeout.<br />
This approach is helpful for recording simple scenes with synchronized audio-visual content, e.g. when using <u>Unity3D and preparing a video for demonstration purposes</u>.
</p>
<h4>Capturing the output of rendering and reproduction modules by an emulated virtual audio device (offline mode)</h4>
<p>
For <u>sound field simulations</u> with high-precision timing for physics-based audio rendering, combining many scene adaptations with a very small audio processing block size is a wise decision. For this purpose, one should switch to the offline simulation capability of VA.<br />
A virtual sound device can be activated that suspends the timeout-driven block processing and puts the audio processing speed into the user's hands. This allows the scene to be changed without time limits, and every audio processing block can be triggered to process the entire updated scene, no matter how small the block size (in real-time mode, typically around ten audio blocks are processed during a single scene update, for example a translation of the listener triggered by a tracking device). Additionally, there is no hardware constraint, as the rendering and reproduction calculations are not bound to deliver real-time update rates. This makes it possible to set up scenes of arbitrary complexity at the cost of a longer calculation of the processing chain, including rendering, reproduction, recording and export, to generate the audio files.
</p>
<h5>Virtual sound card audio driver configuration</h5>
<p>
To enable the emulated sound card and set it up for the Matlab example scripts <code>itaVA_example_offline_simulation.m</code> and <code>itaVA_example_offline_simulation_ir.m</code>, modify your configuration as follows:
<pre><code>[Audio driver]
Driver = Virtual
Device = Trigger
Samplerate = 44100
Buffersize = 64
Channels = 2
</code></pre>
</p>
<h5>Controlling the virtual card audio processing (<code>Matlab</code> example)</h5>
<p>
To advance the internal timing, set the clock forward by the duration of the blocks to be processed, e.g.<br />
<pre><code>% Clock increment of 64 samples at a sampling rate of 44.1kHz
manual_clock = manual_clock + 64 / 44100;
va.call_module( 'manualclock', struct( 'time', manual_clock ) );</code></pre>
To trigger a new audio block to be processed, run<br /><br />
<pre><code>% Process audio chain by incrementing one block
va.call_module( 'virtualaudiodevice', struct( 'trigger', true ) );
</code></pre>
<br />
These increments are usually executed at the end of a simulation processing loop; any scene change made before the trigger will be effectively auralized. For example, to implement a dynamic room acoustics situation with an animation path, a generic path renderer can be used, and a full room acoustics simulation of 10 seconds runtime can be executed prior to each filter exchange, making every simulation step change audible.
</p>
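<p>
Putting the clock increment and the trigger together, a minimal offline processing loop could look like the following Matlab sketch; the scene update call <code>set_sound_source_position</code> and the variables <code>source_id</code> and <code>traj</code> are assumptions for illustration and depend on your scene setup:
<pre><code>% Offline simulation loop sketch: assumes a connected VA instance 'va',
% a virtual audio device with 64 samples block size at 44.1 kHz,
% and a source trajectory 'traj' (N x 3 matrix of positions)
manual_clock = 0;
for n = 1:size( traj, 1 )
    % scene updates prior to the trigger are effectively auralized
    va.set_sound_source_position( source_id, traj( n, : ) );

    % advance the virtual sound card clock by one block
    manual_clock = manual_clock + 64 / 44100;
    va.call_module( 'manualclock', struct( 'time', manual_clock ) );

    % process exactly one audio block
    va.call_module( 'virtualaudiodevice', struct( 'trigger', true ) );
end</code></pre>
</p>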
</section>