Preps for website update and v2019a upgrade

parent e79f908a
Copyright 2015-2018 Institute of Technical Acoustics (ITA), RWTH Aachen University
Copyright 2015-2019 Institute of Technical Acoustics (ITA), RWTH Aachen University
Creative Commons Attribution 3.0 Unported
http://creativecommons.org/licenses/by/3.0/
@@ -15,7 +15,7 @@ The Institute of Technical Acoustics (ITA) at RWTH Aachen University is responsi
### License
Copyright 2015-2018 Institute of Technical Acoustics (ITA), RWTH Aachen University.
Copyright 2015-2019 Institute of Technical Acoustics (ITA), RWTH Aachen University.
Creative Commons Attribution 4.0 International
http://creativecommons.org/licenses/by/4.0/
@misc{ita2018va,
@misc{ita2019va,
author = "{Institute of Technical Acoustics, RWTH Aachen University}",
keywords = {auralization,real-time signal processing,virtual reality},
title = {{Virtual Acoustics - A real-time auralization framework for scientific research}},
howpublished = {\url{http://www.virtualacoustics.org/}},
note = {Accessed on 2018-03-16},
year = {2018},
note = {Accessed on 2019-07-14},
year = {2019},
}
TY - ICOMM
T1 - Virtual Acoustics - A real-time auralization framework for scientific research
A1 - Institute of Technical Acoustics, RWTH Aachen University
Y1 - 2018///
Y1 - 2019///
KW - auralization
KW - real-time signal processing
KW - virtual reality
N1 - Accessed on 2018-03-16
N1 - Accessed on 2019-07-14
ER -
<?xml version="1.0" encoding="UTF-8"?><xml><records><record><database name="Virtual_Acoustics.enl" path="Virtual_Acoustics.enl">Virtual_Acoustics.enl</database><ref-type name="Web Page">16</ref-type><contributors><authors><author>Institute of Technical Acoustics, RWTH Aachen University</author></authors></contributors><titles><title>Virtual Acoustics - A real-time auralization framework for scientific research</title></titles><periodical/><keywords><keyword>auralization</keyword><keyword>real-time signal processing</keyword><keyword>virtual reality</keyword></keywords><dates><year>2018</year></dates><notes>Accessed on 2018-03-16</notes><research-notes>Accessed on 2018-03-16</research-notes><urls/></record></records></xml>
<?xml version="1.0" encoding="UTF-8"?><xml><records><record><database name="Virtual_Acoustics.enl" path="Virtual_Acoustics.enl">Virtual_Acoustics.enl</database><ref-type name="Web Page">16</ref-type><contributors><authors><author>Institute of Technical Acoustics, RWTH Aachen University</author></authors></contributors><titles><title>Virtual Acoustics - A real-time auralization framework for scientific research</title></titles><periodical/><keywords><keyword>auralization</keyword><keyword>real-time signal processing</keyword><keyword>virtual reality</keyword></keywords><dates><year>2019</year></dates><notes>Accessed on 2019-07-14</notes><research-notes>Accessed on 2019-07-14</research-notes><urls/></record></records></xml>
@@ -9,7 +9,7 @@
\maketitle
\noindent Virtual Acoustics (VA) is a real-time auralization framework for scientific research~\cite{ita2018va}.\\
\noindent Virtual Acoustics (VA) is a real-time auralization framework for scientific research~\cite{ita2019va}.\\
\bibliographystyle{abbrv}
\bibliography{Virtual_Acoustics}
@@ -19,59 +19,51 @@
<!-- Header -->
<header id="header">
<h1 id="logo"><a href="index.html">Start</a></h1>
<h1 id="logo"><a href="index.html">Home</a></h1>
<nav id="nav">
<ul>
<li><a href="overview.html">Overview</a></li>
<li><a href="download.html">Download</a></li>
<li><a href="documentation.html">Documentation</a>
<ul>
<li><a href="documentation.html#configuration">Configuration</a></li>
<li><a href="documentation.html#control">Control</a></li>
<li><a href="documentation.html#scene_handling">Scene handling</a></li>
<li><a href="documentation.html#rendering">Audio rendering</a></li>
<li><a href="documentation.html#reproduction">Audio reproduction</a></li>
<li><a href="documentation.html#tracking">Tracking</a></li>
<li><a href="documentation.html#simulation_recording">Simulation and recording</a></li>
<li><a href="documentation.html#examples">Examples</a></li>
</ul>
</li>
<li>
<a href="#">Quick access</a>
<a href="support.html">Support</a>
<ul>
<li><a href="overview.html">Overview</a></li>
<li><a href="download.html">Download</a></li>
<li><a href="documentation.html">Documentation</a></li>
<li>
<a href="start.html">Getting started</a>
<ul>
<li><a href="start.html#configuration">Configuration</a></li>
<li><a href="start.html#control">Control</a></li>
<li><a href="start.html#scene_handling">Scene handling</a></li>
<li><a href="start.html#rendering">Audio rendering</a></li>
<li><a href="start.html#reproduction">Audio reproduction</a></li>
<li><a href="start.html#tracking">Tracking</a></li>
<li><a href="start.html#simulation_recording">Simulation and recording</a></li>
<li><a href="start.html#examples">Examples</a></li>
</ul>
</li>
<li>
<a href="help.html">Get help</a>
<ul>
<li><a href="help.html#faq">FAQ</a></li>
<li><a href="help.html#issue_tracker">Issue tracker</a></li>
<li><a href="help.html#community">Community</a></li>
<li><a href="help.html#nosupport">No support</a></li>
</ul>
</li>
<li>
<a href="developers.html">Developers</a>
<ul>
<li><a href="developers.html#api">C++ API</a></li>
<li><a href="developers.html#dependencies">Dependencies</a></li>
<li><a href="developers.html#configuration">Configuration</a></li>
<li><a href="developers.html#build_guide">Build guide</a></li>
<li><a href="developers.html#repositories">Repositories</a></li>
</ul>
</li>
<li>
<a href="research.html">Research</a>
<ul>
<li><a href="research.html#system">System papers</a></li>
<li><a href="research.html#technology">Technology papers</a></li>
<li><a href="research.html#applied">Applied papers</a></li>
</ul>
</li>
<li><a href="support.html#faq">FAQ</a></li>
<li><a href="support.html#issue_tracker">Issue tracker</a></li>
<li><a href="support.html#community">Community</a></li>
<li><a href="support.html#nosupport">No support</a></li>
</ul>
</li>
<li>
<a href="developers.html">Developers</a>
<ul>
<li><a href="developers.html#api">C++ API</a></li>
<li><a href="developers.html#dependencies">Dependencies</a></li>
<li><a href="developers.html#configuration">Configuration</a></li>
<li><a href="developers.html#build_guide">Build guide</a></li>
<li><a href="developers.html#repositories">Repositories</a></li>
</ul>
</li>
<li>
<a href="research.html">Research</a>
<ul>
<li><a href="research.html#system">System papers</a></li>
<li><a href="research.html#technology">Technology papers</a></li>
<li><a href="research.html#applied">Applied papers</a></li>
</ul>
</li>
<li><a href="legal.html">Legal notice</a></li>
<!--<li><a href="#" class="button special">Sign Up</a></li>-->
</ul>
</nav>
</header>
@@ -107,12 +99,13 @@
<p>&nbsp;</p>
<h3>C++ code documentation</h3>
<p>
Virtual Acoustics is entirely written in C++. However some bindings for other languages like Python and Matlab exist. They have been brought into existence in the spirit of making VA available for a broader user base that is not familiar with C++ programming. For those who are not familiar with C++, see <a href="start.html">getting started section</a>). If you want to embrace VA in your software (instead of only controlling it from the outside), here is what you need to know:<br />
Virtual Acoustics is entirely written in C++. However, some bindings for other languages such as Python and Matlab exist. They were created in the spirit of making VA available to a broader user base that is not familiar with C++ programming. If you are not familiar with C++, see the <a href="documentation.html">getting started section</a>. If you want to embrace VA in your software (instead of only controlling it from the outside), here is what you need to know:<br />
</p>
<h4>C++ API documentation (generated with Doxygen)</h4>
<ul>
<a href="resources/v2018b/doc/html/index.html">Virtual Acoustics C++ API v2018b</a> (current version)<br />
<a href="resources/v2019a/doc/html/index.html">Virtual Acoustics C++ API v2019a</a> (current version)<br />
<a href="resources/v2018b/doc/html/index.html">Virtual Acoustics C++ API v2018b</a><br />
<a href="resources/v2018a/doc/html/index.html">Virtual Acoustics C++ API v2018a</a><br />
</ul>
</section>
@@ -954,7 +947,7 @@ ITA_FFT_WITH_FFTW3 (recommended)</code></pre>
<h4>Get help</h4>
<p>
If you are stuck, get help from the <a href="help.html#community">community</a>.
If you are stuck, get help from the <a href="support.html#community">community</a>.
</p>
</section>
@@ -19,59 +19,51 @@
<!-- Header -->
<header id="header">
<h1 id="logo"><a href="index.html">Start</a></h1>
<h1 id="logo"><a href="index.html">Home</a></h1>
<nav id="nav">
<ul>
<li><a href="overview.html">Overview</a></li>
<li><a href="download.html">Download</a></li>
<li><a href="documentation.html">Documentation</a>
<ul>
<li><a href="documentation.html#configuration">Configuration</a></li>
<li><a href="documentation.html#control">Control</a></li>
<li><a href="documentation.html#scene_handling">Scene handling</a></li>
<li><a href="documentation.html#rendering">Audio rendering</a></li>
<li><a href="documentation.html#reproduction">Audio reproduction</a></li>
<li><a href="documentation.html#tracking">Tracking</a></li>
<li><a href="documentation.html#simulation_recording">Simulation and recording</a></li>
<li><a href="documentation.html#examples">Examples</a></li>
</ul>
</li>
<li>
<a href="support.html">Support</a>
<ul>
<li><a href="support.html#faq">FAQ</a></li>
<li><a href="support.html#issue_tracker">Issue tracker</a></li>
<li><a href="support.html#community">Community</a></li>
<li><a href="support.html#nosupport">No support</a></li>
</ul>
</li>
<li>
<a href="developers.html">Developers</a>
<ul>
<li><a href="developers.html#api">C++ API</a></li>
<li><a href="developers.html#dependencies">Dependencies</a></li>
<li><a href="developers.html#configuration">Configuration</a></li>
<li><a href="developers.html#build_guide">Build guide</a></li>
<li><a href="developers.html#repositories">Repositories</a></li>
</ul>
</li>
<li>
<a href="#">Quick access</a>
<a href="research.html">Research</a>
<ul>
<li><a href="overview.html">Overview</a></li>
<li><a href="download.html">Download</a></li>
<li><a href="documentation.html">Documentation</a></li>
<li>
<a href="start.html">Getting started</a>
<ul>
<li><a href="start.html#configuration">Configuration</a></li>
<li><a href="start.html#control">Control</a></li>
<li><a href="start.html#scene_handling">Scene handling</a></li>
<li><a href="start.html#rendering">Audio rendering</a></li>
<li><a href="start.html#reproduction">Audio reproduction</a></li>
<li><a href="start.html#tracking">Tracking</a></li>
<li><a href="start.html#simulation_recording">Simulation and recording</a></li>
<li><a href="start.html#examples">Examples</a></li>
</ul>
</li>
<li>
<a href="help.html">Get help</a>
<ul>
<li><a href="help.html#faq">FAQ</a></li>
<li><a href="help.html#issue_tracker">Issue tracker</a></li>
<li><a href="help.html#community">Community</a></li>
<li><a href="help.html#nosupport">No support</a></li>
</ul>
</li>
<li>
<a href="developers.html">Developers</a>
<ul>
<li><a href="developers.html#api">C++ API</a></li>
<li><a href="developers.html#dependencies">Dependencies</a></li>
<li><a href="developers.html#configuration">Configuration</a></li>
<li><a href="developers.html#build_guide">Build guide</a></li>
<li><a href="developers.html#repositories">Repositories</a></li>
</ul>
</li>
<li>
<a href="research.html">Research</a>
<ul>
<li><a href="research.html#system">System papers</a></li>
<li><a href="research.html#technology">Technology papers</a></li>
<li><a href="research.html#applied">Applied papers</a></li>
</ul>
</li>
<li><a href="research.html#system">System papers</a></li>
<li><a href="research.html#technology">Technology papers</a></li>
<li><a href="research.html#applied">Applied papers</a></li>
</ul>
</li>
<li><a href="legal.html">Legal notice</a></li>
<!--<li><a href="#" class="button special">Sign Up</a></li>-->
</ul>
</nav>
</header>
@@ -81,70 +73,1427 @@
<div id="main" class="wrapper style1">
<div class="container">
<header class="major">
<h2>Documentation section</h2>
<p>Principles and design of the Virtual Acoustics framework <br />
<h2>Documentation</h2>
<p>Auralization with Virtual Acoustics <br />
<span style="font-size: 0.6em">This content is available under <a href="http://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></span>
</p>
</header>
<!-- Content -->
<section id="preface">
<h3>Preface</h3>
<p>Virtual Acoustics is a powerful tool for the auralization of virtual acoustic scenes and their reproduction. Getting started with VA involves three important steps:
</p>
<ul><li><strong>Configuring the application</strong></li><li><strong>Controlling the core</strong></li><li><strong>Setting up a scene</strong></li></ul>
<p>
The overall design goal was to keep things as simple as possible. Some aspects, however, are complex by nature and cannot be simplified further. VA addresses professionals and is mainly used by scientists. Important features are never traded for convenience if the system's integrity is at stake. Hence, getting the most out of VA requires a profound understanding of the technologies involved. VA is designed to offer the highest flexibility, which comes at the price of a demanding configuration. At the beginning, configuring VA is not trivial, especially if a loudspeaker-based audio reproduction shall be used.<br /><br />
The users of VA can often be divided into two groups:
</p>
<ul><li><strong>those who seek quick experiments with spatial audio and are happy with conventional playback over headphones</strong></li>
<li><strong>those who want to employ VA with a sophisticated loudspeaker setup for (multi-modal) listening experiments and Virtual Reality applications</strong></li></ul>
<p>
For the first group of users, there are some simple setups that will already suffice for most of what you aspire to. Such setups include, for example, a configuration for binaural audio rendering over a non-equalized off-the-shelf pair of headphones. Another configuration example contains a self-crafted interactive rendering application that exchanges pre-recorded or simulated FIR filters using Matlab or Python scripts for different purposes such as room acoustic simulations, building acoustics, or A/B live switching tests to assess the influence of equalization. The configuration effort is minimal and works out of the box if you use the Redstart applications or start a VA command line server with the corresponding core configuration file. If you consider yourself part of this group of users, skip the configuration part and <a href="#examples">have a look at the examples</a>. Thereafter, read the <a href="#control">control section</a> and the <a href="#scene_handling">scene handling section</a>. Additional examples are provided by the <a href="http://www.ita-toolbox.org/">ITA Toolbox</a> (see folder <...>\applications\VirtualAcoustics\VA).<br />
<br />
If you are willing to dive deeper into the VA framework, you are probably interested in how to adapt the software package for your purposes. The following sections describe how to set up VA for your goal from the very beginning.
</p>
</section>
<hr />
<section id="configuration">
<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
<!--<a href="#" class="image fit"><img src="images/pic05.jpg" alt="" /></a>-->
<h3>Virtual Acoustics configuration</h3>
<p>VA can be configured using a section-based key-value parameter collection that is passed to the core instance during initialization. This is usually done by providing a path to a text-based INI file, which will be referred to as <code>VACore.ini</code> but can have an arbitrary name. If you use the <code>VAServer</code> application, you will work with this file only. If you only use the <code>Redstart</code> GUI application, you will probably never touch it. However, the INI file can be exported from a Redstart session in case you need it.
</p>
<h4>Basic configuration</h4>
<h5>Paths</h5>
<p>The <code>Paths</code> section allows for adding search paths to the core. If resources like head-related transfer functions (HRTFs), geometry files, or audio files are required, these search paths ensure that the requested files can be located. Relative paths are resolved from the execution folder where the VA server application is started. When using the provided batch start scripts on Windows, it is recommended to add the <code>data</code> and <code>conf</code> folders.</p>
<p>
<pre><code>[Paths]
data = data
conf = conf
my_data = C:/Users/Me/Documents/AuralizationData
my_other_data = /home/me/auralization/input
</code></pre>
</p>
<h5>Files</h5>
<p>In the <code>Files</code> section, you can name files that will be included as further configuration files. This is helpful when certain configuration sections must be <i>outsourced</i> to be reused efficiently. Outsourcing is especially convenient when switching between static sections like hardware descriptions for laboratories or setups, but can also be used for rendering and reproduction modules (see below). Avoid copying larger configuration sections that are re-used frequently. Use different configuration files, instead.
<p>
<pre><code>[Files]
old_lab = VASetup.OldLab.Loudspeakers.ini
#new_lab = VASetup.NewLab.Loudspeakers.ini
</code></pre>
</p>
<h5>Macros</h5>
<p>The <code>Macros</code> section is helpful for writing tidy scripts. Use macros whenever no specific input file is explicitly required. For example, if any HRTF can be used for a receiver in the virtual scene, the <code>DefaultHRIR</code> macro will point to the default HRTF data set, or head-related impulse response (HRIR) in the time domain. Each defined macro is replaced with its value by the core.<br />
Usage: "$(MyMacroName)/file.abc" -> "MyValue/file.abc"<br />
Macros are substituted forwardly by key name order (use with care) and otherwise stay untouched: A = B; C = $(A) -> $(C) is B<br />
The example macros provided below are a good-practice set that should be present in a configuration file in order to keep the example scripts valid.<br />
Macros are also very helpful if certain exported file prefixes are desired, e.g., to get better structured file names for input and output recordings.
<p>
<pre><code>[Macros]
DefaultHRIR = HRIR/ITA-Kunstkopf_HRIR_AP11_Pressure_Equalized_3x3_256.v17.ir.daff
HumanDir = Directivity/Singer.v17.ms.daff
Trumpet = Directivity/Trumpet1.v17.ms.daff
# Define some other macros (examples)
ProjectName = MyVirtualAcousticsProject
</code></pre>
</p>
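The substitution rules described above can be illustrated with a short, self-contained Python sketch. This is not VA code, only a model of the documented behavior (forward substitution by key name order, unknown macros left untouched):

```python
def expand_macros(macros):
    """Resolve $(Name) references between macro values, substituting
    forwardly by key name order; unknown macros stay untouched."""
    resolved = dict(macros)
    for name in sorted(resolved):  # forward, by key name order
        for other in resolved:
            resolved[other] = resolved[other].replace("$(" + name + ")", resolved[name])
    return resolved

def apply_macros(text, macros):
    """Substitute resolved macro values into an arbitrary string."""
    for name, value in expand_macros(macros).items():
        text = text.replace("$(" + name + ")", value)
    return text

print(apply_macros("$(MyMacroName)/file.abc", {"MyMacroName": "MyValue"}))  # -> MyValue/file.abc
print(expand_macros({"A": "B", "C": "$(A)"})["C"])                          # -> B
```

Note how the second call reproduces the A = B; C = $(A) example from above, and how an undefined macro such as $(Unknown) would simply pass through unchanged.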
<h5>Debug</h5>
<p>The <code>Debug</code> section configures the initial behavior of the core, for example, the log level and input/output recording. If input or output recording is enabled, all available channels of your physical or abstract device will be recorded. For devices with many digital inputs and outputs, the channel count may reach up to 256 channels, the maximum number of channels defined by the WAV format. Additionally, the data is stored as PCM data at a resolution of 32 bit, leading to high storage requirements. To avoid excessive storage demands, only use this option if absolutely necessary. Otherwise it is recommended to record only the output channels that were set, for example, in the playback modules (see below).<br />
In the following, some macros are used (see the Macros section above).
<p>
<pre><code>[Debug]
# Record device input and store to hard drive (will record every available input channel)
InputRecordEnabled = false
InputRecordFilePath = $(ProjectName)_in.wav
# Record device output and store to hard drive (will record every available output channel)
OutputRecordEnabled = false
OutputRecordFilePath = $(ProjectName)_out.wav
# Set log level: 0 = quiet; 1 = errors; 2 = warnings (default); 3 = info; 4 = verbose; 5 = trace;
LogLevel = 3
</code></pre>
</p>
<h4>Calibration</h4>
<p>
To properly calibrate a rendering and reproduction system, every component in the chain has to be carefully configured. Since digital signals, stored, for example, in a WAV file or in the buffers of the sound card, are not scaled by physical means, a reference point was defined to enable a proper calibration. In VA, a digital value of 1.0 refers to 1 Pascal at a distance of 1 m by default. For example, a sine wave with a peak value of &radic;2 will yield 94 dB SPL at a distance of 1 m. This reference can also be changed to <b>124 dB</b> if lower amplitudes are necessary (and a sample type conversion from float to integer is performed along the output chain). This makes it necessary to use a powerful amplifier facilitating the reproduction of small sample values. Setting the internal conversion value to 124 dB avoids clipping at high values (but introduces a higher noise floor). To do so, include the following section in the configuration (the clarifying comment can be dropped):
</p>
<p>
<pre><code>[Calibration]
# The amplitude calibration mode either sets the internal conversion from
# sound pressure to an electrical or digital amplitude signal (audio stream)
# to 94dB (default) or to 124dB. The rendering modules will use this calibration
# mode to calculate from physical values to an amplitude that can be forwarded
# to the reproduction modules. If a reproduction module operates in calibrated
# mode, the resulting physical sound pressure at receiver location can be maintained.
DefaultAmplitudeCalibrationMode = 94dB
</code></pre>
</p>
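The 94 dB reference can be verified with a few lines of arithmetic. The following Python snippet is only a sanity check of the numbers above (sine RMS and the 20 &micro;Pa SPL reference), not part of VA:

```python
import math

P_REF = 20e-6  # hearing threshold in Pa, the SPL reference

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(p_rms / P_REF)

# A sine with peak value sqrt(2) has an RMS value of 1 Pa ...
rms = math.sqrt(2.0) / math.sqrt(2.0)
# ... which corresponds to roughly 94 dB SPL at the 1 m reference distance.
print(round(spl_db(rms)))  # -> 94
```

With the alternative calibration mode, the same digital value of 1.0 maps to 124 dB SPL instead, i.e. 30 dB higher.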
<h4>Audio interface configuration</h4>
<p>
The audio interface controls the backend driver and the device. In the current version, the <code>Driver</code> backend key supports <code>ASIO</code> on Windows only, whereas <code>Portaudio</code> is available on all platforms. By default, Portaudio with the default driver is used, which usually produces audible sound without further ado. However, the block sizes are large and the update rates are not sufficient for real-time auralization with motion tracking. Therefore, dedicated hardware and small block sizes should be used; ASIO is recommended on Windows platforms.
</p>
<h5>ASIO example using ASIO4ALL v2</h5>
<p>
<a href="http://www.asio4all.de" target="_blank">ASIO4ALL</a> is a useful and well-implemented intermediate layer for audio I/O making it possible to use ASIO drivers for the internal hardware (and any other audio device available). It must be installed on the PC, first.
<pre><code>[Audio driver]
Driver = ASIO
Samplerate = 44100
Buffersize = AUTO
Device = ASIO4ALL v2
</code></pre>
Although it appears that the buffer size can be defined for ASIO devices, the ASIO backend automatically detects the buffer size configured by the driver when the <code>AUTO</code> value is set (recommended). Set the buffer size in the ASIO driver dialog of your physical device instead. Make sure the sampling rates match.<br />
ASIO requires a device name to be defined by each driver host. Common hardware device names are
</p>
<div class="table-wrapper">
<table class="alt">
<thead>
<tr>
<th width="16%">Manufacturer</th>
<th width="32%">Device</th>
<th>ASIO device name</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>RME</b></td>
<td><i>Hammerfall DSP</i></td>
<td><code>ASIO Hammerfall DSP</code></td>
</tr>
<tr>
<td><b>RME</b></td>
<td><i>Fireface USB</i></td>
<td><code>ASIO Fireface USB</code></td>
</tr>
<tr>
<td><b>RME</b></td>
<td><i>MADIFace USB</i></td>
<td><code>ASIO MADIface USB</code></td>
</tr>
<tr>
<td><b>Focusrite</b></td>
<td><i>2i2, 2i4, ...</i></td>
<td><code>Focusrite USB 2.0 Audio Driver</code> or <code>Focusrite USB ASIO</code></td>
</tr>
<tr>
<td><b>M-Audio</b></td>
<td><i>Fast Track Ultra</i></td>
<td><code>M-Audio Fast Track Ultra ASIO</code></td>
</tr>
<tr>
<td><b>Steinberg</b></td>
<td><i>6UR22 MK2</i></td>
<td><code>Yamaha Steinberg USB ASIO</code></td>
</tr>
<tr>
<td><b>Realtek</b></td>
<td><i>Realtek Audio HD</i></td>
<td><code>Realtek ASIO</code></td>
</tr>
<tr>
<td><b>Zoom</b></td>
<td><i>H6</i></td>
<td><code>ZOOM H and F Series ASIO</code></td>
</tr>
<tr>
<td><b>ASIO4ALL</b></td>
<td><i>any Windows device</i></td>
<td><code>ASIO4ALL v2</code></td>
</tr>
<tr>
<td><b>Reaper (x64)</b></td>
<td><i>any Reaper device</i></td>
<td><code>ReaRoute ASIO (x64)<br /></code></td>
</tr>
</tbody>
<tfoot>
<tr>
<td colspan="3">Table 1: Common ASIO device driver host names</td>
</tr>
</tfoot>
</table>
</div>
<p>
If you do not have any latency requirements, you can also use <code>Portaudio</code> under Windows and other platforms. The specific device names of Portaudio interfaces can be detected, for example, using the VLC player or Audacity. However, the <code>default</code> device is recommended simply because it picks the audio device that is registered as the default device of your system. This is what most people need anyway, and the system tools can be used to change the output device.<br />
If the <code>Buffersize</code> is unknown, the native buffer size of the audio device should be used (which is most likely <code>1024</code> for on-board chips). Otherwise, timing will behave oddly, which has a negative side effect on the rendering.
<pre><code>[Audio driver]
Driver = Portaudio
Samplerate = 44100
Buffersize = 1024
Device = default
</code></pre>
</p>
<h4>Audio hardware configuration</h4>
<p>The <code>Setup</code> section describes the hardware environment in detail. It might seem a bit over the top, but the complex definition of hardware groups with logical and physical layers eases re-use of physical devices for special setups and also allows for multiple assignments - similar to the RME matrix concept of TotalMix, except that volume control and mute toggling can be manipulated in real time using the VA interface instead of the ASIO control panel GUI.<br />
The hardware configuration can be separated into inputs and outputs, which are basically handled in the same manner. More importantly, the setup can be divided into <strong>devices of specialized types</strong> and <strong>groups that combine devices</strong>. Often, this concept is unnecessary and appears cumbersome, but there are situations where this level of complexity is required.<br />
A <strong>device</strong> is a physical emitter (<code>OutputDevice</code>) or transducer (<code>InputDevice</code>) with a fixed number of channels assigned using (arbitrary but unique) channel indices. A broadband loudspeaker with one line input is a typical representative of the single-channel <code>LS</code> type of <code>OutputDevice</code> that has a fixed pose in space. A pair of headphones is assigned the type <code>HP</code> and usually has two channels, but no fixed pose in space.<br />
So far, there is only one input device type, called <code>MIC</code>, which has a single channel.
<br /><br />
Physical devices cannot directly be used for playback in VA. Instead, a reproduction module is connected with one or many <code>Outputs</code> - logical groups of <code>OutputDevices</code>.<br />
Again, for headphones this seems pointless because a headphone device will be represented by a virtual group of only one device. However, for loudspeaker setups this makes sense: for example, a setup of 7 loudspeakers for spatial reproduction may be used by different groups that combine only 5, 4, 3, or 2 of the available loudspeakers to form an output group. In this case, only the loudspeaker identifiers are required; channels and positions are provided by the physical device description. Following this strategy, repositioning loudspeakers and re-assigning channel indices is less error prone because it is organized in one configuration section only.
</p>
<h5>Headphone setup example</h5>
<p>
Let us assume you have a pair of Sennheiser HD 650 headphones at your disposal and you want to use it for binaural rendering and reproduction. This is the most common application of VA and will result in the following configuration:
<pre><code>[Setup]
[OutputDevice:SennheiserHD650]
Type = HP
Description = Sennheiser HD 650 headphone hardware device
Channels = 1,2
[Output:DesktopHP]
Description = Desktop user with headphones
Devices = SennheiserHD650
</code></pre>
If you want to use another output jack for some reason change your channels accordingly, say to <code>3,4</code>.
</p>
<h5>Loudspeaker setup example</h5>
<p>
Let us assume you have a square-shaped loudspeaker setup of Neumann KH 120 at your disposal and want to use it for binaural rendering and reproduction. This is a common application of VA for a dynamic listening experiment in a hearing booth. For this scenario, the configuration file may look like this:
<pre><code>[Setup]
[OutputDevice:NeumannKH120_FL]
Type = LS
Description = Neumann KH 120 in front left corner of square
Channels = 1
[OutputDevice:NeumannKH120_FR]
Type = LS
Description = Neumann KH 120 in front right corner of square
Channels = 2
[OutputDevice:NeumannKH120_RR]
Type = LS
Description = Neumann KH 120 in rear right corner of square
Channels = 3
[OutputDevice:NeumannKH120_RL]
Type = LS
Description = Neumann KH 120 in rear left corner of square
Channels = 4
[Output:HearingBoothLabLS]
Description = Hearing booth laboratory loudspeaker setup
Devices = NeumannKH120_FL, NeumannKH120_FR, NeumannKH120_RR, NeumannKH120_RL
</code></pre>
Note: The order of the devices in the output group is irrelevant for the final result. Each loudspeaker will receive the corresponding signal on the channel assigned to its device.
</p>
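<p>
As a sketch of the multiple-assignment concept described above, the same physical devices can be re-used by a second, smaller output group. The group name below is made up for illustration:
<pre><code>[Output:HearingBoothLabStereoLS]
Description = Front stereo pair re-using the same loudspeaker devices
Devices = NeumannKH120_FL, NeumannKH120_FR
</code></pre>
A reproduction module can then be connected to either group without touching the channel indices, which stay defined once in the device sections.
</p>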
<h5>Microphone setup example</h5>
<p>
The audio input configuration is similar to the output configuration but is not yet fully integrated in VA. If you want to use input channels as signal sources for a virtual sound source, assign the provided unmanaged signals called <code>audioinput1, audioinput2, ...</code>. The number refers to the input channel index (beginning with 1), and you can retrieve the signals using the getters <code>GetSignalSourceInfos</code> or <code>GetSignalSourceIDs</code>.
<pre><code>[Setup]
[InputDevice:NeumannTLM170]
Type = MIC
Description = Neumann TLM 170
Channels = 1
[Input:BodyMic]
Description = Hearing booth talk back microphone
Devices = NeumannTLM170
</code></pre>
</p>
<h4>Homogeneous medium</h4>
<p>
To override default values concerning the homogeneous medium that is provided by VA, include the following section and modify the values to your needs (the default values are shown here).
</p>
<p>
<pre><code>[HomogeneousMedium]
DefaultSoundSpeed = 344.0 # m/s
DefaultStaticPressure = 101125.0 # [Pa]
DefaultTemperature = 20.0 # [Degree centigrade]
DefaultRelativeHumidity = 20.0 # [Percent]
DefaultShiftSpeed = 0.0, 0.0, 0.0 # 3D vector in m/s</code></pre>
<h4 id="configuration_rendering">Rendering module configuration</h4>
<p>
To instantiate a rendering module, a section with a <code>Renderer:</code> prefix has to be included. The statement following <code>:</code> will be the unique identifier of this rendering instance. If you want to change parameters during execution, this identifier is required to address the instance. All renderers share some obligatory definitions, but each renderer's specific parameter set requires a detailed description. For typical renderers, some examples are given below.
</p>
<h5>Required rendering module parameters</h5>
<p>
<pre><code>Class = RENDERING_CLASS
Reproductions = REPRODUCTION_INSTANCE(S)</code></pre>
The rendering class refers to the type of renderer, which can be taken from the tables in the <a href="overview.html#rendering">overview</a> section.<br />
The key <code>Reproductions</code> describes the connections to reproduction modules. At least one reproduction module has to be defined, but the rendering stream can also be connected to multiple reproductions of the same or different types (e.g., talkthrough, equalized headphones, and cross-talk cancellation). The only restriction is that the number of rendering output channels has to match the number of input channels of the reproduction module. This prevents, for example, connecting a two-channel binaural renderer to an Ambisonics reproduction, which requires at least 4 channels.
</p>
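<p>
For instance, connecting one two-channel binaural renderer to two two-channel reproductions at once might look like this (the instance names are hypothetical):
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Reproductions = MyTalkthroughHeadphones, MyEqualizedHeadphones</code></pre>
Both reproduction modules must accept the renderer's two output channels.
</p>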
<h5>Optional rendering module parameters</h5>
<p>
<pre><code>Description = Some informative description of this rendering module instance
Enabled = true
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFileName = renderer_out.wav
RecordOutputBaseFolder = recordings/MyRenderer
</code></pre>
<blockquote>
Note: until version 2018a, the record output file was controlled by a single file path key named <code>RecordOutputFilePath</code>. The separate file name and base folder keys have been introduced in 2018b. Now, the folder and file name can be modified during runtime, see the <a href="#simulation_recording">simulation and recording</a> section.
</blockquote>
Rendering modules can be <i>enabled and disabled</i> to speed up setup changes without copying and pasting larger parts of a configuration section; this is particularly useful because reproduction modules can only be instantiated if the sound card provides enough channels. This makes testing on a desktop PC and switching to a laboratory environment easier.
<br />
For rendering modules, only the <i>output</i> can be observed. A stream detector for the output can be activated that will produce level meter values, for example, for a GUI widget. The output of the active listener can also be recorded and exported as a WAV file. Recording starts on initialization and the file is exported to the hard disk drive after finalization, which implies that all data is kept in RAM in the meantime. If a high channel count is required and/or long recording sessions are planned, it is recommended to route the output through a DAW instead, e.g., with ASIO re-routing software devices like Reaper's ReaRoute ASIO driver. Macros can be used to compose more versatile output file names.
</p>
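<p>
For example, the <code>$(ProjectName)</code> macro, which also appears in the renderer examples on this page, can be embedded in the file name and base folder; further macros may be available depending on your VA version:
<pre><code>RecordOutputEnabled = true
RecordOutputFileName = $(ProjectName)_binaural_out.wav
RecordOutputBaseFolder = recordings/$(ProjectName)</code></pre>
</p>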
<h5>Binaural free field renderer (class <code>BinauralFreeField</code>) example</h5>
<p>
This example with all available key/value configuration pairs is included in the default <code>VACore.ini</code> settings file, which is generated from the repository's <code>VACore.ini.proto</code> (by CMake). It requires a reproduction called <code>MyTalkthroughHeadphones</code>, shown further below.
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Enabled = true
Reproductions = MyTalkthroughHeadphones
HRIRFilterLength = 256
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.1
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceiver = false
MotionModelLogEstimatedOutputReceiver = false
SwitchingAlgorithm = linear
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav</code></pre>
A more detailed explanation of the motion model and further parameters is provided in the <a href="documentation.html">documentation</a>, which specifies how the rendering works.
</p>
<h5>VBAP free field renderer (class <code>VBAPFreeField</code>) example</h5>
<p>
Requires <code>Output</code> (the 3D positions of a loudspeaker setup) to render channel-based audio. Otherwise, it works similarly to the other free field renderers.
<pre><code>[Renderer:MyVBAPFreefield]
Class = VBAPFreeField
Enabled = true
Output = VRLab_Horizontal_LS
Reproductions = MixdownHeadphones</code></pre>
</p>
<h5>Ambisonics free field renderer (class <code>AmbisonicsFreeField</code>) example</h5>
<p>
Similar to the binaural free field renderer, but evaluates receiver directions based on a decomposition into spherical harmonics of a specific order (<code>TruncationOrder</code>). It requires a reproduction called <code>MyAmbisonicsDecoder</code>, which is shown further below.
<pre><code>[Renderer:MyAmbisonicsFreeField]
Class = AmbisonicsFreeField
Enabled = true
Reproductions = MyAmbisonicsDecoder
TruncationOrder = 3
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.1
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceiver = false
MotionModelLogEstimatedOutputReceiver = false
SwitchingAlgorithm = linear
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav</code></pre>
</p>
<h5>Ambient mixing renderer (class <code>AmbientMixer</code>) example</h5>
<p>
The ambient mixer takes the value of the key <code>OutputGroup</code> and sets the playback channel count accordingly, since subsequent reproduction modules require matching channel numbers. Apart from that, an arbitrary number of reproduction modules can be specified, as shown in the following example.
<pre><code>[Renderer:MyAmbientMixer]
Class = AmbientMixer
Description = Low-cost renderer to make sound audible without spatializations
Enabled = true
OutputGroup = MyDesktopHP
Reproductions = MyDesktopHP, MySubwooferArray</code></pre>
</p>
<h5>Binaural artificial room acoustics renderer (class <code>BinauralArtificialReverb</code>) example</h5>
<p>
Values are specified in SI units (e.g., seconds, meters, watts) and angles in degrees. The reverberation time may exceed the reverberation filter length (divided by the sampling rate), resulting in a cropped impulse response. This renderer requires and uses the sound receiver's HRIR data set for spatialization, and applies a sound power correction to match the direct sound energy if used together with the binaural free field renderer.
<pre><code>[Renderer:MyBinauralArtificialRoom]
Class = BinauralArtificialReverb
Description = Low-cost per receiver artificial reverberation effect
Enabled = true
Reproductions = MyTalkthroughHeadphones
ReverberationTime = 0.71
RoomVolume = 200
RoomSurfaceArea = 88
MaxReverbFilterLengthSamples = 88200
PositionThreshold = 1.0
AngleThresholdDegree = 30
SoundPowerCorrectionFactor = 0.05
TimeSlotResolution = 0.005
MaxReflectionDensity = 12000.0
ScatteringCoefficient = 0.1</code></pre>
</p>
<h5>Binaural room acoustics renderer (class <code>BinauralRoomAcoustics</code>) example</h5>
<p>
Requires the Room Acoustics for Virtual ENvironments (RAVEN) software module (see <a href="research.html">Research section</a>) or another room acoustics simulation backend. Note that the reverberation time may exceed the reverberation filter length (divided by the sampling rate), with the consequence that the generated impulse response will be cropped. This renderer requires and uses the specified sound receiver HRIR data set for spatialization, and applies a sound power correction to match the direct sound energy if combined with the binaural free field renderer.
<pre><code>[Renderer:MyBinauralRoomAcoustics]
Class = BinauralRoomAcoustics
Enabled = true
Description = Renderer with room acoustics simulation backend (RAVEN) for a source-receiver-pair with geometry-aware propagation
Reproductions = MyTalkthroughHeadphones