<!DOCTYPE HTML>
<!--
	Landed by HTML5 UP
	html5up.net | @ajlkn
	Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html>
	<head>
		<title>Virtual Acoustics</title>
		<meta charset="utf-8" />
		<meta name="viewport" content="width=device-width, initial-scale=1" />
		<!--[if lte IE 8]><script src="assets/js/ie/html5shiv.js"></script><![endif]-->
		<link rel="stylesheet" href="assets/css/main.css" />
		<!--[if lte IE 9]><link rel="stylesheet" href="assets/css/ie9.css" /><![endif]-->
		<!--[if lte IE 8]><link rel="stylesheet" href="assets/css/ie8.css" /><![endif]-->
	</head>
	<body>
		<div id="page-wrapper">

			<!-- Header -->
				<header id="header">
					<h1 id="logo"><a href="index.html">Start</a></h1>
					<nav id="nav">
						<ul>
							<li>
								<a href="#">Quick access</a>
								<ul>
									<li><a href="overview.html">Overview</a></li>
									<li><a href="download.html">Download</a></li>
									<li><a href="documentation.html">Documentation</a></li>
									<li>
										<a href="start.html">Getting started</a>
										<ul>
											<li><a href="start.html#configuration">Configuration</a></li>
											<li><a href="start.html#control">Control</a></li>
											<li><a href="start.html#scene_handling">Scene handling</a></li>
											<li><a href="start.html#rendering">Audio rendering</a></li>
											<li><a href="start.html#reproduction">Audio reproduction</a></li>
											<li><a href="start.html#tracking">Tracking</a></li>
											<li><a href="start.html#simulation_recording">Simulation and recording</a></li>
											<li><a href="start.html#examples">Examples</a></li>
										</ul>
									</li>
									<li>
										<a href="help.html">Get help</a>
										<ul>
											<li><a href="help.html#faq">FAQ</a></li>
											<li><a href="help.html#issue_tracker">Issue tracker</a></li>
											<li><a href="help.html#community">Community</a></li>
											<li><a href="help.html#nosupport">No support</a></li>
										</ul>
									</li>
									<li>
										<a href="developers.html">Developers</a>
										<ul>
											<li><a href="developers.html#api">C++ API</a></li>
											<li><a href="developers.html#dependencies">Dependencies</a></li>
											<li><a href="developers.html#configuration">Configuration</a></li>
											<li><a href="developers.html#build_guide">Build guide</a></li>
											<li><a href="developers.html#repositories">Repositories</a></li>
										</ul>
									</li>
									<li>
										<a href="research.html">Research</a>
										<ul>
											<li><a href="research.html#system">System papers</a></li>
											<li><a href="research.html#technology">Technology papers</a></li>
											<li><a href="research.html#applied">Applied papers</a></li>
										</ul>
									</li>
								</ul>
							</li>
							<li><a href="legal.html">Legal notice</a></li>
							<!--<li><a href="#" class="button special">Sign Up</a></li>-->
						</ul>
					</nav>
				</header>


			<!-- Main -->
				<div id="main" class="wrapper style1">
					<div class="container">
						<header class="major">
							<h2>Getting started</h2>
							<p>Auralization with Virtual Acoustics <br />
							<span style="font-size: 0.6em">This content is available under <a href="http://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></span>
							</p>
						</header>
								<!-- Content -->
									<section id="preface">
										<h3>Preface</h3>
										<p>Virtual Acoustics is a powerful tool for the auralization of virtual acoustic scenes and their reproduction. Getting started with VA involves three important steps:</p>
										<ul>
											<li><strong>Configuring the application</strong></li>
											<li><strong>Controlling the core</strong></li>
											<li><strong>Setting up a scene</strong></li>
										</ul>
										<p>The overall design goal was to keep things as simple as possible. However, certain aspects cannot be simplified further because they are complex by nature. VA addresses professionals and is mainly used by scientists; important features are never traded for convenience if the system's integrity is at stake. Hence, getting the most out of VA requires a profound understanding of the technologies involved. VA is designed to offer the highest flexibility, which comes at the price of a demanding configuration. At the beginning, configuring VA is not trivial, especially if a loudspeaker-based audio reproduction is to be used. <br /><br />
										VA users can typically be divided into two groups:
										<ul>
											<li><strong>those who seek quick experiments with spatial audio and are happy with conventional playback over headphones</strong></li>
											<li><strong>those who want to employ VA for a sophisticated loudspeaker setup for (multi-modal) listening experiments and Virtual Reality applications</strong></li>
										</ul>
										For the first group of users, some simple setups will already suffice for most of what you aspire to do. Such setups include, for example, a configuration for binaural audio rendering over a non-equalized off-the-shelf pair of headphones. Another configuration example features a self-crafted interactive rendering application that exchanges pre-recorded or simulated FIR filters using Matlab or Python scripts for different purposes, such as room acoustic simulations, building acoustics, or A/B live switching tests to assess the influence of equalization. The configuration effort is minimal and works out of the box if you use the Redstart applications or start a VA command line server with the corresponding core configuration file. If you consider yourself part of this group of users, skip the configuration part and <a href="#examples">have a look at the examples</a>. Thereafter, read the <a href="#control">control section</a> and the <a href="#scene_handling">scene handling section</a>. Additional examples are provided by the <a href="http://www.ita-toolbox.org/">ITA Toolbox</a> (see folder &lt;...&gt;\applications\VirtualAcoustics\VA).<br />
										<br />
										If you are willing to dive deeper into the VA framework, you are probably interested in how to adapt the software package to your purposes. The following sections describe how to set up VA for your goal from the very beginning.
										</p>
									</section>
									
									<hr />
									
									<section id="configuration">
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
										<!--<a href="#" class="image fit"><img src="images/pic05.jpg" alt="" /></a>-->
										<h3>Virtual Acoustics configuration</h3>
									<p>VA can be configured using a section-based key-value parameter collection that is passed to the core instance during initialization. This is usually done by providing the path to a text-based INI file, referred to here as <code>VACore.ini</code>, although it can have an arbitrary name. If you use the <code>VAServer</code> application, you will work with this file only. If you only use the <code>Redstart</code> GUI application, you will probably never use it directly; however, the INI file can be exported from a Redstart session in case you need it.
										</p>
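										<p>To give a first impression of the file layout, the following sketch combines a few of the sections explained below into one minimal <code>VACore.ini</code>. All values and the <code>MyRenderer</code> and <code>MyReproduction</code> identifiers are placeholders, not recommended defaults:</p>
										<p>
<pre><code>[Paths]
data = data

[Macros]
ProjectName = MyVirtualAcousticsProject

[Audio driver]
Driver = Portaudio
Device = default

# Placeholder renderer instance; see the rendering module configuration below.
# The reproduction instance named here must be configured as well.
[Renderer:MyRenderer]
Class = BinauralFreeField
Reproductions = MyReproduction
</code></pre>
										</p>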
										<h4>Basic configuration</h4>
										<h5>Paths</h5>
										<p>The <code>Paths</code> section allows for adding search paths to the core. If resources such as head-related transfer functions (HRTFs), geometry files, or audio files are required, these search paths are used to locate the requested files. Relative paths are resolved against the folder from which the VA server application is started. When using the provided batch start scripts on Windows, it is recommended to add the <code>data</code> and <code>conf</code> folders.</p>
										<p>
<pre><code>[Paths]

data = data
conf = conf

my_data = C:/Users/Me/Documents/AuralizationData
my_other_data = /home/me/auralization/input

</code></pre>
										</p>
										
										<h5>Files</h5>
										<p>In the <code>Files</code> section, you can name files that will be included as further configuration files. This is helpful when certain configuration sections are to be <i>outsourced</i> so they can be reused efficiently. Outsourcing is especially convenient when switching between static sections like hardware descriptions for laboratories or setups, but can also be used for rendering and reproduction modules (see below). Avoid copying larger configuration sections that are re-used frequently; use different configuration files instead.</p>
										<p>
<pre><code>[Files]

old_lab = VASetup.OldLab.Loudspeakers.ini
#new_lab = VASetup.NewLab.Loudspeakers.ini

</code></pre>
										</p>
										
										<h5>Macros</h5>
										<p>The <code>Macros</code> section helps to keep scripts tidy. Use macros whenever it is not explicitly required to use a specific input file. For example, if any HRTF can be used for a receiver in the virtual scene, the <code>DefaultHRIR</code> macro will point to the default HRTF data set, or head-related impulse response (HRIR) in the time domain. Any defined macro will be replaced by its value by the core.<br />
										Usage: <code>"$(MyMacroName)/file.abc"</code> becomes <code>"MyValue/file.abc"</code><br />
										Macros are substituted in forward order by key name (use with care) and otherwise stay untouched: with <code>A = B</code> and <code>C = $(A)</code>, <code>$(C)</code> resolves to <code>B</code>.<br />
										The example macros provided below are a good-practice set that should be present in a configuration file in order to keep the example scripts valid.<br />
										Macros are also very helpful if certain exported file prefixes are desired, e.g., to obtain better-structured file names for input and output recordings.</p>
										<p>
<pre><code>[Macros]

DefaultHRIR = HRIR/ITA-Kunstkopf_HRIR_AP11_Pressure_Equalized_3x3_256.v17.ir.daff
HumanDir = Directivity/Singer.v17.ms.daff
Trumpet = Directivity/Trumpet1.v17.ms.daff

# Define some other macros (examples)
ProjectName = MyVirtualAcousticsProject

</code></pre>
										</p>
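										<p>Because macros are substituted in forward order by key name, a macro may itself be built from previously defined macros. The following sketch uses a hypothetical key <code>RecordingPrefix</code> (not a key evaluated by VA itself) to illustrate this:</p>
										<p>
<pre><code>[Macros]
ProjectName = MyVirtualAcousticsProject

# "P" precedes "R" in key name order, so ProjectName is already resolved here:
# RecordingPrefix becomes "MyVirtualAcousticsProject_session1"
RecordingPrefix = $(ProjectName)_session1
</code></pre>
										</p>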
										
										
										<h5>Debug</h5>
										<p>The <code>Debug</code> section configures the initial behavior of the core, such as the log level and input/output recording. If input or output recording is enabled, all channels of your physical or abstract device will be recorded. For devices with many digital inputs and outputs, the channel count may reach 256 channels, the maximum defined by the WAV format. Additionally, the data is stored as PCM data at a resolution of 32 bit, leading to high storage requirements. Therefore, only use this option if absolutely necessary; otherwise it is recommended to record only the output channels that were set, for example, in the playback modules (see below).<br />
										In the following, some macros are used (see the Macros section above).</p>
										<p>
<pre><code>[Debug]

# Record device input and store to hard drive (will record every available input channel)
InputRecordEnabled = false
InputRecordFilePath = $(ProjectName)_in.wav

# Record device output and store to hard drive (will record every available output channel)
OutputRecordEnabled = false
OutputRecordFilePath = $(ProjectName)_out.wav

# Set log level: 0 = quiet; 1 = errors; 2 = warnings (default); 3 = info; 4 = verbose; 5 = trace;
LogLevel = 3

</code></pre>
										</p>



<h4>Calibration</h4>
<p>
To properly calibrate a rendering and reproduction system, every component in the chain has to be carefully configured. Since digital signals stored, for example, in a WAV file or in the buffers of the sound card are not scaled by physical means, a reference point enabling a proper calibration was set. In VA, a digital value of 1.0 refers to 1 Pascal at a distance of 1 m by default; for example, a sine wave with a peak value of &radic;2 will result in 94 dB SPL at a distance of 1 m. This reference can also be changed to <b>124 dB</b> if lower amplitudes are necessary (and a sample type conversion from float to integer is performed along the output chain). This makes it necessary to use a powerful amplifier facilitating the reproduction of small sample values. Setting the internal conversion value to 124 dB avoids clipping at high values (but introduces a higher noise floor). To do so, include the following section into the configuration (the clarification comment can be dropped):
</p>

<p>
<pre><code>[Calibration]

# The amplitude calibration mode either sets the internal conversion from
# sound pressure to an electrical or digital amplitude signal (audio stream)
# to 94dB (default) or to 124dB. The rendering modules will use this calibration
# mode to calculate from physical values to an amplitude that can be forwarded
# to the reproduction modules. If a reproduction module operates in calibrated
# mode, the resulting physical sound pressure at receiver location can be maintained.

DefaultAmplitudeCalibrationMode = 94dB

</code></pre>
</p>
										<h4>Audio interface configuration</h4>
										<p>
										The audio interface controls the backend driver and the device. In the current version, the <code>Driver</code> backend key supports <code>ASIO</code> on Windows only, whereas <code>Portaudio</code> is available on all platforms. By default, Portaudio with the default driver is used, which usually produces audible sound without further ado. However, the block sizes are large and the update rates are not sufficient for real-time auralization using motion tracking. Therefore, dedicated hardware and small block sizes should be used; ASIO is recommended on Windows platforms.
										</p>
										
										<h5>ASIO example using ASIO4ALL v2</h5>
										<p>
										<a href="http://www.asio4all.de" target="_blank">ASIO4ALL</a> is a useful and well-implemented intermediate layer for audio I/O that makes it possible to use ASIO drivers for internal hardware (and any other available audio device). It must be installed on the PC first.
<pre><code>[Audio driver]

Driver = ASIO
Samplerate = 44100
Buffersize = AUTO
Device = ASIO4ALL v2
</code></pre>
Although it appears that the buffer size can be defined for ASIO devices, the ASIO backend automatically detects the buffer size configured by the driver when the <code>AUTO</code> value is set (recommended). Set the buffer size in the ASIO driver dialog of your physical device instead, and make sure that the sampling rates match.<br />
ASIO requires a device name that is defined by the respective driver host. Further common hardware device names are listed in Table 1.
</p>

								<div class="table-wrapper">
									<table class="alt">
										<thead>
											<tr>
												<th width="16%">Manufacturer</th>
												<th width="32%">Device</th>
												<th>ASIO device name</th>
											</tr>
										</thead>
										<tbody>
											<tr>
												<td><b>RME</b></td>
												<td><i>Hammerfall DSP</i></td>
												<td><code>ASIO Hammerfall DSP</code></td>
											</tr>
											<tr>
												<td><b>RME</b></td>
												<td><i>Fireface USB</i></td>
												<td><code>ASIO Fireface USB</code></td>
											</tr>
											<tr>
												<td><b>RME</b></td>
												<td><i>MADIFace USB</i></td>
												<td><code>ASIO MADIface USB</code></td>
											</tr>
											<tr>
												<td><b>Focusrite</b></td>
												<td><i>2i2, 2i4, ...</i></td>
												<td><code>Focusrite USB 2.0 Audio Driver</code> or <code>Focusrite USB ASIO</code></td>
											</tr>
											<tr>
												<td><b>M-Audio</b></td>
												<td><i>Fast Track Ultra</i></td>
												<td><code>M-Audio Fast Track Ultra ASIO</code></td>
											</tr>
											<tr>
												<td><b>Steinberg</b></td>
												<td><i>UR22 MK2</i></td>
												<td><code>Yamaha Steinberg USB ASIO</code></td>
											</tr>
											<tr>
												<td><b>Realtek</b></td>
												<td><i>Realtek Audio HD</i></td>
												<td><code>Realtek ASIO</code></td>
											</tr>
											<tr>
												<td><b>Zoom</b></td>
												<td><i>H6</i></td>
												<td><code>ZOOM H and F Series ASIO</code></td>
											</tr>
											<tr>
												<td><b>ASIO4ALL</b></td>
												<td><i>any Windows device</i></td>
												<td><code>ASIO4ALL v2</code></td>
											</tr>
											<tr>
												<td><b>Reaper (x64)</b></td>
												<td><i>any Reaper device</i></td>
												<td><code>ReaRoute ASIO (x64)</code></td>
											</tr>
											
										</tbody>
										<tfoot>
											<tr>
												<td colspan="3">Table 1: Common ASIO device driver host names</td>
											</tr>
										</tfoot>
									</table>
								</div>
<p>
If you do not have any latency requirements, you can also use <code>Portaudio</code> on Windows and other platforms. The specific device names of Portaudio interfaces can be detected, for example, with the VLC player or with Audacity. However, the <code>default</code> device is recommended, simply because it picks the audio device that is registered as the default device of your system. This is what most people need anyway, and the system tools can be used to change the output device.<br />
If the <code>Buffersize</code> is unknown, at least the native buffer size of the audio device should be used (which is most likely <code>1024</code> for on-board chips). Otherwise, timing will behave oddly, which has a negative side effect on the rendering.
<pre><code>[Audio driver]

Driver = Portaudio
Samplerate = 44100
Buffersize = 1024
Device = default
</code></pre>
</p>

										
										<h4>Audio hardware configuration</h4>
										<p>The <code>Setup</code> section describes the hardware environment in detail. It might seem a bit over the top, but the complex definition of hardware groups with logical and physical layers eases the reuse of physical devices for special setups and also allows multiple assignments - similar to the RME matrix concept of TotalMix, except that volume control and mute toggling can be manipulated in real time using the VA interface instead of the ASIO control panel GUI.<br />
										The hardware configuration can be separated into inputs and outputs, but both are handled in basically the same manner. More importantly, the setup can be divided into <strong>devices of specialized types</strong> and <strong>groups that combine devices</strong>. Often, this concept is unnecessary and appears cumbersome, but there are situations where this level of complexity is required.<br />
										A <strong>device</strong> is a physical emitter (<code>OutputDevice</code>) or transducer (<code>InputDevice</code>) with a fixed number of channels assigned using (arbitrary but unique) channel indices. A broadband loudspeaker with one line input is a typical representative of the single-channel <code>LS</code> type <code>OutputDevice</code> that has a fixed pose in space. A pair of headphones is assigned the type <code>HP</code> and usually has two channels, but no fixed pose in space.<br />
										So far, there is only an input device type called <code>MIC</code> that has a single channel.
										<br /><br />
										Physical devices cannot be used directly for playback in VA. Instead, a reproduction module is connected with one or many <code>Outputs</code>, i.e., logical groups of <code>OutputDevices</code>.<br />
										Again, for headphones this seems superfluous because a headphone device will be represented by a virtual group of only one device. For loudspeaker setups, however, this makes sense: for example, a setup of 7 loudspeakers for spatial reproduction may be used by different groups which combine only 5, 4, 3, or 2 of the available loudspeakers to form an output group. In this case, only the loudspeaker identifiers are required; channels and positions are made available by the physical device description. Following this strategy, repositioning of loudspeakers and re-assignment of channel indices is less error prone because everything is organized in a single configuration section.
										</p>
										
										<h5>Headphone setup example</h5>
										<p>
										Let us assume you have a pair of Sennheiser HD 650 headphones at your disposal and want to use them for binaural rendering and reproduction. This is the most common application of VA and results in the following configuration:
<pre><code>[Setup]

[OutputDevice:SennheiserHD650]
Type = HP
Description = Sennheiser HD 650 headphone hardware device
Channels = 1,2

[Output:DesktopHP]
Description = Desktop user with headphones
Devices = SennheiserHD650

</code></pre>
If you want to use another output jack for some reason, change the channels accordingly, say, to <code>3,4</code>.

										</p>
										<h5>Loudspeaker setup example</h5>
										<p>
										Let us assume you have a square-shaped loudspeaker setup of Neumann KH 120 monitors at your disposal and want to use it for binaural rendering and reproduction. This is a common application of VA for a dynamic listening experiment in a hearing booth. For this scenario, the configuration file may look like this:
<pre><code>[Setup]

[OutputDevice:NeumannKH120_FL]
Type = LS
Description = Neumann KH 120 in front left corner of square
Channels = 1

[OutputDevice:NeumannKH120_FR]
Type = LS
Description = Neumann KH 120 in front right corner of square
Channels = 2

[OutputDevice:NeumannKH120_RR]
Type = LS
Description = Neumann KH 120 in rear right corner of square
Channels = 3

[OutputDevice:NeumannKH120_RL]
Type = LS
Description = Neumann KH 120 in rear left corner of square
Channels = 4

[Output:HearingBoothLabLS]
Description = Hearing booth laboratory loudspeaker setup
Devices = NeumannKH120_FL, NeumannKH120_FR, NeumannKH120_RR, NeumannKH120_RL

</code></pre>
Note: The order of the devices in the output group is irrelevant for the final result, as each loudspeaker receives the corresponding signal on the channel of its device.

										</p>
										
										
										<h5>Microphone setup example</h5>
										<p>
										The audio input configuration is similar to the output configuration but is not yet fully included in VA. If you want to use input channels as signal sources for a virtual sound source, assign the provided unmanaged signals called <code>audioinput1, audioinput2, ...</code>. The number refers to the input channel index starting at 1, and the available signals can be queried using the getters <code>GetSignalSourceInfos</code> or <code>GetSignalSourceIDs</code>.
<pre><code>[Setup]

[InputDevice:NeumannTLM170]
Type = MIC
Description = Neumann TLM 170
Channels = 1

[Input:BodyMic]
Description = Hearing booth talk back microphone
Devices = NeumannTLM170

</code></pre>

										</p>
										
<h4>Homogeneous medium</h4>
<p>
To override default values concerning the homogeneous medium that is provided by VA, include the following section and modify the values to your needs (the default values are shown here).
</p>
<p>									
<pre><code>[HomogeneousMedium]

DefaultSoundSpeed = 344.0 # m/s
DefaultStaticPressure = 101125.0 # [Pa]
DefaultTemperature = 20.0 # [Degree centigrade]
DefaultRelativeHumidity = 20.0 # [Percent]
DefaultShiftSpeed = 0.0, 0.0, 0.0 # 3D vector in m/s</code></pre>
<h4 id="configuration_rendering">Rendering module configuration</h4>
<p>
To instantiate a rendering module, a section whose name carries the <code>Renderer:</code> prefix has to be included. The part following the <code>:</code> will be the unique identifier of this rendering instance. If you want to change parameters during execution, this identifier is required to address the instance. All renderers share some obligatory definitions, but each class has a specific parameter set that requires a detailed description. For typical renderers, some examples are given below.
</p>
<h5>Required rendering module parameters</h5>
<p>
<pre><code>Class = RENDERING_CLASS
Reproductions = REPRODUCTION_INSTANCE(S)</code></pre>
The rendering class refers to the type of renderer, which can be taken from the tables in the <a href="overview.html#rendering">overview</a> section.<br />
The <code>Reproductions</code> key describes the connections to reproduction modules. At least one reproduction module has to be defined, but the rendering stream can also be connected to multiple reproductions of the same or a different type (e.g., talkthrough, equalized headphones, and crosstalk cancellation). The only restriction is that the rendering output channel number has to match the input channel number of the reproduction module. This prevents connecting a two-channel binaural renderer to, for example, an Ambisonics reproduction, which would require at least 4 channels.
										</p>
										<h5>Optional rendering module parameters</h5>
										<p>
<pre><code>Description = Some informative description of this rendering module instance
Enabled = true
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav
</code></pre>
Rendering modules can be <i>enabled and disabled</i> to speed up setup changes without copying &amp; pasting larger parts of a configuration section, as especially reproduction modules can only be instantiated if the sound card provides enough channels. This makes testing on a desktop PC and switching to a laboratory environment easier.
<br />
For rendering modules, only the <i>output</i> can be observed. A stream detector for the output can be activated to produce level meter values, for example, for a GUI widget. The output for the active listener can also be recorded and exported as a WAV file. Recording starts at initialization, and the file is written to the hard disk drive after finalization, which implies that all data is kept in RAM in the meantime. If a high channel count is required and/or long recording sessions are planned, it is recommended to route the output through a DAW instead, e.g., using ASIO re-routing software devices such as Reaper's ReaRoute ASIO driver. Macros are allowed to compose a more versatile output file name.
										</p>

										
										<h5>Binaural free field renderer (class <code>BinauralFreeField</code>) example</h5>
										
										<p>
This example with all available key/value configuration pairs is included in the default <code>VACore.ini</code> settings, which is generated from the repository's <code>VACore.ini.proto</code> by CMake. It requires a reproduction called <code>MyTalkthroughHeadphones</code>, shown further below.
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Enabled = true
Reproductions = MyTalkthroughHeadphones
HRIRFilterLength = 256
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.1
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceiver = false
MotionModelLogEstimatedOutputReceiver = false
SwitchingAlgorithm = linear
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav</code></pre>
A more detailed explanation of the motion model and its parameters is provided in the <a href="documentation.html">documentation</a>, which specifies how the rendering works.
										</p>
										
										<h5>VBAP free field renderer (class <code>VBAPFreeField</code>) example</h5>
										
										<p>
Requires <code>Output</code> (3D positions of a loudspeaker setup) to render channel-based audio. Otherwise, it works similarly to the other free field renderers.
<pre><code>[Renderer:MyVBAPFreefield]
Class = VBAPFreeField
Enabled = true
Output = VRLab_Horizontal_LS
Reproductions = MixdownHeadphones</code></pre>
										</p>
										
										<h5>Ambisonics free field renderer (class <code>AmbisonicsFreeField</code>) example</h5>
										
										<p>
Similar to the binaural free field renderer, but evaluates receiver directions based on a decomposition into spherical harmonics of a specific order (<code>TruncationOrder</code>). It requires a reproduction called <code>MyAmbisonicsDecoder</code>, which is shown further below.
<pre><code>[Renderer:MyAmbisonicsFreeField]
Class = AmbisonicsFreeField
Enabled = true
Reproductions = MyAmbisonicsDecoder
TruncationOrder = 3
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.1
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceiver = false
MotionModelLogEstimatedOutputReceiver = false
SwitchingAlgorithm = linear
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav</code></pre>

										</p>
										
										<h5>Ambient mixing renderer (class <code>AmbientMixer</code>) example</h5>
										<p>
The ambient mixer takes the value of the key <code>OutputGroup</code> and sets the channel count for playback accordingly, since subsequent reproduction modules require matching channel counts. However, an arbitrary number of reproduction modules can be specified, as shown in the following example.
<pre><code>[Renderer:MyAmbientMixer]
Class = AmbientMixer
Description = Low-cost renderer to make sound audible without spatializations
Enabled = true
OutputGroup = MyDesktopHP
Reproductions = MyDesktopHP, MySubwooferArray</code></pre>
										</p>
										
										<h5>Binaural artificial room acoustics renderer (class <code>BinauralArtificialReverb</code>) example</h5>
										<p>
Values are specified in SI units (e.g., seconds, meters, watts) and angles in degrees. The reverberation time may exceed the reverberation filter length (divided by the sampling rate), resulting in a cropped impulse response. This renderer requires and uses the sound receiver's HRIR for spatialization, and it applies a sound power correction to match the direct sound energy if used together with the binaural free field renderer.
<pre><code>[Renderer:MyBinauralArtificialRoom]
Class = BinauralArtificialReverb
Description = Low-cost per receiver artificial reverberation effect
Enabled = true
Reproductions = MyTalkthroughHeadphones
ReverberationTime = 0.71
RoomVolume = 200
RoomSurfaceArea = 88
MaxReverbFilterLengthSamples = 88200
PositionThreshold = 1.0
AngleThresholdDegree = 30
SoundPowerCorrectionFactor = 0.05
TimeSlotResolution = 0.005
MaxReflectionDensity = 12000.0
ScatteringCoefficient = 0.1</code></pre>
										</p>
										
										<h5>Binaural room acoustics renderer (class <code>BinauralRoomAcoustics</code>) example</h5>
										<p>
Requires the Room Acoustics for Virtual ENvironments (RAVEN) software module (see <a href="research.html">Research section</a>) or another room acoustics simulation backend. Note that the reverberation time may exceed the reverberation filter length (divided by the sampling rate), with the consequence that the generated impulse response will be cropped. This renderer requires and uses the specified sound receiver HRIR data set for spatialization and applies a sound power correction to match the direct sound energy if combined with the binaural free field renderer.
<pre><code>[Renderer:MyBinauralRoomAcoustics]
Class = BinauralRoomAcoustics
Enabled = true
Description = Renderer with room acoustics simulation backend (RAVEN) for a source-receiver-pair with geometry-aware propagation
Reproductions = MyTalkthroughHeadphones
# Setup options: Local, Remote, Hybrid
Setup = Local
ServerIP = PC-SEACEN
HybridLocalTasks = DS
HybridRemoteTasks = ER_IS, DD_RT
RavenDataBasePath = $(raven_data)
# Task processing (Timeout = with desired update rate, for resource efficient processing; EventSync = process on request (for sporadic updates); Continuous = update as often as possible, for standalone server)
TaskProcessing = Timeout
# Desired update rates in Hz, may lead to resource issues
UpdateRateDS = 12.0
UpdateRateER = 6.0
UpdateRateDD = 1.0
MaxReverbFilterLengthSamples = 88200
DirectSoundPowerCorrectionFactor = 0.3</code></pre>
										</p>
										
										
										<h5>Prototype free field renderer (class <code>PrototypeFreeField</code>) example</h5>
										
										<p>
Similar to the binaural free field renderer, but capable of handling multi-channel receiver directivities. This renderer can, for example, be used to record the output of microphone array simulations.
<pre><code>[Renderer:MyPrototypeFreeField]
Class = PrototypeFreeField
Enabled = true
Reproductions = MyTalkthroughHeadphones
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.2
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceivers = false
MotionModelLogEstimatedOutputReceivers = false
SwitchingAlgorithm = linear</code></pre>
										</p>
										
										<h5>Prototype generic path renderer (class <code>PrototypeGenericPath</code>) example</h5>
										
<p>Channel count and filter length can be specified arbitrarily but are limited by the available computational power. Filtering is done individually for each source-receiver pair.
<pre><code>[Renderer:MyPrototypeGenericPath]
Class = PrototypeGenericPath
Enabled = true
Reproductions = MyTalkthroughHeadphones
NumChannels = 2
IRFilterLengthSamples = 88200
IRFilterDelaySamples = 0
OutputMonitoring = true</code></pre>
										</p>
										
										<h5>Binaural air traffic noise renderer (class <code>BinauralAirTrafficNoise</code>) example</h5>
										
<p>Filtering is done individually for each source-receiver pair. The filters involved in the simulation of the propagation paths can also be exchanged by the user for prototyping (this requires modifying the corresponding simulation flags in the configuration file).
<pre><code>[Renderer:MyAirTrafficNoiseRenderer]
Class = BinauralAirTrafficNoise
Enabled = true
Reproductions = MyTalkthroughHeadphones
MotionModelNumHistoryKeys = 1000
MotionModelWindowSize = 2
MotionModelWindowDelay = 1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceivers = false
MotionModelLogEstimatedOutputReceivers = false
GroundPlanePosition = 0.0
PropagationDelayExternalSimulation = false
GroundReflectionExternalSimulation = false
DirectivityExternalSimulation = false
AirAbsorptionExternalSimulation = false
SpreadingLossExternalSimulation = false
TemporalVariationsExternalSimulation = false
SwitchingAlgorithm = cubicspline</code></pre>
										</p>
										
										<h5>Dummy renderer (class <code>PrototypeDummy</code>) example</h5>
										
										<p>Useful for a quick configuration of your own prototype renderer.
<pre><code>[Renderer:MyDummyRenderer]
Class = PrototypeDummy
Description = Dummy renderer for testing, benchmarking and building upon
Enabled = true
OutputGroup = MyDesktopHP
Reproductions = MyTalkthroughHeadphones</code></pre>
										</p>
								

				
										<h5>Other rendering module examples</h5>
Dipl.-Ing. Jonas Stienen's avatar
WIP    
Dipl.-Ing. Jonas Stienen committed
627
										<p>
Every specific rendering module has its own specific set of parameters. A discussion of every functional detail is beyond the scope of this introduction. As all configurations are parsed in the constructor of the respective module, their functionality can sometimes only be fully understood by investigating the source code. To facilitate this, the Redstart GUI application includes dialogs to create and interact with those renderers, additionally offering information when hovering over the GUI elements.
										</p>
										
										
<h4 id="configuration_reproduction">Reproduction module configuration</h4>
<p>
To instantiate a reproduction module, a section with a <code>Reproduction:</code> prefix has to be included. The statement following the colon becomes the unique identifier of this reproduction instance. If you want to change parameters during execution, this identifier is required to address the instance. All reproduction modules share a set of obligatory definitions, but each class has its own specific parameter set that requires a detailed description. For typical reproduction modules, some examples are given below.
</p>
										
										<h5>Required reproduction module parameters</h5>
										<p>
<pre><code>Class = REPRODUCTION_CLASS
Outputs = OUTPUT_GROUP(S)</code></pre>
The reproduction class refers to the type of reproduction as provided in the <a href="overview.html#reproduction">overview</a> section.<br />
The parameter <code>Outputs</code> describes the connections to logical output groups that forward audio based on the configured channels. At least one output group has to be defined, but the reproduction stream can also be connected to multiple outputs of the same or different types (e.g., different pairs of headphones). The only restriction is that the reproduction channel number has to match the channel count of the output group(s).

										</p>
										<h5>Optional reproduction module parameters</h5>
										<p>
<pre><code>Description = Some informative description of this reproduction module instance
Enabled = true
InputDetectorEnabled = false
RecordInputEnabled = false
RecordInputFilePath = MyReproInput_filename_may_including_$(ProjectName)_macro.wav
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyReproOutput_filename_may_including_$(ProjectName)_macro.wav
</code></pre>
Reproduction modules can be <i>enabled and disabled</i> to speed up setup changes without copying &amp; pasting larger parts of a configuration section, as especially output groups can only be instantiated if the sound card provides enough channels. This makes testing on a desktop PC and switching to a laboratory environment easier.
<br />
For reproduction modules, both the <i>input and output</i> can be observed. Stream detectors on input and output can be activated to produce level meter values, for example, for a GUI widget. In contrast to a rendering output, the input of a reproduction module may comprise several superposed rendering streams, for example, for direct sound and reverberant sound. The output of a reproduction can also be recorded and exported as a WAV file. Recording starts at initialization, and the file is written to the hard disk drive after finalization, which implies that all data is kept in RAM in the meantime. If a high channel count is required and/or long recording sessions are planned, it is recommended to route the output through a DAW instead, e.g., using ASIO re-routing software devices such as Reaper's ReaRoute ASIO driver. Macros are useful to compose a more versatile output file name.
										</p>
										
										
										<h5>Talkthrough reproduction (class <code>Talkthrough</code>) example</h5>										
<p>The following example with all available key/value configuration pairs is taken from the default <code>VACore.ini</code> settings, which is generated from the repository's <code>VACore.ini.proto</code> by CMake. It requires an output called <code>MyDesktopHP</code>.
<pre><code>[Reproduction:MyTalkthroughHeadphones]
Class = Talkthrough
Enabled = true
Description = Generic talkthrough to output group
Outputs = MyDesktopHP
InputDetectorEnabled = false
OutputDetectorEnabled = false
RecordInputEnabled = false
RecordInputFilePath = $(ProjectName)_Reproduction_MyTalkthroughHeadphones_Input.wav
RecordOutputEnabled = false
RecordOutputFilePath = $(ProjectName)_Reproduction_MyTalkthroughHeadphones_Output.wav</code></pre>

										</p>
										
<h5>Low-frequency / subwoofer mixing reproduction (class <code>LowFrequencyMixer</code>) example</h5>										
<p>
<pre><code>[Reproduction:MySubwooferMixer]
Class = LowFrequencyMixer 
Enabled = true
Description = Generic low frequency (subwoofer) loudspeaker mixer
Outputs = Cave_SW
MixingChannels = ALL # Can also be a single channel, e.g. zero order of Ambisonics stream</code></pre></p>

<h5>Equalized headphones reproduction (class <code>Headphones</code>) example</h5>
<p>Two-channel equalization using FIR filtering based on post-processed inverse headphone impulse responses measured through in-ear microphones.
<pre><code>[Reproduction:MyHD600]
Class = Headphones
Description = Equalized Sennheiser HD600 headphones
Enabled = true
# Headphone impulse response inverse file path (can be normalized, but gain must then be applied for calibration)
HpIRInvFile = HD600_all_eq_128_stereo.wav
HpIRInvFilterLength = 22050 # optional, can also be obtained from IR filter length
# Headphone impulse response inverse gain for calibration ( HpIR * HpIRInv == 0dB )
HpIRInvCalibrationGainDecibel = 0.1
Outputs = MyHD600HP</code></pre></p>
										
<h5>Multi-channel cross-talk cancellation reproduction (class <code>NCTC</code>) example</h5>
<p>Requires an output called <code>MyDesktopLS</code>. In case of a dynamic NCTC reproduction, only one receiver can be tracked (indicated by <code>TrackedListenerID</code>, which is oriented and located based on a <i>real-world pose</i>). <code>DelaySamples</code> shifts the final CTC filters to obtain causal filters. The amount of the delay has to be set reasonably with regard to <code>CTCFilterLength</code> (e.g., apply a shift of half the filter length). 
<pre><code>[Reproduction:MyNCTC]
Class = NCTC
Enabled = true
Description = Crosstalk cancellation for N loudspeakers
Outputs = MyDesktopLS
TrackedListenerID = 1
# algorithm: reg|...
Algorithm = reg
RegularizationBeta = 0.001
DelaySamples = 2048
CrossTalkCancellationFactor = 1.0
WaveIncidenceAngleCompensationFactor = 1.0
UseTrackedListenerHRIR = false
CTCDefaultHRIR = $(DefaultHRIR)
Optimization = OPTIMIZATION_NONE</code></pre></p>
										
<h5>Higher-order Ambisonics decoding (class <code>HOA</code>) example</h5>
<p>Creates a decoding matrix based on a given output configuration, but can only be used for one output.
<pre><code>[Reproduction:MyAmbisonics]
Class = HOA
Enabled = true
Description = Higher-Order Ambisonics
TruncationOrder = 3
Algorithm = HOA
Outputs = VRLab_Horizontal_LS
ReproductionCenterPos = AUTO # or x,y,z</pre></code></p>


<h5>Ambisonics binaural mixdown (class <code>AmbisonicsBinauralMixdown</code>) example</h5>
<p>Encodes the individual orientations of loudspeakers in a loudspeaker setup using binaural technology based on the <code>VirtualOutput</code> group. It can also be used for a virtual Ambisonics downmix with ideal spatial sampling layout.
<pre><code>[Reproduction:AmbisonicsBinauralMixdown]
Class = AmbisonicsBinauralMixdown
Enabled = true
Description = Binaural mixdown of virtual loudspeaker setup using HRIR techniques
TruncationOrder = 3
Outputs = MyDesktopHP
VirtualOutput = MyDesktopLS
TrackedListenerID = 1
HRIRFilterLength = 128</code></pre></p>

										
										<h5>Other reproduction module examples</h5>
										<p>
Every specific reproduction module has its own specific set of parameters. A discussion of every functional detail is beyond the scope of this introduction. As all configurations are parsed in the constructor of the respective module, their functionality can sometimes only be fully understood by investigating the source code. To facilitate this, the Redstart GUI application includes dialogs to create and interact with those reproduction modules, additionally offering information when hovering over the GUI elements.
										</p>
										
									</section>
									
									<hr />
									
									<section id="control">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
										<h3>Controlling a Virtual Acoustics instance</h3>
										<p>
Once your VA application is running as configured, you will eventually want to create a virtual scene and modify its entities. Scene control is possible via scripts and tracking devices (e.g., NaturalPoint's OptiTrack). The VA interface provides a list of methods that let you trigger updates and control settings.<br />
										</p>
										
										<h4>Control VA using Matlab</h4>
										<p>
										The most common way to control VA for prototyping, testing, and in the scope of listening experiments is by using <b>MathWorks' Matlab</b>. VA provides a Matlab binding and a convenience class called <code>itaVA</code>. Once initialized, the class object can be connected to the VA server application over a TCP/IP network connection (or the local network port), as already described in the <a href="overview.html">overview section on controlling VA</a>.<br />
You can find the <code>itaVA.m</code> Matlab class along with the required files for communication with VA in the <a href="download.html">VA package under the <code>matlab</code> folder</a>. In case you are building and deploying <code>VAMatlab</code> on your own (for your platform), or if it is missing, look out for <code>build_itaVA*.m</code> scripts that will generate the convenience class around the <code>VAMatlab</code> executable. Adding this folder to the Matlab path list will enable permanent access from the console, independently of the current working directory.
										<br />
To get started, inspect the example files and use Matlab's tab completion on an instance of the <code>itaVA</code> class to discover its self-explanatory methods, i.e., when executing
Dipl.-Ing. Jonas Stienen's avatar
WIP    
Dipl.-Ing. Jonas Stienen committed
767
										<pre><code>va = itaVA</code></pre>
The list of available methods is sorted by getter and setter nomenclature (<code>va.get_*</code> and <code>va.set_*</code>), followed by the entity (<code>sound_receiver</code>, <code>sound_source</code>, <code>sound_portal</code>) and the actual action. To create entities, directivities and more, use the <code>va.create_*</code> methods.
Dipl.-Ing. Jonas Stienen's avatar
WIP    
Dipl.-Ing. Jonas Stienen committed
769
770
771
772
										<br />
										<br />
										
										<blockquote>
										Note: All example calls to control VA are shown in <b>Matlab code style</b>. The naming convention in other scripting languages, however, is very similar. C++ and C# methods use capitalized words without underscores.
										</blockquote>
										</p>
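										<p>
										As an illustrative sketch, a minimal session with the convenience class could look as follows. All identifiers, the sound file name and the server address are assumptions for demonstration, not part of a concrete setup; the sound file must be locatable via the server's search paths.
										<pre><code>va = itaVA; % create the convenience class instance
va.connect( 'localhost' ); % connect to a running VA server
X = va.create_signal_source_buffer_from_file( 'demosound.wav' ); % assumed example file
S = va.create_sound_source( 'MySource' );
va.set_sound_source_signal_source( S, X );
R = va.create_sound_receiver( 'MyReceiver' );
va.set_sound_receiver_position( R, [ 0 1.7 0 ] );
va.set_signal_source_buffer_playback_action( X, 'play' )</code></pre>
										</p>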
										
										<h4>Control VA using Python</h4>
778
										<p>
A Python VA module is available that facilitates network access. It can be installed for execution from anywhere, or it can be copied to and executed from a local folder. To obtain the package and example scripts, <a href="download.html">download a package that includes the Python binding</a> (only available for Python 3.6 and recent compilers).
										</p>
										
										<h4>Control VA using Unity</h4>
										<p>
Unity, a 3D and scripting development environment for games and Virtual Reality applications, allows a more intuitive and playful way to use VA. The <code>VAUnity</code> C# scripts extend a Unity <code>GameObject</code> and communicate its properties to a VA server. For this, the C# VA binding, which comes with the <a href="download.html">binary packages in the download section</a>, is required. With this approach, no knowledge of a scripting or programming language is required, only a copy of Unity. How to use VA and Unity is described in the <a href="https://git.rwth-aachen.de/ita/VAUnity"> README file of the project repository</a>.
										</p>
										
										<p>&nbsp;</p>
										
										<h4>Global gain and muting</h4>
										<p>
										To control the global input gains (sound card software input channels), use
										<pre><code>va.set_input_gain( 1.0 ) # for values between 0 and 1</code></pre>
										</p>
										<p>
										To mute the input, use
										<pre><code>va.set_input_muted( true ) # or false to unmute</code></pre>
										</p>
										<p>
The same applies to the global output gain (sound card software output channels)
										<pre><code>va.set_output_gain( 1.0 )<br />va.set_output_muted( true ) # or false to unmute</code></pre>
										</p>
										
										<h4>Global auralization mode</h4>
										<p>
In the renderers, the effective auralization mode is formed by a logical AND combination of the global auralization mode, the sound receiver auralization mode and the sound source auralization mode. Deactivating an acoustic phenomenon globally, such as the spreading loss, will therefore affect all rendered sound paths.
										
										<pre><code>va.set_global_auralization_mode( '-DS' ) # ... to disable direct sound<br />va.set_global_auralization_mode( '+SL' ) # ... to enable spreading loss, e.g. 1/r distance law<br /></code></pre>
										
										Find the appropriate identifier for every auralization mode in the <a href="overview.html">overview table</a>.
										</p>
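										<p>
										The AND combination described above can be sketched as a set intersection. The mode sets in this snippet are illustrative and not queried from a server:
										<pre><code>% Only phenomena enabled in all three mode sets are rendered
global_am   = { 'DS', 'ER', 'SL' };
source_am   = { 'DS', 'SL' };
receiver_am = { 'DS' };
effective_am = intersect( intersect( global_am, source_am ), receiver_am ) % -> { 'DS' }</code></pre>
										</p>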
										
										<h4>Log level</h4>
<p>The VA log level on the server side can be changed using
										<pre><code>va.set_log_level( 3 ) # 0 = quiet; 1 = errors; 2 = warnings (default); 3 = info; 4 = verbose; 5 = trace;</code></pre>
										Increasing the log level is potentially helpful to detect problems if the current log level is not high enough to throw an indicative warning message.
										</p>
										
										<h4>Search paths</h4>
										<p>At runtime, search paths can be added to the VA server using							
										<pre><code>va.add_search_path( './your/data/' )</code></pre>
Note that the search path has to be available on the server side if you are not running VA on the same machine. Wherever possible, add search paths and use file names only; never use absolute paths for input files. If your server is not running on the same machine, consider adding search paths via the configuration at startup.
										</p>
										
										<h4>Query registered modules</h4>
										<p>To retrieve information on the available modules, use				
										<pre><code>modules = va.get_modules()</code></pre>
fpa's avatar
fpa committed
827
										This method will return any registered VA module, including all renderer and reproduction modules as well as the core itself.
Dipl.-Ing. Jonas Stienen's avatar
WIP    
Dipl.-Ing. Jonas Stienen committed
828
829
830
831
										</p>
										<p>
										All modules can be called using 			
										<pre><code>out_args = va.call_module( 'module_id', in_args )</code></pre>
										where <code>in_args</code> and <code>out_args</code> are structs with specific fields that depend on the module you are calling.
										Usually, a struct field with the name <code>help</code> or <code>info</code> returns useful information on how to work with the respective module:
										</p>
										<p>										
										<pre><code>va.call_module( 'module_id', struct('help',true) )</code></pre>										
										</p>
										<p>
										To work with renderers, use
										<pre><code>renderers = va.get_rendering_modules()<br />params = va.get_renderer_parameters( 'renderer_id' )<br />va.set_renderer_parameters( 'renderer_id', params )</code></pre>
										Again, all parameters are returned as structs. More information on a parameter set can be obtained using structs containing the field <code>help</code> or <code>info</code>. It is good practice to use the parameter getter and inspect the key/value pairs before modifying and re-setting the module with the new parameters.
										</p>
										<p>
										For reproduction modules, use
										<pre><code>reproductions = va.get_reproduction_modules()<br />params = va.get_reproduction_parameters( 'reproduction_id' )<br />va.set_reproduction_parameters( 'reproduction_id', params )</code></pre>
										Querying and re-setting parameters works in the same way as described for rendering modules.
										</p>
										
									</section>
									
									<hr />
									
									<section id="scene_handling">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										<h3>How to create and modify a scene in Virtual Acoustics</h3>
										<p>
										In VA, everything that is not static is considered part of a dynamic <i>scene</i>. All sound sources, sound portals, sound receivers, underlying geometry and source/receiver directivities are potentially dynamic and are therefore stored and accessed using a history concept. They can be modified at any time during their lifetime. Renderers pick up modifications and react to the new state, for example, when a sound source is moved or a sound receiver is rotated.<br />
										Updates are triggered asynchronously by the user or by another application and can also be synchronized, ensuring that all signals are started or stopped within one audio frame.										
										</p>
										
										<h4>Sound sources</h4>
										<p>Sound sources can be created by using										
										<pre><code>S = va.create_sound_source()</code></pre>
										</p>
										<p>
										or created with an optional name										
										<pre><code>S = va.create_sound_source( 'Car' )</code></pre>
										<code>S</code> will contain a unique numerical identifier which is required to modify the sound source.
										</p>
										
																														<blockquote>A sound source (as well as a sound receiver) can only be auralized if it has been placed somewhere in 3D space. Otherwise it remains in an invalid state.</blockquote>
																			
										<p>Specify a position as a three-dimensional vector ...
										<pre><code>va.set_sound_source_position( S, [ x y z ] )</code></pre>
										
										</p>
										
										<p>... and an orientation using a four-dimensional quaternion
										<pre><code>va.set_sound_source_orientation( S, [ a b c d ] )</code></pre>
										following the quaternion coefficient order <code>a + bi + cj + dk</code>.
										</p>
										
										<p>It is also possible to set both values at once using a pose (position and orientation)
										<pre><code>va.set_sound_source_pose( S, [ x y z ], [ a b c d ] )</code></pre>
										</p>
										
										<p>You may also use a special view-and-up vector orientation, where the default view vector points towards negative Z direction and the up vector points towards positive Y direction according to a right-handed OpenGL coordinate system.
										<pre><code>va.set_sound_source_orientation_view_up( S, [ vx vy vz ], [ ux uy uz ] )</code></pre>
										</p>
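										<p>For example, the default orientation stated above (view towards negative Z, up towards positive Y) can be set explicitly with</p>
										<pre><code>va.set_sound_source_orientation_view_up( S, [ 0 0 -1 ], [ 0 1 0 ] )</code></pre>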
										
										<p>The corresponding getter functions are
										<pre><code>p = va.get_sound_source_position( S )
q = va.get_sound_source_orientation( S )
[ p, q ] = va.get_sound_source_pose( S )
[ v, u ] = va.get_sound_source_orientation_view_up( S )
</code></pre>
with <code>p = [x y z]'</code>, <code>q = [a b c d]'</code>, <code>v = [vx vy vz]'</code>, and <code>u = [ux uy uz]'</code>, where <code>'</code> symbolizes the vector transpose. 
										</p>
																			
										<p>To get or set the name of a sound source, use
										<pre><code>va.set_sound_source_name( S, 'AnotherCar' )
sound_source_name = va.get_sound_source_name( S )
</code></pre>
										</p>
										
										<p>Specific parameter structs can be set or retrieved. They depend on special features and are used for prototyping, for example, if sound sources require additional values for new renderers.
										<pre><code>va.set_sound_source_parameters( S, params )
params = va.get_sound_source_parameters( S )
</code></pre>
										</p>
										
										<p>The auralization mode can be modified and returned using
										<pre><code>va.set_sound_source_auralization_mode( S, '+DS' )
am = va.get_sound_source_auralization_mode( S )
</code></pre>
This call would, for example, activate the direct sound. Other variants include
										<pre><code>va.set_sound_source_auralization_mode( S, '-DS' )
va.set_sound_source_auralization_mode( S, 'DS, IS, DD' )
va.set_sound_source_auralization_mode( S, 'ALL' )
va.set_sound_source_auralization_mode( S, 'NONE' )
va.set_sound_source_auralization_mode( S, '' )
</code></pre>
										</p>
										
										<p>A directivity, referenced by its numerical identifier, can be assigned to a sound source using
										<pre><code>va.set_sound_source_directivity( S, D )
D = va.get_sound_source_directivity( S )
</code></pre>
The handling of directivities is described below in the input data section.
										</p>
										
										<p>To mute (true) and unmute (false) a source, type
										<pre><code>va.set_sound_source_muted( S, true )
mute_state = va.get_sound_source_muted( S )
</code></pre>
										</p>
										
										<p>To control the level of a sound source, assign the sound power in watts
										<pre><code>va.set_sound_source_sound_power( S, P )
P = va.get_sound_source_sound_power( S )
</code></pre>
The default value of <b>31.67 mW (105 dB re 1e-12 W)</b> corresponds to <b>1 Pascal (94.0 dB SPL re 20e-6 Pascal) at a distance of 1 m</b> under spherical spreading. The final gain of a sound source is linked to the input signal, which is explained below. However, a <b>digital signal with an RMS value of 1.0</b> (e.g., a sine wave with a peak value of sqrt(2)) will reproduce 94 dB SPL @ 1 m. A directivity may alter this value for a certain direction, but a calibrated directivity will not change the overall radiated sound power of the source when integrating over a surrounding hull.
										</p>
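										<p>As a sketch of the conversion between a sound power level and the power in watts (this is the standard relation for levels re 1e-12 W, not a VA API feature):</p>
										<pre><code>L_W = 100;                      % desired sound power level in dB re 1e-12 W
P = 10^( ( L_W - 120 ) / 10 );  % = 0.01 W
va.set_sound_source_sound_power( S, P )
</code></pre>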
										
										<p>A list of all created sound sources is returned by
										<pre><code>source_ids = va.get_sound_source_ids()</code></pre>
										</p>
										
										<p>Sound sources can be deleted with
										<pre><code>va.delete_sound_source( S )</code></pre>
										</p>
										
										<p>In contrast to all other sound objects, sound sources can be assigned a <b>signal source</b>. It feeds the sound pressure time series for that source and is referred to as the <i>signal</i> (speech, music, sounds). See below for more information on signal sources. In combination with the sound power and the directivity (if assigned), the signal source determines the time-dependent sound emitted from the source. For a calibrated auralization, the combination of the three components has to be physically consistent.
										<pre><code>va.set_sound_source_signal_source( sound_source_id, signal_source_id )</code></pre>
										</p>
										
										
<h4>Sound receivers</h4>
<p>Except for the sound power method and the signal source adapter, all sound source methods are equally valid for sound receivers (see above). Just substitute <code>source</code> with <code>receiver</code>. A receiver can also be a human listener, in which case the <i>receiver directivity</i> will be an <b>HRTF</b>.
<br />
<br />
The VA interface provides some special features for receivers that are only meaningful in binaural technology. The head-above-torso orientation (HATO) of a human listener can be set and retrieved as a quaternion by the methods
<pre><code>va.set_sound_receiver_head_above_torso_orientation( sound_receiver_id, [ a b c d ] )
q = va.get_sound_receiver_head_above_torso_orientation( sound_receiver_id )</code></pre>
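For example, a head turned 30 degrees to the left above a fixed torso is a rotation about the vertical (positive Y) axis; assuming the same coefficient order <code>a + bi + cj + dk</code> as for regular orientations, this reads
<pre><code>theta = deg2rad( 30 );
q = [ cos( theta / 2 ) 0 sin( theta / 2 ) 0 ];  % approx. [ 0.966 0 0.259 0 ]
va.set_sound_receiver_head_above_torso_orientation( sound_receiver_id, q )
</code></pre>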

In common datasets like the <b>FABIAN HRTF</b> dataset (available from the <a href="http://www.opendaff.org" target="_blank">OpenDAFF project website</a>), only a certain HATO range within the horizontal plane (around the positive Y axis of the right-handed, Cartesian OpenGL coordinate system) is present, which accounts for simplified head rotations above a fixed torso. Many listening experiments are conducted from a fixed seat while the user's head orientation is tracked. Here, a HATO HRTF appears more suitable, at least if an artificial head is used.
<br /><br />
Additionally, in Virtual Reality applications with loudspeaker-based setups, user motion is typically tracked inside a specific area. Some reproduction systems require knowledge of the exact position of the user's head and torso to apply adaptive sweet spot handling (like cross-talk cancellation). The VA interface therefore includes receiver-oriented methods that extend the virtual pose with a so-called real-world pose. The user's absolute position and orientation (pose) relative to the hardware in the lab should be set using one of the following setters
<pre><code>va.set_sound_receiver_real_world_pose( sound_receiver_id, [ x y z ], [ a b c d ] )
va.set_sound_receiver_real_world_position_orientation_vu( sound_receiver_id, [ x y z ], [ vx vy vz ], [ ux uy uz ] )
</code></pre>
Corresponding getters are
<pre><code>[ p, q ] = va.get_sound_receiver_real_world_pose( sound_receiver_id )
[ p, v, u ] = va.get_sound_receiver_real_world_position_orientation_vu( sound_receiver_id )
</code></pre>
Also, HATOs are supported (in case a future reproduction module makes use of HATO HRTFs)
<pre><code>va.set_sound_receiver_real_world_head_above_torso_orientation( sound_receiver_id, [ a b c d ] )
q = va.get_sound_receiver_real_world_head_above_torso_orientation( sound_receiver_id )
</code></pre>
</p>
										
																				
										<h4>Sound portals</h4>
										<p>Sound portals have been added to the interface for future use but are currently not supported by the available renderers. Their main purpose will be building acoustics applications, where portals are combined to model flanking transmission through walls and ducts.</p>
										
																				
<h4>Signal sources</h4>
<p>
Sound signals or signal sources represent the sound pressure time series that are emitted by a source.<br />
Some are <i>unmanaged</i> and are directly available, others have to be created. To get a list with detailed information on currently available signal sources (including those created at runtime), type
</p>
<pre><code>va.get_signal_source_infos()</code></pre>

<p>
In general, a <i>signal source</i> is attached to one or many <i>sound sources</i> like this:</p>										
<pre><code>va.set_sound_source_signal_source( sound_source_id, signal_source_id )</code></pre>

										
										<h5>Buffer signal source</h5>
										<p>Audio files that can be attached to sound sources are usually single channel anechoic WAV files. In VA, an audio clip can be loaded as a <b>buffer signal source</b> with special control mechanisms. It supports macros and uses the search paths to locate a file. Using relative paths is highly recommended. Two examples are provided in the following:
										<pre><code>signal_source_id = va.create_signal_source_buffer_from_file( 'filename.wav' )<br >demo_signal_source_id = va.create_signal_source_buffer_from_file( '$(DemoSound)' )</code></pre>
										The <code>DemoSound</code> macro points to the 'Welcome to Virtual Acoustics' anechoically recorded file in WAV format, which resides in the common <code>data</code> folder. Make sure that the VA application can find the common <code>data</code> folder, which is also added as a search path in the default configurations.
										<br /><br />
										Now, the signal source can be attached to a sound source using
										<pre><code>va.set_sound_source_signal_source( sound_source_id, signal_source_id )</code></pre>
										Any buffer signal source can be started, stopped and paused. It can also be set to looping or non-looping (default) mode.										
										<pre><code>va.set_signal_source_buffer_playback_action( signal_source_id, 'play' )
va.set_signal_source_buffer_playback_action( signal_source_id, 'pause' )
va.set_signal_source_buffer_playback_action( signal_source_id, 'stop' )
va.set_signal_source_buffer_looping( signal_source_id, true )
</code></pre>

										To receive the current state of the buffer signal source, use
										<pre><code>playback_state = va.get_signal_source_buffer_playback_state( signal_source_id )</code></pre>
										</p>
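										<p>A minimal sketch for waiting until a non-looping clip has finished (the state string <code>'stopped'</code> is an assumption; inspect the return value of the state getter for the exact naming):</p>
										<pre><code>va.set_signal_source_buffer_looping( signal_source_id, false )
va.set_signal_source_buffer_playback_action( signal_source_id, 'play' )
while ~strcmp( va.get_signal_source_buffer_playback_state( signal_source_id ), 'stopped' )
    pause( 0.1 ) % poll at 10 Hz
end
</code></pre>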
										
<h5>Input device signal sources</h5>
<p>
Input channels of the sound card can be used directly as signal sources (microphones, electrical instruments, etc.). They are <i>unmanaged</i>, i.e. they can not be created or deleted; all channels are made available individually on startup and are attached by their identifier, e.g.
</p>
<pre><code>va.set_sound_source_signal_source( sound_source_id, 'inputdevice1' )</code></pre>
<p>
for the first channel, and so on.
</p>
										
										<!--
										<h5>Machine signal source</h5>
										<p>
										In VA, a machine signal source is an assembly of three audio clips that are stored in a buffer with usage that is similar to a single looping audio file. The main difference is, that a machine signal source comprises a start, an idle and a stop sequence. A machine will start with the start sound, move over to the idle sound and will cross-fade to the stop sound on request. It can also be tuned by setting a playback speed (or RPM in case of a rotatory engine) if required.
										<br /><br />
										@todo
										</p>
										-->
										
										<h5>Text-to-speech (TTS) signal source</h5>
										<p>
										The TTS signal source generates speech from text input. Because it uses the commercial <i>CereVoice</i> library by <i>CereProc</i>, it is not included in the publicly downloadable VA package. However, if you have access to the <i>CereVoice</i> library and can build VA with TTS support, this is how it works in <code>Matlab</code>:
<pre><code>tts_signal_source = va.create_signal_source_text_to_speech( 'Heathers beautiful voice' )
tts_in = struct();
tts_in.voice = 'Heather';
tts_in.id = 'id_welcome_to_va';
tts_in.prepare_text = 'welcome to virtual acoustics';
tts_in.direct_playback = true;
va.set_signal_source_parameters( tts_signal_source, tts_in )
</code></pre>
Do not forget that a signal source can only be auralized in combination with a sound source. For more information, refer to the <a href="https://git.rwth-aachen.de/ita/toolbox/blob/master/applications/VirtualAcoustics/VA/itaVA_example_text_to_speech.m">text-to-speech example</a> in the <a href="http://www.ita-toolbox.org" target="_blank">ITA-Toolbox for Matlab</a>.
										</p>
										
										<h5>Other signal sources</h5>
										<p>VA also provides specialized signal sources which can not be covered in detail in this introduction. Please refer to the source code for proper usage.</p>
										
										
										<h4>Scenes</h4>
										<p>Scenes are a prototype-like concept that allows renderers to act differently depending on the requested scene identifier. This is useful to implement different behaviour based on a user-triggered scene that should be loaded, for example, a room acoustics situation or a city soundscape. Most renderers ignore these calls, but renderers like the room acoustics renderer use this concept as long as direct geometry handling is not fully implemented.</p>
										
								
										<h4>Directivities (including HRTFs)</h4>
										<p>
										Sound source and receiver directivities are usually made available as a file resource including multiple directions on a sphere for far-field usage. VA currently supports the OpenDAFF format with time domain and magnitude spectrum content type. They can be loaded with
										<pre><code>directivity_id = va.create_directivity_from_file( 'my_individual_hrtf.daff' )</code></pre>
										
										VA ships with the <a href="http://www.akustik.rwth-aachen.de/go/id/pein/lidx/1" target="_blank">ITA artificial head HRTF dataset</a> (technically, the DAFF file contains this dataset as HRIRs in the time domain), which is available under a Creative Commons license for academic use.<br />
										The default configuration files and Redstart sessions include this HRTF dataset as <code>DefaultHRIR</code> macro, and it can be created using
										<pre><code>directivity_id = va.create_directivity_from_file( '$(DefaultHRIR)' )</code></pre>
										Make sure that the VA application can find the common <code>data</code> folder, which is also added as an include path in default configurations.										
										<br /><br />
										Directivities can be assigned to a source or receiver with
										<pre><code>va.set_sound_source_directivity( sound_source_id, directivity_id )<br />va.set_sound_receiver_directivity( sound_receiver_id, directivity_id )</code></pre>
										</p>
										
<h4>Homogeneous medium</h4>
<p>VA provides support for rudimentary homogeneous medium parameters that can be set by the user. The data is accessed by rendering and reproduction modules (mostly to receive the sound speed value for delay calculation). Values are always in SI units (meters, seconds, etc). Additionally, a user-defined set of parameters is provided in case a prototyping renderer requires further specialized medium information (may also be used for non-homogeneous definitions). Here is the overview of setters and getters:
</p>

<p>Speed of sound in m/s
<pre><code>va.set_homogeneous_medium_sound_speed( 343.0 )
sound_speed = va.get_homogeneous_medium_sound_speed()</code></pre></p>

<p>Temperature in degree Celsius
<pre><code>va.set_homogeneous_medium_temperature( 20.0 )
temperature = va.get_homogeneous_medium_temperature()</code></pre></p>

<p>Static pressure in Pascal, defaults to the norm atmosphere
<pre><code>va.set_homogeneous_medium_static_pressure( 101325.0 )
static_pressure = va.get_homogeneous_medium_static_pressure()</code></pre></p>

<p>Relative humidity in percentage (ranging from 0.0 to 100.0 or above)
<pre><code>va.set_homogeneous_medium_relative_humidity( 75.0 )
humidity = va.get_homogeneous_medium_relative_humidity()</code></pre></p>

<p>Medium shift / 3D wind speed in m/s
<pre><code>va.set_homogeneous_medium_shift_speed( [ x y z ] )
shift_speed = va.get_homogeneous_medium_shift_speed()</code></pre></p>

<p>Prototyping parameters (user-defined struct)
<pre><code>va.set_homogeneous_medium_parameters( medium_params )
medium_params = va.get_homogeneous_medium_parameters()</code></pre></p>
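<p>The medium values are set individually. As a sketch, the sound speed can be updated consistently with a new temperature using the common linear approximation <code>c = 331.3 + 0.606 * T</code> (an approximation chosen here for illustration, not a VA built-in):</p>
<pre><code>T = 25.0;                                                   % temperature in degree Celsius
va.set_homogeneous_medium_temperature( T )
va.set_homogeneous_medium_sound_speed( 331.3 + 0.606 * T )  % approx. 346.5 m/s
</code></pre>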

<br />
										
										
										<h4>Geometry</h4>
										<p>Geometry interface calls are for future use and are currently not supported by the available renderers. The concept behind geometry handling is real-time environment manipulation for indoor and outdoor scenarios using VR technology like Unity or plugin adapters from CAD modelling applications like SketchUp.</p>
										
										<h4>Acoustic materials</h4>
										<p>Acoustic material interface calls are for future use and are currently not supported by available renderers. Materials are closely connected to geometry, as a geometrical surface can be linked to acoustic properties represented by the material.</p>
										
<h4>Solving synchronisation issues</h4>
<p>
Scripting languages like Matlab are problematic by nature when it comes to timing: evaluation durations scatter unpredictably and timers are not precise enough. This becomes a major issue when, for example, a continuous motion of a sound source should be performed with a clean Doppler shift. A simple loop with a timeout will result in audible motion jitter, as the timing of each loop body execution diverges significantly. Also, if a music band should start playing at the same time and the playback is started by subsequent scripting lines, it is very likely that the instruments end up out of sync.
</p>

<h5>High-performance timeout</h5>
<p>
To avoid timing problems, the VA Matlab binding provides a high-performance timer that is implemented in C++. It should be used wherever a synchronous update is required, mostly for moving sound sources or sound receivers. An example for a properly synchronized update loop at 60 Hertz that incrementally drives a source from the origin into positive X direction until it is 100 meters away:
</p>
<pre><code>S = va.create_sound_source()

va.set_timer( 1 / 60 )
x = 0
while( x < 100 )
	va.wait_for_timer;
	va.set_sound_source_position( S, [ x 0 0 ] )
	x = x + 0.01
end

va.delete_sound_source( S )
</code></pre>


<h5>Synchronizing multiple updates</h5>
<p>
VA can execute updates synchronously at the granularity of the block rate of the audio stream process. Every scene update is withheld until the update lock is released. This feature is mainly used for a simultaneous playback start.
<pre><code>va.lock_update
va.set_signal_source_buffer_playback_action( drums, 'play' )
va.set_signal_source_buffer_playback_action( keys, 'play' )
va.set_signal_source_buffer_playback_action( base, 'play' )
va.set_signal_source_buffer_playback_action( sax, 'play' )
va.set_signal_source_buffer_playback_action( vocals, 'play' )
va.unlock_update
</code></pre>

<p>It is also useful for uniform movements of rigidly connected sound sources (like the four wheels of a vehicle). However, locking updates will inevitably lock out other clients (like trackers), so the lock should be released as soon as possible.
</p>
<pre><code>va.lock_update
va.set_sound_source_position( wheel1, p1 )
va.set_sound_source_position( wheel2, p2 )
va.set_sound_source_position( wheel3, p3 )
va.set_sound_source_position( wheel4, p4 )
va.unlock_update
</code></pre>


									</section>		

<hr />									
									<section id="rendering">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
<h3>Audio rendering</h3>
<p>
Audio rendering, next to reproduction, is the heart of VA. Each rendering instance combines the scene information to auralize sound in its own way and with a dedicated purpose. The VA core informs the audio renderers about scene changes (asynchronous updates) triggered by the user. The task of each rendering instance is to apply the requested changes as fast as possible.
<br /><br />
Rendering modules work pretty much on their own. They feature, however, some common and some specialized methods for interaction.

</p>
<p>
To get a list of available modules, use
<pre><code>renderer_ids = va.get_rendering_modules()</code></pre>
</p>
										
<p>Every rendering instance can be muted/unmuted and the output gain can be controlled.
<pre><code>va.set_rendering_module_muted( renderer_id, true )
va.set_rendering_module_gain( renderer_id, 1.0 )
mute_state = va.get_rendering_module_muted( renderer_id )
gain = va.get_rendering_module_gain( renderer_id )</code></pre>
</p>								
	
<p>Renderers may also be masked by auralization modes. To enable or disable certain auralization modes, use for example
<pre><code>va.set_rendering_module_auralization_mode( renderer_id, '-DS' )
va.set_rendering_module_auralization_mode( renderer_id, '+DS' )</code></pre>
</p>

<p>To obtain and set parameters, type
<pre><code>va.set_rendering_module_parameters( renderer_id, in_params )
out_params = va.get_rendering_module_parameters( renderer_id, request_params )</code></pre>
The <code>request_params</code> can usually be empty, but if a key <code>help</code> or <code>info</code> is present, the rendering module will provide usage information.
</p>
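<p>For example, usage information can be requested with a struct containing the <code>help</code> key (assuming <code>renderer_id</code> is one of the identifiers returned by <code>va.get_rendering_modules()</code>):</p>
<pre><code>request = struct( 'help', true );
info = va.get_rendering_module_parameters( renderer_id, request )
</code></pre>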

<p>A special feature, requested for Virtual Reality applications (background music, instructional speech, operator's voice), is a pair of create methods for sound sources and sound receivers that are only effective for a given <i>rendering instance</i> (explicit renderer). This is required if ambient clips should be played back without spatialization, or if certain circumstances demand that a source is processed by one single renderer only. In this way, computational power can be saved.
<pre><code>sound_source_id = va.create_sound_source_explicit_renderer( renderer_id, 'HitButtonEffect' )
sound_receiver_id = va.create_sound_receiver_explicit_renderer( renderer_id, 'SurveillanceCamMic' )</code></pre>
The sound sources and receivers created with this method are handled like normal entities but are only effective for the explicit rendering instance.
</p>

<h4>Binaural free field renderer</h4>
<p>
For proper time synchronization of this renderer with other renderers, a static delay that is added to the simulated propagation delay can be set.
This static delay is defined by a special parameter using a struct. In the following example, it is set to 100 ms.
<pre><code>in_struct = struct()
in_struct.AdditionalStaticDelaySeconds = 0.100
va.set_rendering_module_parameters( renderer_id, in_struct )</code></pre>
</p>

<p>
Special features of this renderer include individualized HRIRs. The anthropometric parameters are derived from a specific key/value layout of the receiver parameters combined under the key <code>anthroparams</code>. All parameters are provided in units of meters.
<pre><code>in_struct = struct()
in_struct.anthroparams = struct()
in_struct.anthroparams.headwidth = 0.12
in_struct.anthroparams.headheight = 0.10
in_struct.anthroparams.headdepth = 0.15
va.set_sound_receiver_parameters( sound_receiver_id, in_struct )</code></pre>
</p>

<p>
The current anthropometric parameters can be obtained by
<pre><code>params = va.get_sound_receiver_parameters( sound_receiver_id, struct() )
disp( params.anthroparams )</code></pre>
</p>


<h4>Prototype generic path renderer</h4>
<p>