<!DOCTYPE HTML>
<!--
	Landed by HTML5 UP
	html5up.net | @ajlkn
	Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html>
	<head>
		<title>Virtual Acoustics</title>
		<meta charset="utf-8" />
		<meta name="viewport" content="width=device-width, initial-scale=1" />
		<!--[if lte IE 8]><script src="assets/js/ie/html5shiv.js"></script><![endif]-->
		<link rel="stylesheet" href="assets/css/main.css" />
		<!--[if lte IE 9]><link rel="stylesheet" href="assets/css/ie9.css" /><![endif]-->
		<!--[if lte IE 8]><link rel="stylesheet" href="assets/css/ie8.css" /><![endif]-->
	</head>
	<body>
		<div id="page-wrapper">

			<!-- Header -->
				<header id="header">
					<h1 id="logo"><a href="index.html">Start</a></h1>
					<nav id="nav">
						<ul>
							<li>
								<a href="#">Quick access</a>
								<ul>
									<li><a href="overview.html">Overview</a></li>
									<li><a href="download.html">Download</a></li>
									<li><a href="documentation.html">Documentation</a></li>
									<li>
										<a href="start.html">Getting started</a>
										<ul>
											<li><a href="start.html#configuration">Configuration</a></li>
											<li><a href="start.html#control">Control</a></li>
											<li><a href="start.html#scene_handling">Scene handling</a></li>
											<li><a href="start.html#inputdata">Input data</a></li>
											<li><a href="start.html#rendering">Rendering</a></li>
											<li><a href="start.html#reproduction">Reproduction</a></li>
											<li><a href="start.html#tracking">Tracking</a></li>
										</ul>
									</li>
									<li>
										<a href="help.html">Get help</a>
										<ul>
											<li><a href="help.html#faq">FAQ</a></li>
											<li><a href="help.html#issue_tracker">Issue tracker</a></li>
											<li><a href="help.html#community">Community</a></li>
											<li><a href="help.html#nosupport">No support</a></li>
										</ul>
									</li>
									<li>
										<a href="developers.html">Developers</a>
										<ul>
											<li><a href="developers.html#api">C++ API</a></li>
											<li><a href="developers.html#dependencies">Dependencies</a></li>
											<li><a href="developers.html#configuration">Configuration</a></li>
											<li><a href="developers.html#build_guide">Build guide</a></li>
											<li><a href="developers.html#repositories">Repositories</a></li>
										</ul>
									</li>
									<li>
										<a href="research.html">Research</a>
										<ul>
											<li><a href="research.html#system">System papers</a></li>
											<li><a href="research.html#technology">Technology papers</a></li>
											<li><a href="research.html#applied">Applied papers</a></li>
										</ul>
									</li>
								</ul>
							</li>
							<li><a href="legal.html">Legal notice</a></li>
							<!--<li><a href="#" class="button special">Sign Up</a></li>-->
						</ul>
					</nav>
				</header>


			<!-- Main -->
				<div id="main" class="wrapper style1">
					<div class="container">
						<header class="major">
							<h2>Getting started</h2>
							<p>Auralization with Virtual Acoustics <br />
							<span style="font-size: 0.6em">This content is available under <a href="http://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></span>
							</p>
						</header>
						
								<!-- Content -->
								
									<section id="preface">
										<h3>Preface</h3>
										<p>Virtual Acoustics is a powerful tool for the auralization of virtual acoustic scenes and their reproduction. Getting started with VA involves three important steps:
										<ul><li><strong>Configuring the application</strong></li><li><strong>Controlling the core</strong></li><li><strong>Setting up a scene</strong></li></ul>
										
										The overall design goal was to keep things as simple as possible. However, some topics are inherently complex and cannot be simplified further. VA addresses professionals and is mainly used by scientists. Important features are never traded for convenience if the system's integrity is at stake. Hence, getting the most out of VA requires a profound understanding of the technologies involved. It is designed to offer the highest flexibility, which comes at the price of a demanding configuration. At the beginning, configuring VA is not trivial, especially if a loudspeaker-based audio reproduction shall be used. <br /><br />
										
										VA users can roughly be divided into two groups:
										<ul><li><strong>those who seek quick experiments with spatial audio and are happy with conventional playback over headphones</strong></li>
										<li><strong>those who want to employ VA for a sophisticated loudspeaker setup for (multi-modal) listening experiments and Virtual Reality applications</strong></li></ul>
										
										For the first group of users, some simple setups will already suffice for most of the things you aspire to. Such setups include, for example, a configuration for binaural audio rendering over a non-equalized off-the-shelf pair of headphones. Another configuration example contains a self-crafted interactive rendering application that exchanges pre-recorded or simulated FIR filters using Matlab or Python scripts for different purposes such as room acoustic simulations, building acoustics, or A/B live switching tests to assess the influence of equalization. The configuration effort is minimal and works out of the box if you use the Redstart applications or start a VA command line server with the corresponding core configuration file. If you consider yourself part of this group of users, skip the configuration part and <a href="#examples">have a look at the examples</a>. Thereafter, read the <a href="#control">control section</a> and the <a href="#scene_handling">scene handling section</a>.<br />
										<br />
										If you are willing to dive deeper into the VA framework, you are probably interested in how to adapt the software package for your purposes. The following sections describe how to set up VA for your goal from the very beginning.
										</p>
									</section>
									
									<hr />
									
									<section id="configuration">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
										<!--<a href="#" class="image fit"><img src="images/pic05.jpg" alt="" /></a>-->
										<h3>Virtual Acoustics configuration</h3>
									
									<p>VA can be configured using a section-based key-value parameter collection which is passed to the core instance during initialization. This is usually done by providing a path to a text-based INI file, referred to as <code>VACore.ini</code> in the following, although it can have an arbitrary name. If you use the <code>VAServer</code> application, you will work with this file only. If you only use the <code>Redstart</code> GUI application, you will probably never need it. However, the INI file can be exported from a Redstart session in case you do.
										</p>
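										<p>
										Assembled from the sections discussed below, a minimal <code>VACore.ini</code> skeleton has roughly this shape (a sketch for orientation only; every section is explained in detail in the remainder of this chapter):
<pre><code>[Paths]
data = data

[Macros]
DefaultHRIR = HRIR/ITA-Kunstkopf_HRIR_AP11_Pressure_Equalized_3x3_256.v17.ir.daff

[Audio driver]
Driver = Portaudio
Device = default

[Setup]

[OutputDevice:DesktopHP]
Type = HP
Channels = 1,2

[Output:MyDesktopHP]
Devices = DesktopHP

[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Reproductions = MyTalkthroughHeadphones

[Reproduction:MyTalkthroughHeadphones]
Class = Talkthrough
Outputs = MyDesktopHP
</code></pre>
										</p>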

										
										<h4>Basic configuration</h4>
										<h5>Paths</h5>
										<p>The <code>Paths</code> section allows for adding search paths to the core. If resources like head-related transfer functions (HRTFs), geometry files, or audio files are required, these search paths ensure that the requested files can be located. Relative paths are resolved from the folder where the VA server application is started. When using the provided batch start scripts on Windows, it is recommended to add the <code>data</code> and <code>conf</code> folders.</p>
										<p>
<pre><code>[Paths]

data = data
conf = conf

my_data = C:/Users/Me/Documents/AuralizationData
my_other_data = /home/me/auralization/input

</code></pre>
										</p>
										
										<h5>Files</h5>
										<p>In the <code>Files</code> section, you can name files that will be included as further configuration files. This is helpful when certain configuration sections must be <i>outsourced</i> to be reused efficiently. Outsourcing is especially convenient when switching between static sections like hardware descriptions for laboratories or setups, but can also be used for rendering and reproduction modules (see below). Avoid copying larger configuration sections that are reused frequently; use separate configuration files instead.</p>
										<p>
<pre><code>[Files]

old_lab = VASetup.OldLab.Loudspeakers.ini
#new_lab = VASetup.NewLab.Loudspeakers.ini

</code></pre>
										</p>
										
										<h5>Macros</h5>
										<p>The <code>Macros</code> section is helpful to write tidy scripts. Use macros whenever no specific input file is explicitly required. For example, if any HRTF can be used for a receiver in the virtual scene, the <code>DefaultHRIR</code> macro will point to the default HRTF data set, or head-related impulse response (HRIR) in time domain. Any defined macro will be replaced by its value by the core.<br />
										Usage: "$(MyMacroName)/file.abc" -> "MyValue/file.abc"<br />
										Macros are substituted in forward order of their key names (use with care) and otherwise stay untouched: A = B; C = $(A) -> $(C) yields B<br />
										The example macros provided below are a good-practice set that should be present in a configuration file in order to keep the example scripts valid.<br />
										Macros are also very helpful if certain exported file prefixes are desired, e.g., to get better structured file names for input and output recordings.
										<p>
<pre><code>[Macros]

DefaultHRIR = HRIR/ITA-Kunstkopf_HRIR_AP11_Pressure_Equalized_3x3_256.v17.ir.daff
HumanDir = Directivity/Singer.v17.ms.daff
Trumpet = Directivity/Trumpet1.v17.ms.daff

# Define some other macros (examples)
ProjectName = MyVirtualAcousticsProject

</code></pre>
										</p>
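										<p>
										Expressed as a configuration fragment, the chained substitution rule from above reads:
<pre><code>[Macros]

A = B
C = $(A)

# $(C) now resolves to B
</code></pre>
										</p>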
										
										
										<h5>Debug</h5>
										<p>The <code>Debug</code> section configures the initial behavior of the core, for example, the log level and input/output recording. If input or output recording is enabled, all channels of your physical or abstract device will be recorded. For devices with a lot of digital inputs and outputs, the channel count may reach up to 256 channels, the maximum channel number defined by the WAV format. Additionally, the data is stored as PCM data at a resolution of 32 bit, leading to high storage requirements. To avoid such excessive storage demands, only use this option if absolutely necessary. Otherwise, it is recommended to only record the output channels which were set, for example, in the playback modules (see below).<br />
										In the following, some macros are used (see Macros section above).
										<p>
<pre><code>[Debug]

# Record device input and store to hard drive (will record every available input channel)
InputRecordEnabled = false
InputRecordFilePath = $(ProjectName)_in.wav

# Record device output and store to hard drive (will record every available output channel)
OutputRecordEnabled = false
OutputRecordFilePath = $(ProjectName)_out.wav

# Set log level: 0 = quiet; 1 = errors; 2 = warnings (default); 3 = info; 4 = verbose; 5 = trace;
LogLevel = 3

</code></pre>
										</p>



<h4>Calibration</h4>
<p>
To properly calibrate a rendering and reproduction system, every component in the chain has to be carefully configured. Since digital signals, stored, for example, in a WAV file or in the buffers of the sound card, carry no inherent physical scale, a reference point enabling a proper calibration was defined. In VA, a digital value of 1.0 refers to 1 Pascal at a distance of 1 m by default. For example, a sine wave with a peak value of &radic;2 corresponds to 94 dB SPL at a distance of 1 m. This reference can also be changed to <b>124 dB</b> if lower amplitudes are necessary (and a sample type conversion from float to integer is performed along the output chain). This makes it necessary to use a powerful amplifier facilitating the reproduction of small sample values. Setting the internal conversion value to 124 dB avoids clipping at high values (but introduces a higher noise floor). To do so, include the following section into the configuration (the clarification comment can be dropped):
</p>

<p>
<pre><code>[Calibration]

# The amplitude calibration mode either sets the internal conversion from
# sound pressure to an electrical or digital amplitude signal (audio stream)
# to 94dB (default) or to 124dB. The rendering modules will use this calibration
# mode to calculate from physical values to an amplitude that can be forwarded
# to the reproduction modules. If a reproduction module operates in calibrated
# mode, the resulting physical sound pressure at receiver location can be maintained.

DefaultAmplitudeCalibrationMode = 94dB

</code></pre>
</p>
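<p>
As a quick sanity check of the 94 dB reference, the level of a sine wave with a digital peak value of &radic;2 can be computed by hand (a short Matlab sketch; the numbers follow directly from the definition of the sound pressure level):
<pre><code>p_peak = sqrt( 2 );           % peak sound pressure in Pa (digital peak value sqrt(2) -> sqrt(2) Pa)
p_rms  = p_peak / sqrt( 2 );  % RMS of a sine wave is peak / sqrt(2), i.e. 1 Pa
p_ref  = 20e-6;               % reference sound pressure in Pa
SPL    = 20 * log10( p_rms / p_ref )  % approx. 94 dB SPL
</code></pre>
</p>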
										
										
										<h4>Audio interface configuration</h4>
										<p>
										The audio interface controls the backend driver and the device. In the current version, for the <code>Driver</code> backend key, <code>ASIO</code> is supported on Windows only, whereas <code>Portaudio</code> is available on all platforms. By default, Portaudio with the default driver is used, which usually produces audible sound without further ado. However, the block sizes are large and the update rates are not sufficient for real-time auralization using motion tracking. Therefore, dedicated hardware and small block sizes should be used - and ASIO is recommended on Windows platforms.
										</p>
										
										<h5>ASIO example using ASIO4ALL v2</h5>
										<p>
										<a href="http://www.asio4all.de" target="_blank">ASIO4ALL</a> is a useful and well-implemented intermediate layer for audio I/O making it possible to use ASIO drivers for the internal hardware (and any other audio device available). It must be installed on the PC, first.
<pre><code>[Audio driver]

Driver = ASIO
Samplerate = 44100
Buffersize = AUTO
Device = ASIO4ALL v2
</code></pre>
Although it appears that the buffer size can be defined for ASIO devices, the ASIO backend will automatically detect the buffer size that has been configured by the driver when the <code>AUTO</code> value is set (recommended). Set the buffer size in the ASIO driver dialog of your physical device instead, and make sure that the sampling rates match.<br />
ASIO requires a device name to be defined by each driver host. Further common hardware device names are:
</p>

								<div class="table-wrapper">
									<table class="alt">
										<thead>
											<tr>
												<th width="16%">Manufacturer</th>
												<th width="32%">Device</th>
												<th>ASIO device name</th>
											</tr>
										</thead>
										<tbody>
											<tr>
												<td><b>RME</b></td>
												<td><i>Hammerfall DSP</i></td>
												<td><code>ASIO Hammerfall DSP</code></td>
											</tr>
											<tr>
												<td><b>RME</b></td>
												<td><i>Fireface USB</i></td>
												<td><code>ASIO Fireface USB</code></td>
											</tr>
											<tr>
												<td><b>RME</b></td>
												<td><i>MADIFace USB</i></td>
												<td><code>ASIO MADIface USB</code></td>
											</tr>
											<tr>
												<td><b>Focusrite</b></td>
												<td><i>2i2, 2i4, ...</i></td>
												<td><code>Focusrite USB 2.0 Audio Driver</code></td>
											</tr>
											<tr>
												<td><b>M-Audio</b></td>
												<td><i>Fast Track Ultra</i></td>
												<td><code>M-Audio Fast Track Ultra ASIO</code></td>
											</tr>
											<tr>
												<td><b>Steinberg</b></td>
												<td><i>UR22 MK2</i></td>
												<td><code>Yamaha Steinberg USB ASIO</code></td>
											</tr>
											<tr>
												<td><b>Realtek</b></td>
												<td><i>Realtek Audio HD</i></td>
												<td><code>Realtek ASIO</code></td>
											</tr>
											<tr>
												<td><b>Zoom</b></td>
												<td><i>H6</i></td>
												<td><code>ZOOM H and F Series ASIO</code></td>
											</tr>
											<tr>
												<td><b>ASIO4ALL</b></td>
												<td><i>any Windows device</i></td>
												<td><code>ASIO4ALL v2</code></td>
											</tr>
											<tr>
												<td><b>Reaper (x64)</b></td>
												<td><i>any Reaper device</i></td>
												<td><code>ReaRoute ASIO (x64)</code></td>
											</tr>
											
											
										</tbody>
										<tfoot>
											<tr>
												<td colspan="3">Table 1: Common ASIO device driver host names</td>
											</tr>
										</tfoot>
									</table>
								</div>
<p>
If you do not have any latency requirements, you can also use <code>Portaudio</code> under Windows and other platforms. The specific device names of Portaudio interfaces can be detected, for example, using the VLC player or Audacity. However, the <code>default</code> device is recommended simply because it will pick the audio device that is registered as the default device of your system. This is what most people need anyway, and the system tools can be used to change the output device.<br />
If the <code>Buffersize</code> is unknown, at least the native buffer size of the audio device should be used (which is most likely <code>1024</code> for on-board chips). Otherwise, timing will behave oddly, which has a negative side effect on the rendering.
<pre><code>[Audio driver]

Driver = Portaudio
Samplerate = 44100
Buffersize = 1024
Device = default
</code></pre>
</p>

										
										<h4>Audio hardware configuration</h4>
										
										<p>The <code>Setup</code> section describes the hardware environment in detail. It might seem a bit over the top, but the complex definition of hardware groups with logical and physical layers eases the re-use of physical devices for special setups and also allows for multiple assignments - similar to the RME matrix concept of TotalMix, except that volume control and mute toggling can be manipulated in real time using the VA interface instead of the ASIO control panel GUI.<br />
										The hardware configuration can be separated into inputs and outputs, but they are basically handled in the same manner. More importantly, the setup can be divided into <strong>devices of specialized types</strong> and <strong>groups that combine devices</strong>. Often, this concept is unnecessary and appears cumbersome, but there are situations where this level of complexity is required.<br />
										A <strong>device</strong> is a physical emitter (<code>OutputDevice</code>) or transducer (<code>InputDevice</code>) with a fixed number of channels and an assignment of (arbitrary but unique) channel indices. A broadband loudspeaker with one line input is a typical representative of the single-channel <code>LS</code> type <code>OutputDevice</code> that has a fixed pose in space. A pair of headphones is assigned the type <code>HP</code> and usually has two channels, but no fixed pose in space.<br />
										So far, there is only an input device type called <code>MIC</code> that has a single channel.
										<br /><br />
										Physical devices cannot be used directly for playback in VA. Instead, a reproduction module is connected with one or many <code>Outputs</code> - logical groups of <code>OutputDevices</code>.<br />
										Again, for headphones this seems useless because a headphone device will be represented by a virtual group of only one device. However, for loudspeaker setups this makes sense as, for example, a setup of 7 loudspeakers for spatial reproduction may be used by different groups which combine only 5, 4, 3, or 2 of the available loudspeakers to form an output group. In this case, only the loudspeaker identifiers are required, and channels and positions are made available by the physical device description. Following this strategy, repositioning of loudspeakers and re-assignment of channel indices is less error-prone because everything is organized in a single configuration section.
										</p>
										
										<h5>Headphone setup example</h5>
										<p>
										Let us assume you have a pair of Sennheiser HD 650 headphones at your disposal and you want to use them for binaural rendering and reproduction. This is the most common application of VA and results in the following configuration:
<pre><code>[Setup]

[OutputDevice:SennheiserHD650]
Type = HP
Description = Sennheiser HD 650 headphone hardware device
Channels = 1,2

[Output:DesktopHP]
Description = Desktop user with headphones
Devices = SennheiserHD650

</code></pre>
If you want to use another output jack for some reason, change the channels accordingly, say to <code>3,4</code>.

										</p>
										<h5>Loudspeaker setup example</h5>
										<p>
										Let us assume you have a square-shaped loudspeaker setup of Neumann KH 120 speakers at your disposal and want to use it for binaural rendering and reproduction. This is a common application of VA for a dynamic listening experiment in a hearing booth. For this scenario, the configuration file may look like this:
<pre><code>[Setup]

[OutputDevice:NeumannKH120_FL]
Type = LS
Description = Neumann KH 120 in front left corner of square
Channels = 1

[OutputDevice:NeumannKH120_FR]
Type = LS
Description = Neumann KH 120 in front right corner of square
Channels = 2

[OutputDevice:NeumannKH120_RR]
Type = LS
Description = Neumann KH 120 in rear right corner of square
Channels = 3

[OutputDevice:NeumannKH120_RL]
Type = LS
Description = Neumann KH 120 in rear left corner of square
Channels = 4

[Output:HearingBoothLabLS]
Description = Hearing booth laboratory loudspeaker setup
Devices = NeumannKH120_FL, NeumannKH120_FR, NeumannKH120_RR, NeumannKH120_RL

</code></pre>
Note: The order of devices in the output group is irrelevant for the final result. Each loudspeaker will receive the corresponding signal on the channel assigned to its device.

										</p>
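										<p>
										Because devices may be assigned to multiple groups, a subset of the same loudspeakers can additionally form a second output, for example a front stereo pair (a hypothetical fragment extending the setup above):
<pre><code>[Output:HearingBoothLabFrontLS]
Description = Front stereo pair re-using the square layout
Devices = NeumannKH120_FL, NeumannKH120_FR
</code></pre>
										</p>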
										
										
										<h5>Microphone setup example</h5>
										<p>
										The audio input configuration is similar to the output configuration but is not yet fully included in VA. If you want to use input channels as signal sources for a virtual sound source, assign the provided unmanaged signals called <code>audioinput1, audioinput2, ...</code>. The number refers to the input channel index beginning with 1, and you can get the signals by using the getters <code>GetSignalSourceInfos</code> or <code>GetSignalSourceIDs</code>.
<pre><code>[Setup]

[InputDevice:NeumannTLM170]
Type = MIC
Description = Neumann TLM 170
Channels = 1

[Input:BodyMic]
Description = Hearing booth talk back microphone
Devices = NeumannTLM170

</code></pre>

										</p>
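										<p>
										From a controlling script, such an input channel can then be attached to a virtual sound source directly via its unmanaged signal identifier (a Matlab sketch; the method names follow the get/set/create convention described in the control section below and should be verified against your binding version):
<pre><code>S = va.create_sound_source( 'MyTalkbackSource' );      % create a virtual source
va.set_sound_source_signal_source( S, 'audioinput1' )  % feed it from input channel 1
</code></pre>
										</p>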
										
<h4>Homogeneous medium</h4>
<p>
To override default values concerning the homogeneous medium that is provided by VA, include the following section and modify the values to your needs (the default values are shown here).
</p>
<p>									
<pre><code>[HomogeneousMedium]

DefaultSoundSpeed = 344.0 # m/s
DefaultStaticPressure = 101125.0 # [Pa]
DefaultTemperature = 20.0 # [Degree centigrade]
DefaultRelativeHumidity = 20.0 # [Percent]
DefaultShiftSpeed = 0.0, 0.0, 0.0 # 3D vector in m/s</code></pre>
</p>
										<h4>Rendering module configuration</h4>
										<p>
										To instantiate a rendering module, a section with a <code>Renderer:</code> prefix has to be included. The statement following <code>:</code> will be the unique identifier of this rendering instance. If you want to change parameters during execution, this identifier is required to call the instance. All renderers require some obligatory definitions, but the specific parameter set of each class needs a detailed description. For typical renderers, some examples are given below.
										</p>
										<h5>Required rendering module parameters</h5>
										<p>
<pre><code>Class = RENDERING_CLASS
Reproductions = REPRODUCTION_INSTANCE(S)</code></pre>
The rendering class refers to the type of renderer which can be taken from the tables in the <a href="overview.html#rendering">overview</a> section.<br />
The key <code>Reproductions</code> describes how to configure connections to reproduction modules. At least one reproduction module has to be defined, but the rendering stream can also be connected to multiple reproductions of the same or different type (e.g., talkthrough, equalized headphones and cross-talk cancellation). The only restriction is that the rendering output channel number has to match the reproduction module's input channel number. This prevents connecting a two-channel binaural renderer with, for example, an Ambisonics reproduction, which would take at least 4 channels.

										</p>
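										<p>
										For instance, a two-channel binaural rendering stream may feed several two-channel reproductions at once (a hypothetical fragment using module names from the examples further below):
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Reproductions = MyTalkthroughHeadphones, MyHD600
</code></pre>
										</p>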
										<h5>Optional rendering module parameters</h5>
										<p>
<pre><code>Description = Some informative description of this rendering module instance
Enabled = true
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav
</code></pre>
Rendering modules can be <i>enabled and disabled</i> to speed up setup changes without copying and pasting larger parts of a configuration section, especially as reproduction modules can only be instantiated if the sound card provides enough channels. This makes testing on a desktop PC and switching to a laboratory environment easier.
<br />
For rendering modules, only the <i>output</i> can be observed. A stream detector for the output can be activated that will produce level meter values, for example, for a GUI widget. The output of the active listener can also be recorded and exported as a WAV file. Recording starts with initialization and is exported to the hard disk drive after finalization, implying that the data is kept in RAM. If a high channel count is required and/or long recording sessions are planned, it is recommended to route the output through a DAW instead, e.g., with ASIO re-routing software devices like Reaper's ReaRoute ASIO driver. Macros are allowed to compose a more versatile output file name.
										</p>

										
										<h5>Binaural free field renderer (class <code>BinauralFreeField</code>) example</h5>
										
										<p>
										This example with all available key/value configuration pairs is included in the default <code>VACore.ini</code> settings, which is generated from the repository's <code>VACore.ini.proto</code> (by CMake). It requires a reproduction called <code>MyTalkthroughHeadphones</code>, shown further below.
<pre><code>[Renderer:MyBinauralFreeField]
Class = BinauralFreeField
Enabled = true
Reproductions = MyTalkthroughHeadphones
HRIRFilterLength = 256
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.1
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceiver = false
MotionModelLogEstimatedOutputReceiver = false
SwitchingAlgorithm = linear
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav</code></pre>
										A more detailed explanation of the motion model and further parameters is provided in the <a href="documentation.html">documentation</a>, specifying how the rendering works.
										</p>
										
										<h5>VBAP free field renderer (class <code>VBAPFreeField</code>) example</h5>
										
										<p>
										Requires an <code>Output</code> (3D positions of a loudspeaker setup) to render channel-based audio. Otherwise, it works similarly to the other free field renderers.
<pre><code>[Renderer:MyVBAPFreefield]
Class = VBAPFreeField
Enabled = true
Output = VRLab_Horizontal_LS
Reproductions = MixdownHeadphones</code></pre>
										</p>
										
										<h5>Ambisonics free field renderer (class <code>AmbisonicsFreeField</code>) example</h5>
										
										<p>
										Similar to the binaural free field renderer, but evaluates receiver directions based on a decomposition into spherical harmonics of a specific order (<code>TruncationOrder</code>). It requires a reproduction called <code>MyAmbisonicsDecoder</code>, which is shown further below.
<pre><code>[Renderer:MyAmbisonicsFreeField]
Class = AmbisonicsFreeField
Enabled = true
Reproductions = MyAmbisonicsDecoder
TruncationOrder = 3
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.1
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceiver = false
MotionModelLogEstimatedOutputReceiver = false
SwitchingAlgorithm = linear
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyRenderer_filename_may_including_$(ProjectName)_macro.wav</code></pre>

										</p>
										
										<h5>Ambient mixing renderer (class <code>AmbientMixer</code>) example</h5>
										<p>
										The ambient mixer takes the value of the key <code>OutputGroup</code> and sets the channel count for playback accordingly, as subsequent reproduction modules require matching channels. However, an arbitrary number of reproduction modules can be specified, as shown in the following example.
<pre><code>[Renderer:MyAmbientMixer]
Class = AmbientMixer
Description = Low-cost renderer to make sound audible without spatializations
Enabled = true
OutputGroup = MyDesktopHP
Reproductions = MyDesktopHP, MySubwooferArray</code></pre>
										</p>
										
										<h5>Binaural artificial room acoustics renderer (class <code>BinauralArtificialReverb</code>) example</h5>
										<p>
										Values are specified in SI units (e.g., seconds, meters, watts) and angles in degrees. The reverberation time may exceed the reverberation filter length (divided by the sampling rate), resulting in a cropped impulse response. This renderer requires and uses the sound receiver HRIR for spatialization, and it applies a sound power correction to match the direct sound energy if used together with the binaural free field renderer.
<pre><code>[Renderer:MyBinauralArtificialRoom]
Class = BinauralArtificialReverb
Description = Low-cost per receiver artificial reverberation effect
Enabled = true
Reproductions = MyTalkthroughHeadphones
ReverberationTime = 0.71
RoomVolume = 200
RoomSurfaceArea = 88
MaxReverbFilterLengthSamples = 88200
PositionThreshold = 1.0
AngleThresholdDegree = 30
SoundPowerCorrectionFactor = 0.05
TimeSlotResolution = 0.005
MaxReflectionDensity = 12000.0
ScatteringCoefficient = 0.1</code></pre>
										</p>
										
										<h5>Binaural room acoustics renderer (class <code>BinauralRoomAcoustics</code>) example</h5>
										<p>
										Requires the Room Acoustics for Virtual ENvironments (RAVEN) software module (see <a href="research.html">Research section</a>) or another room acoustics simulation backend. Note that the reverberation time may exceed the reverberation filter length (divided by the sampling rate), with the consequence that the generated impulse response will be cropped. This renderer requires and uses the specified sound receiver HRIR data set for spatialization and applies a sound power correction to match the direct sound energy if combined with the binaural free field renderer.
<pre><code>[Renderer:MyBinauralRoomAcoustics]
Class = BinauralRoomAcoustics
Enabled = true
Description = Renderer with room acoustics simulation backend (RAVEN) for a source-receiver-pair with geometry-aware propagation
Reproductions = MyTalkthroughHeadphones
# Setup options: Local, Remote, Hybrid
Setup = Local
ServerIP = PC-SEACEN
HybridLocalTasks = DS
HybridRemoteTasks = ER_IS, DD_RT
RavenDataBasePath = $(raven_data)
# Task processing (Timeout = with desired update rate, for resource efficient processing; EventSync = process on request (for sporadic updates); Continuous = update as often as possible, for standalone server)
TaskProcessing = Timeout
# Desired update rates in Hz, may lead to resource issues
UpdateRateDS = 12.0
UpdateRateER = 6.0
UpdateRateDD = 1.0
MaxReverbFilterLengthSamples = 88200
DirectSoundPowerCorrectionFactor = 0.3</code></pre>
										</p>
										
										
										<h5>Prototype free field renderer (class <code>PrototypeFreeField</code>) example</h5>
										
										<p>
										Similar to the binaural free field renderer, but capable of handling multi-channel receiver directivities. This renderer can, for example, be used for recording the output of microphone array simulations.
<pre><code>[Renderer:MyPrototypeFreeField]
Class = PrototypeFreeField
Enabled = true
Reproductions = MyTalkthroughHeadphones
MotionModelNumHistoryKeys = 10000
MotionModelWindowSize = 0.2
MotionModelWindowDelay = 0.1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceivers = false
MotionModelLogEstimatedOutputReceivers = false
SwitchingAlgorithm = linear</code></pre>
										</p>
										
										<h5>Prototype generic path renderer (class <code>PrototypeGenericPath</code>) example</h5>
										
										<p>Channel count and filter length can be specified arbitrarily but are limited by the available computational power. Filtering is done individually for each source-receiver pair.
<pre><code>[Renderer:MyPrototypeGenericPath]
Class = PrototypeGenericPath
Enabled = true
Reproductions = MyTalkthroughHeadphones
NumChannels = 2
IRFilterLengthSamples = 88200
IRFilterDelaySamples = 0
OutputMonitoring = true</code></pre>
										</p>
										
										<h5>Binaural air traffic noise renderer (class <code>BinauralAirTrafficNoise</code>) example</h5>
										
										<p>Filtering is done individually for each source-receiver pair. The filters involved in the simulation of the propagation paths can also be exchanged by the user for prototyping (this requires modifying the simulation flags in the configuration file).
<pre><code>
[Renderer:MyAirTrafficNoiseRenderer]
Class = BinauralAirTrafficNoise
Enabled = true
Reproductions = MyTalkthroughHeadphones
MotionModelNumHistoryKeys = 1000
MotionModelWindowSize = 2
MotionModelWindowDelay = 1
MotionModelLogInputSources = false
MotionModelLogEstimatedOutputSources = false
MotionModelLogInputReceivers = false
MotionModelLogEstimatedOutputReceivers = false
GroundPlanePosition = 0.0
PropagationDelayExternalSimulation = false
GroundReflectionExternalSimulation = false
DirectivityExternalSimulation = false
AirAbsorptionExternalSimulation = false
SpreadingLossExternalSimulation = false
TemporalVariationsExternalSimulation = false
SwitchingAlgorithm = cubicspline</code></pre>
										</p>
										
										<h5>Dummy renderer (class <code>PrototypeDummy</code>) example</h5>
										
										<p>Useful for a quick configuration of your own prototype renderer.
<pre><code>[Renderer:MyDummyRenderer]
Class = PrototypeDummy
Description = Dummy renderer for testing, benchmarking and building upon
Enabled = true
OutputGroup = MyDesktopHP
Reproductions = MyTalkthroughHeadphones</code></pre>
										</p>
								

				
										<h5>Other rendering module examples</h5>
										<p>
										Every specific rendering module has its own specific set of parameters. The discussion of every functional detail is beyond the scope of this introduction. As all configurations are parsed in the constructor of the respective module, their functionality can sometimes only be fully understood by investigating the source code. To facilitate this, the Redstart GUI application includes dialogs to create and interact with those renderers, additionally offering information when hovering over the GUI elements.
										</p>
										
										
										
										<h4>Reproduction module configuration</h4>
										<p>
										To instantiate a reproduction module, a section with a <code>Reproduction:</code> prefix has to be included. The statement following <code>:</code> will be the unique identifier of this reproduction instance. If you want to change parameters during execution, this identifier is required to call the instance. All reproduction modules require some obligatory definitions, but the specific parameter set of each class needs a detailed description. For typical reproduction modules, some examples are given below.
										</p>
										
										<h5>Required reproduction module parameters</h5>
										<p>
<pre><code>Class = REPRODUCTION_CLASS
Outputs = OUTPUT_GROUP(S)</code></pre>
The reproduction class refers to the type of reproduction as provided in the <a href="overview.html#reproduction">overview</a> section.<br />
The parameter <code>Outputs</code> describes the connections to logical output groups that forward audio based on the configured channels. At least one output group has to be defined, but the reproduction stream can also be connected to multiple outputs of the same or different type (e.g., different pairs of headphones). The only restriction is that the reproduction channel number has to match the channel count of the output group(s).

										</p>
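										<p>
										Analogously, one reproduction stream may drive several matching output groups (a hypothetical fragment using output names from the examples in this chapter):
<pre><code>[Reproduction:MyTalkthroughHeadphones]
Class = Talkthrough
Outputs = MyDesktopHP, MyHD600HP
</code></pre>
										</p>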
										<h5>Optional reproduction module parameters</h5>
										<p>
<pre><code>Description = Some informative description of this reproduction module instance
Enabled = true
InputDetectorEnabled = false
RecordInputEnabled = false
RecordInputFilePath = MyReproInput_filename_may_including_$(ProjectName)_macro.wav
OutputDetectorEnabled = false
RecordOutputEnabled = false
RecordOutputFilePath = MyReproOutput_filename_may_including_$(ProjectName)_macro.wav
</code></pre>
Reproduction modules can be <i>enabled and disabled</i> to speed up setup changes without copying and pasting larger parts of a configuration section, especially as output groups can only be instantiated if the sound card provides enough channels. This makes testing on a desktop and switching to a lab environment easier.
<br />
For reproduction modules, the <i>input and output</i> can be observed. A stream detector on input and output can be activated that will produce level meter values, for example, for a GUI widget. The input of a reproduction module may include several superposed rendering streams (in contrast to the rendering output), for example, for direct sound and reverberant sound. The output of a reproduction can also be recorded and exported to a WAV file. The recording starts at initialization and is exported to the hard drive after finalization, implying that the data is kept in RAM. If many channels are required and/or long recording sessions are planned, it is recommended to route the output through a DAW using, for example, ASIO re-routing software devices like Reaper's ReaRoute ASIO driver. Macros are useful to compose a more versatile output file name.
										</p>
										
										
										<h5>Talkthrough reproduction (class <code>Talkthrough</code>) example</h5>										
										<p>The following example with all available key/value configuration pairs is taken from the default <code>VACore.ini</code> settings, which is generated from the repository's <code>VACore.ini.proto</code> (by CMake). It requires an output called <code>MyDesktopHP</code>.
<pre><code>[Reproduction:MyTalkthroughHeadphones]
Class = Talkthrough
Enabled = true
Description = Generic talkthrough to output group
Outputs = MyDesktopHP
InputDetectorEnabled = false
OutputDetectorEnabled = false
RecordInputEnabled = false
RecordInputFilePath = $(ProjectName)_Reproduction_MyTalkthroughHeadphones_Input.wav
RecordOutputEnabled = false
RecordOutputFilePath = $(ProjectName)_Reproduction_MyTalkthroughHeadphones_Output.wav</code></pre>

										</p>
										
<h5>Low-frequency / subwoofer mixing reproduction (class <code>LowFrequencyMixer</code>) example</h5>										
<pre><code>[Reproduction:MySubwooferMixer]
Class = LowFrequencyMixer 
Enabled = true
Description = Generic low frequency (subwoofer) loudspeaker mixer
Outputs = Cave_SW
MixingChannels = ALL # Can also be a single channel, e.g. zero order of Ambisonics stream</code></pre>

<h5>Equalized headphones reproduction (class <code>Headphones</code>) example</h5>
<p>Two-channel equalization using FIR filtering based on post-processed inverse headphone impulse responses measured through in-ear microphones.
<pre><code>[Reproduction:MyHD600]
Class = Headphones
Description = Equalized Sennheiser HD600 headphones
Enabled = true
# Headphone impulse response inverse file path (can be normalized, but gain must then be applied for calibration)
HpIRInvFile = HD600_all_eq_128_stereo.wav
HpIRInvFilterLength = 22050 # optional, can also be obtained from IR filter length
# Headphone impulse response inverse gain for calibration ( HpIR * HpIRInv == 0dB )
HpIRInvCalibrationGainDecibel = 0.1
Outputs = MyHD600HP</code></pre></p>
										
<h5>Multi-channel cross-talk cancellation reproduction (class <code>NCTC</code>) example</h5>
<p>Requires an output called <code>MyDesktopLS</code>. In case of a dynamic NCTC reproduction, only one receiver can be tracked (indicated by <code>TrackedListenerID</code>, which is oriented and located based on a <i>real-world pose</i>). <code>DelaySamples</code> shifts the final CTC filters to obtain causal filters. The amount of the delay has to be set reasonably with regard to <code>CTCFilterLength</code> (e.g., apply a shift of half the filter length).
<pre><code>[Reproduction:MyNCTC]
Class = NCTC
Enabled = true
Description = Crosstalk cancellation for N loudspeaker
Outputs = MyDesktopLS
TrackedListenerID = 1
# algorithm: reg|...
Algorithm = reg
RegularizationBeta = 0.001
DelaySamples = 2048
CrossTalkCancellationFactor = 1.0
WaveIncidenceAngleCompensationFactor = 1.0
UseTrackedListenerHRIR = false
CTCDefaultHRIR = $(DefaultHRIR)
Optimization = OPTIMIZATION_NONE</code></pre></p>
										
<h5>Higher-order Ambisonics decoding (class <code>HOA</code>) example</h5>
<p>Creates a decoding matrix based on a given output configuration, but can only be used for one output.
<pre><code>[Reproduction:MyAmbisonics]
Class = HOA
Enabled = true
Description = Higher-Order Ambisonics
TruncationOrder = 3
Algorithm = HOA
Outputs = VRLab_Horizontal_LS
ReproductionCenterPos = AUTO # or x,y,z</code></pre></p>


<h5>Ambisonics binaural mixdown (class <code>AmbisonicsBinauralMixdown</code>) example</h5>
<p>Encodes the individual orientations of the loudspeakers in a loudspeaker setup using binaural technology based on the <code>VirtualOutput</code> group. It can also be used for a virtual Ambisonics downmix with an ideal spatial sampling layout.
<pre><code>[Reproduction:AmbisonicsBinauralMixdown]
Class = AmbisonicsBinauralMixdown
Enabled = true
Description = Binaural mixdown of virtual loudspeaker setup using HRIR techniques
TruncationOrder = 3
Outputs = MyDesktopHP
VirtualOutput = MyDesktopLS
TrackedListenerID = 1
HRIRFilterLength = 128</code></pre></p>

										
										<h5>Other reproduction module examples</h5>
										<p>
										Every specific reproduction module has its own specific set of parameters. The discussion of every functional detail is beyond the scope of this introduction. As all configurations are parsed in the constructor of the respective module, their functionality can sometimes only be fully understood by investigating the source code. To facilitate this, the Redstart GUI application includes dialogs to create and interact with those reproduction modules, additionally offering information when hovering over the GUI elements.
										</p>
										
									</section>
									
									<hr />
									
									<section id="control">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
										<h3>Controlling a Virtual Acoustics instance</h3>
										<p>
										Once your VA application is running as configured, you will eventually want to create a virtual scene and modify its entities. Scene control is possible via scripts and tracking devices (e.g., NaturalPoint's OptiTrack). The VA interface provides a list of methods which lets you trigger updates and control settings.<br />
										</p>
										
										<h4>Control VA using Matlab</h4>
										<p>
										The most common way to control VA for prototyping, testing, and listening experiments is by using <b>MathWorks' Matlab</b>. VA provides a Matlab binding and a convenience class called <code>itaVA</code>. Once initialized, the class object can be connected to the VA server application over a TCP/IP network connection (or the local network port), as already described in the <a href="overview.html">overview section on controlling VA</a>.<br />
										You can find the <code>itaVA.m</code> Matlab class along with the required files for communication with VA in the <a href="download.html">VA package under the <code>matlab</code> folder</a>. In case you are building and deploying <code>VAMatlab</code> on your own (for your platform), or if it is missing, look out for the <code>build_itaVA*.m</code> scripts that will generate the convenience class around the <code>VAMatlab</code> executable. Adding this folder to the Matlab path list will enable permanent access from the console, independently of the current working directory.
										<br />
										To get started, inspect the example files and use Matlab's tab completion on an instance of the <code>itaVA</code> class to discover the self-explanatory methods, i.e., when executing
										<pre><code>va = itaVA</code></pre>
										The list of available methods follows a getter and setter nomenclature (<code>va.get_*</code> and <code>va.set_*</code>), followed by the entity (<code>sound_receiver</code>, <code>sound_source</code>, <code>sound_portal</code>) and the actual action. To create entities, directivities and more, use the <code>va.create_*</code> methods.
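										For example, a minimal session could look like the following sketch (it assumes a VA server is already up and running on the same machine):
										<pre><code>va = itaVA;
va.connect( 'localhost' )
S = va.create_sound_source( 'MyFirstSource' )</code></pre>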
										<br />
										<br />
										
										<blockquote>
										Note: All example calls to control VA are shown in <b>Matlab code style</b>. The naming convention in other scripting languages, however, is very similar. C++ and C# methods use capitalized words without underscores.
										</blockquote>
										</p>
										
										<h4>Control VA using Python</h4>
										<p>
										A Python VA module that facilitates network access is available. It can be installed to run from anywhere, or it can be placed in and imported from a local folder. To obtain the module and example scripts, <a href="download.html">download a package that includes the Python binding</a>. It is only available for Python 3.6 and recent compilers.
										</p>
										
										<h4>Control VA using Unity</h4>
										<p>
										A more intuitive and playful way to use VA is with Unity, a 3D and scripting development environment for games and Virtual Reality applications. The <code>VAUnity</code> C# scripts extend Unity <code>GameObject</code>s and communicate their properties to a VA server. For this, a C# VA binding is required, which comes with the <a href="download.html">binary packages in the download section</a>. No knowledge of a scripting or programming language is required with this method, just a copy of Unity. How to use VA and Unity is described <a href="https://git.rwth-aachen.de/ita/VAUnity">in the README of the project repository</a>.
										</p>
										
										<p>&nbsp;</p>
										
										<h4>Global gain and muting</h4>
										<p>
										To control the global input gains (sound card software input channels), use
										<pre><code>va.set_input_gain( 1.0 ) % for values between 0 and 1</code></pre>
										</p>
										<p>
										To mute the input, use
										<pre><code>va.set_input_muted( true ) % or false to unmute</code></pre>
										</p>
										<p>
										The same works for the global output (sound card software output channels)
										<pre><code>va.set_output_gain( 1.0 )<br />va.set_output_muted( true ) % or false to unmute</code></pre>
										</p>
										
										<h4>Global auralization mode</h4>
										<p>
										The auralization mode is combined in the renderers as a logical AND of global auralization mode, sound receiver auralization mode and sound source auralization mode. Therefore, if an acoustic phenomenon is deactivated globally, it will affect all rendered sound paths.
										
										<pre><code>va.set_global_auralization_mode( '-DS' ) % ... to disable direct sound<br />va.set_global_auralization_mode( '+SL' ) % ... to enable spreading loss, e.g. 1/r distance law</code></pre>
										
										Find the appropriate identifier for every auralization mode in the <a href="overview.html">overview table</a>.
										</p>
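										<p>
										The current global auralization mode can also be queried. This is a sketch that assumes the getter follows the usual naming scheme of the Matlab binding:
										<pre><code>current_mode = va.get_global_auralization_mode()</code></pre>
										</p>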
										
										<h4>Log level</h4>
										<p>The VA log level on the server side can be changed using
										<pre><code>va.set_log_level( 3 ) % 0 = quiet; 1 = errors; 2 = warnings (default); 3 = info; 4 = verbose; 5 = trace</code></pre>
										This is helpful for detecting problems when the current log level is not sufficient.
										</p>
										
										<h4>Search paths</h4>
										<p>At runtime, search paths can be added to the VA server using							
										<pre><code>va.add_search_path( 'D:/your/data/' )</code></pre>
										Note that the search path has to be available on the server side if you are not running VA on the same machine. Wherever possible, add search paths and use file names only; never use absolute paths for input files. If your server is not running on the same machine, consider adding search paths via the configuration at startup.
										</p>
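										<p>
										As a sketch (the folder and file names are hypothetical), the recommended pattern looks like this:
										<pre><code>va.add_search_path( 'D:/your/data/' ) % make the folder known to the server
directivity_id = va.create_directivity_from_file( 'my_hrtf.daff' ) % resolved via the search paths</code></pre>
										</p>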
										
										<h4>List modules, renderers and reproductions</h4>
										<p>To retrieve information on the available modules, use				
										<pre><code>modules = va.get_modules()</code></pre>
										It will return any registered VA module, including all renderers and reproductions as well as the core itself.
										</p>
										<p>
										All modules can be called using 			
										<pre><code>out_args = va.call_module( 'module_id', in_args )</code></pre>
										where <code>in_args</code> and <code>out_args</code> are structs with magic parameters that depend on the module you are calling.
										Usually, a key named <code>help</code> or <code>info</code> returns useful information on how to work with the respective module.
										</p>
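										<p>
										For example, a sketch of such a help request (the module identifier is a placeholder) could look like this:
										<pre><code>in_args = struct();
in_args.help = true;
help_info = va.call_module( 'module_id', in_args )</code></pre>
										</p>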
										<p>
										To work with renderers, use
										<pre><code>renderers = va.get_renderers()<br />params = va.get_renderer_parameters( 'renderer_id' )<br />va.set_renderer_parameters( 'renderer_id', params )</code></pre>
										Again, all parameters are structs, and a parameter set with a <code>help</code> or <code>info</code> key may return usage information. Good practice is to use the parameter getter and inspect the key/value pairs. Then, modify as desired and re-set the module with the new parameters.
										</p>
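										<p>
										Following this practice, a renderer update could look like the following sketch (the parameter key is hypothetical and depends on the renderer at hand):
										<pre><code>params = va.get_renderer_parameters( 'renderer_id' )
params.SomeParameterKey = 42; % hypothetical key, inspect the returned struct first
va.set_renderer_parameters( 'renderer_id', params )</code></pre>
										</p>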
										<p>
										For reproductions, use
										<pre><code>reproductions = va.get_reproductions()<br />params = va.get_reproduction_parameters( 'reproduction_id' )<br />va.set_reproduction_parameters( 'reproduction_id', params )</code></pre>
										This works just like the renderer parameter handling.
										</p>
										
									</section>
									
									<hr />
									
									<section id="scene_handling">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										<h3>How to create and update a scene in Virtual Acoustics</h3>
										<p>
										In VA, everything that is not static is considered part of a dynamic <i>scene</i>. All sound sources, sound portals, sound receivers, geometry and directivities are potentially dynamic and are therefore stored and accessed with a history concept. During their lifetime, they can be modified. Renderers pick up modifications and react to the new state, e.g. when a sound source is moved or a sound receiver is rotated.<br />
										Updates are triggered asynchronously by the user or by another application and can also be synchronized, e.g. to assure that all signals are started within one audio frame.										
										</p>
										
										<h4>Sound sources</h4>
										<p>Sound sources can be created and can be freely moved. To create a source, use										
										<pre><code>S = va.create_sound_source()</code></pre>
										</p>
										<p>
										or optionally pass a name										
										<pre><code>S = va.create_sound_source( 'Car' )</code></pre>
										<code>S</code> will hold a numerical identifier that is required to modify the sound source.
										</p>
										
										<blockquote>A sound source (as well as a sound receiver) can only be auralized if it has been placed somewhere. Otherwise it remains in an invalid state.</blockquote>
										
										<p>Set a position as a 3-dimensional vector ...
										<pre><code>va.set_sound_source_position( S, [ x y z ] )</code></pre>
										
										</p>
										
										<p>... or the orientation as a 4-dimensional quaternion
										<pre><code>va.set_sound_source_orientation( S, [ x y z w ] )</code></pre>
										</p>
										
										<p>Also, you can set both values at once using the pose (position and orientation)
										<pre><code>va.set_sound_source_pose( S, [ x y z ], [ x y z w ] )</code></pre>
										</p>
										
										<p>You may also use a special view-and-up vector orientation, where the default view vector points in negative Z direction and the up vector in positive Y direction, according to the OpenGL convention.
										<pre><code>va.set_sound_source_orientation_view_up( S, [ vx vy vz ], [ ux uy uz ] )</code></pre>
										</p>
										
										<p>The corresponding getter functions are
										<pre><code>[ x y z ] = va.get_sound_source_position( S )
[ x y z w ] = va.get_sound_source_orientation( S )
[ p, q ] = va.get_sound_source_pose( S )
[ v, u ] = va.get_sound_source_orientation_view_up( S )
</code></pre>
										</p>
										
										<p>To get or set the name, use
										<pre><code>va.set_sound_source_name( S, 'AnotherCar' )
sound_source_name = va.get_sound_source_name( S )
</code></pre>
										</p>
										
										<p>Special (magic) parameter structs can be set or retrieved. They depend on special features and are used for prototyping, e.g. if sound sources require additional values for new renderers.
										<pre><code>va.set_sound_source_parameters( S, params )
params = va.get_sound_source_parameters( S )
</code></pre>
										</p>
										
										<p>The auralization mode can be modified and returned using
										<pre><code>va.set_sound_source_auralization_mode( S, '+DS' )
am = va.get_sound_source_auralization_mode( S )
</code></pre>
This call would activate the direct sound. Other variants are
										<pre><code>va.set_sound_source_auralization_mode( S, '-DS' )
va.set_sound_source_auralization_mode( S, 'DS, IS, DD' )
va.set_sound_source_auralization_mode( S, 'ALL' )
va.set_sound_source_auralization_mode( S, 'NONE' )
va.set_sound_source_auralization_mode( S, '' )
</code></pre>
										</p>
										
										<p>Sound sources can be assigned a directivity using a numerical identifier with
										<pre><code>va.set_sound_source_directivity( S, D )
D = va.get_sound_source_directivity( S )
</code></pre>
The handling of directivities is described below in the input data section.
										</p>
										
										<p>To mute (true) and unmute (false) a source, type
										<pre><code>va.set_sound_source_muted( S, true )
mute_state = va.get_sound_source_muted( S )
</code></pre>
										</p>
										
										<p>To control the loudness of a sound source, assign the sound power in Watts
										<pre><code>va.set_sound_source_sound_power( S, P )
P = va.get_sound_source_sound_power( S )
</code></pre>
The default value of <b>31.67 mW (104.99 dB SWL re 1e-12 W)</b> corresponds to <b>1 Pascal (94.0 dB SPL re 20e-6 Pascal) at 1.0 m distance</b> for spherical spreading. The final gain of a sound source is linked to the input signal, which is explained below. However, a <b>digital signal with an RMS value of 1.0</b> (e.g., a sine wave with a peak value of &radic;2) will retain 94 dB SPL at 1 m. A directivity may alter this value for a certain direction, but a calibrated directivity will not change the overall excited sound power of the sound source when integrating over a hull.
										</p>
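										<p>
										As a worked example derived from the relation above (for spherical spreading, the SPL at 1 m lies about 11 dB below the sound power level), a source <code>S</code> that should produce 100 dB SPL at 1 m needs a power level of roughly 111 dB SWL:
										<pre><code>P = 10^( 111 / 10 ) * 1e-12 % approx. 0.126 W re 1e-12 W
va.set_sound_source_sound_power( S, P )</code></pre>
										</p>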
										
										<p>A list of all available sound sources is returned by the function
										<pre><code>source_ids = va.get_sound_source_ids()</code></pre>
										</p>
										
										<p>Sound sources can be deleted with
										<pre><code>va.delete_sound_source( S )</code></pre>
										</p>
										
										<p>In contrast to all other sound objects, sound sources can be assigned a <b>signal source</b>. It feeds the sound pressure time series for that source and is referred to as the <i>signal</i> (speech, music, sounds). See below for more information on signal sources. In combination with the sound power and the directivity (if assigned), the signal source determines the time-dependent sound emitted from the source. For a calibrated auralization, the combination of the three components has to match physically.
										<pre><code>va.set_sound_source_signal_source( sound_source_id, signal_source_id )</code></pre>
										</p>
										
										
<h4>Sound receivers</h4>
<p>Except for the sound power method and the signal source adapter, all sound source methods are equally valid for sound receivers (see above). Just substitute <code>source</code> with <code>receiver</code>. A receiver can also be a human listener, in which case the <i>receiver directivity</i> will be an <b>HRTF</b>.
<br />
<br />
The VA interface provides some special features for receivers that are meaningful only in binaural technology. The head-above-torso orientation (HATO) of a human listener can be set and retrieved as a quaternion by the methods
<pre><code>va.set_sound_receiver_head_above_torso_orientation( sound_receiver_id, [ x y z w ] )
q = va.get_sound_receiver_head_above_torso_orientation( sound_receiver_id )</code></pre>

In common datasets like the <b>FABIAN HRTF</b> dataset (which can be obtained from the <a href="http://www.opendaff.org" target="_blank">OpenDAFF project website</a>), only a certain range within the horizontal plane (around the positive Y axis according to Cartesian OpenGL coordinates) is present, which accounts for simplified head rotations above a fixed torso. Many listening experiments are conducted in a fixed seat while the user's head orientation is tracked. Here, a HATO HRTF appears more suitable, at least if an artificial head is used.
<br /><br />
Additionally, in Virtual Reality applications with loudspeaker-based setups, user motion is typically tracked inside a specific area. Some reproduction systems require knowledge of the exact position of the user's head and torso to apply adaptive sweet spot handling (like cross-talk cancellation). The VA interface therefore includes some receiver-oriented methods that extend the virtual pose with a so-called real-world pose. The user's absolute position and orientation (pose) relative to the hardware in the lab should be set using one of
<pre><code>va.set_sound_receiver_real_world_pose( sound_receiver_id, [ x y z ], [ x y z w ] )
va.set_sound_receiver_real_world_position_orientation_vu( sound_receiver_id, [ x y z ], [ vx vy vz ], [ ux uy uz ] )
</code></pre>
Getters are
<pre><code>[ p, q ] = va.get_sound_receiver_real_world_pose( sound_receiver_id )
[ p, v, u ] = va.get_sound_receiver_real_world_position_orientation_vu( sound_receiver_id )
</code></pre>
Also, HATO is supported (in case a future reproduction module makes use of HATO HRTFs)
<pre><code>va.set_sound_receiver_real_world_head_above_torso_orientation( sound_receiver_id, [ x y z w ] )
q = va.get_sound_receiver_real_world_head_above_torso_orientation( sound_receiver_id )
</code></pre>
</p>
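<p>
Putting the substitution rule into practice, a minimal binaural receiver setup could look like the following sketch (the receiver name and position are arbitrary; the <code>DefaultHRIR</code> macro is introduced in the directivities section below):
</p>
<pre><code>R = va.create_sound_receiver( 'MyListener' )
H = va.create_directivity_from_file( '$(DefaultHRIR)' )
va.set_sound_receiver_directivity( R, H )
va.set_sound_receiver_position( R, [ 0 1.7 0 ] )
</code></pre>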
										
										<h4>Sound portals</h4>
										<p>Sound portals have been added to the interface for future use and are currently not supported by the available renderers. Their main purpose will be building acoustics applications, where portals are combined to form flanking transmission paths through walls and ducts.</p>
										
																				
<h4>Signal sources</h4>
<p>
Sound signals or signal sources represent the sound pressure time series that are emitted by a source.<br />
Some are <i>unmanaged</i> and directly available; others have to be created. To get a list with detailed information on currently available signal sources (including those created at runtime), type
</p>
<pre><code>va.get_signal_source_infos()</code></pre>

<p>
In general, a <i>signal source</i> is attached to one or many <i>sound sources</i> like this:</p>
<pre><code>va.set_sound_source_signal_source( sound_source_id, signal_source_id )</code></pre>

										
										<h5>Buffer signal source</h5>
										<p>Audio files that can be attached to sound sources are usually single-channel anechoic WAV files. In VA, an audio clip can be loaded as a <b>buffer signal source</b> with special control mechanisms. It supports macros and uses the search paths to locate a file. Using relative paths is highly recommended. Here are two examples:
										<pre><code>signal_source_id = va.create_signal_source_buffer_from_file( 'filename.wav' )<br />demo_signal_source_id = va.create_signal_source_buffer_from_file( '$(DemoSound)' )</code></pre>
										The <code>DemoSound</code> macro points to the 'welcome to va' anechoic recording file in WAV format, which resides in the common <code>data</code> folder. Make sure that the VA application can find the common <code>data</code> folder, which is also added as an include path in default configurations.
										<br /><br />
										Now, the signal source can be attached to a sound source using
										<pre><code>va.set_sound_source_signal_source( sound_source_id, signal_source_id )</code></pre>
										Any buffer signal source can be started, stopped and paused. Also, it can be set to looping or non-looping mode (default).										
										<pre><code>va.set_signal_source_buffer_playback_action( signal_source_id, 'play' )
va.set_signal_source_buffer_playback_action( signal_source_id, 'pause' )
va.set_signal_source_buffer_playback_action( signal_source_id, 'stop' )
va.set_signal_source_buffer_looping( signal_source_id, true )
</code></pre>

										To receive the current state of the buffer signal source, use
										<pre><code>playback_state = va.get_signal_source_buffer_playback_state( signal_source_id )</code></pre>
										</p>
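										<p>
										Combining the calls above, a complete minimal playback example could look like the following sketch (the source name and position are arbitrary):
										<pre><code>S = va.create_sound_source( 'DemoSource' )
va.set_sound_source_position( S, [ 0 1.7 -2 ] )
X = va.create_signal_source_buffer_from_file( '$(DemoSound)' )
va.set_sound_source_signal_source( S, X )
va.set_signal_source_buffer_looping( X, true )
va.set_signal_source_buffer_playback_action( X, 'play' )</code></pre>
										</p>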
										
<h5>Input device signal sources</h5>
<p>
Input channels from the sound card can be directly used as signal sources (microphones, electrical instruments, etc.). They are <i>unmanaged</i> (they cannot be created or deleted), and all channels are made available individually on startup. They appear in the list of signal sources and can be set by
</p>
<pre><code>va.set_sound_source_signal_source( sound_source_id, 'inputdevice1' )</code></pre>
<p>
for the first channel, and so on.
</p>
										
										<!--
										<h5>Machine signal source</h5>
										<p>
										In VA, a machine signal source is an assembly of three audio clips that are stored in a buffer with usage that is similar to a single looping audio file. The main difference is, that a machine signal source comprises a start, an idle and a stop sequence. A machine will start with the start sound, move over to the idle sound and will cross-fade to the stop sound on request. It can also be tuned by setting a playback speed (or RPM in case of a rotatory engine) if required.
										<br /><br />
										@todo
										</p>
										-->
										
										<h5>Text-to-speech (TTS) signal source</h5>
										<p>
										The TTS signal source allows generating speech from text input. Because it uses the commercial <i>CereVoice</i> third-party library by <i>CereProc</i>, it is not included in the VA package for public download. However, if you have access to the <i>CereVoice</i> library and can build VA with TTS support, this is how it works in <code>Matlab</code>:
<pre><code>tts_signal_source = va.create_signal_source_text_to_speech( 'Heathers beautiful voice' )
tts_in = struct();
tts_in.voice = 'Heather';
tts_in.id = 'id_welcome_to_va';
tts_in.prepare_text = 'welcome to virtual acoustics';
tts_in.direct_playback = true;
va.set_signal_source_parameters( tts_signal_source, tts_in )
</code></pre>
Do not forget that a signal source can only be auralized in combination with a sound source. For more information, see the <a href="https://git.rwth-aachen.de/ita/toolbox/blob/master/applications/VirtualAcoustics/VA/itaVA_example_text_to_speech.m">text-to-speech example</a> in the <a href="http://www.ita-toolbox.org" target="_blank">ITA-Toolbox for Matlab</a>.
										</p>
										
										<h5>Other signal sources</h5>
										<p>VA also provides specialized signal sources which cannot be covered in detail here. Please inspect the source code for usage.</p>
										
										
										<h4>Scenes</h4>
										<p>Scenes are a prototype-like concept that allows renderers to act differently depending on a requested scene identifier. This is useful for implementing different behaviour based on a user-triggered scene that should be loaded, e.g. a room acoustics situation or a city soundscape. Most renderers will ignore these calls, but the room acoustics renderer, for instance, uses this concept as long as direct geometry handling is not fully implemented.</p>
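										<p>
										A sketch of such a call follows. It assumes the Matlab binding exposes the scene creation method of the VA interface under the usual naming scheme; the parameter key and file name are hypothetical:
										<pre><code>scene_params = struct();
scene_params.filepath = 'my_room.ini'; % hypothetical scene definition file
scene_id = va.create_scene( scene_params, 'MyRoom' )</code></pre>
										</p>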
										
								
										<h4>Directivities (including HRTFs)</h4>
										<p>
										Sound source and sound receiver directivities are usually made available as a file resource containing multiple directions on a sphere for far-field usage. VA currently supports the OpenDAFF format with time domain and magnitude spectrum content types. They can be loaded with
										<pre><code>directivity_id = va.create_directivity_from_file( 'my_individual_hrtf.daff' )</code></pre>
										
										VA ships with the <a href="http://www.akustik.rwth-aachen.de/go/id/pein/lidx/1" target="_blank">ITA artificial head HRTF dataset</a> (strictly speaking, the DAFF export is an HRIR in the time domain), which is available under a Creative Commons license for academic use.<br />
										The default configuration files and Redstart sessions include this HRTF dataset as <code>DefaultHRIR</code> macro, and it can be created using
										<pre><code>directivity_id = va.create_directivity_from_file( '$(DefaultHRIR)' )</code></pre>
										Make sure that the VA application can find the common <code>data</code> folder, which is also added as an include path in default configurations.										
										<br /><br />
										Directivities can be assigned to a source or receiver with
										<pre><code>va.set_sound_source_directivity( sound_source_id, directivity_id )<br />va.set_sound_receiver_directivity( sound_receiver_id, directivity_id )</code></pre>
										</p>
										
<h4>Homogeneous medium</h4>
<p>VA provides support for rudimentary homogeneous medium parameters that can be set by the user. The data is accessed by rendering and reproduction modules (mostly to obtain the speed of sound for delay calculations). Values are always in SI units (meters, seconds, etc.). Additionally, a user-defined set of parameters is provided in case a prototyping renderer requires further specialized medium information (this may also be used for non-homogeneous definitions). Here is an overview of the setters and getters:
</p>

<p>Speed of sound in m/s
<pre><code>va.set_homogeneous_medium_sound_speed( 343.0 )
sound_speed = va.get_homogeneous_medium_sound_speed()</code></pre></p>

<p>Temperature in degree Celsius
<pre><code>va.set_homogeneous_medium_temperature( 20.0 )
temperature = va.get_homogeneous_medium_temperature()</code></pre></p>

<p>Static pressure in Pascal, defaults to the norm atmosphere
<pre><code>va.set_homogeneous_medium_static_pressure( 101325.0 )
static_pressure = va.get_homogeneous_medium_static_pressure()</code></pre></p>

<p>Relative humidity in percentage (ranging from 0.0 to 100.0 or above)
<pre><code>va.set_homogeneous_medium_relative_humidity( 75.0 )
humidity = va.get_homogeneous_medium_relative_humidity()</code></pre></p>

<p>Medium shift / 3D wind speed in m/s
<pre><code>va.set_homogeneous_medium_shift_speed( [ x y z ] )
shift_speed = va.get_homogeneous_medium_shift_speed()</code></pre></p>

<p>Prototyping parameters (user-defined struct)
<pre><code>va.set_homogeneous_medium_parameters( medium_params )
medium_params = va.get_homogeneous_medium_parameters()</code></pre></p>

<br />
										
										
										<h4>Geometry</h4>
										<p>Geometry interface calls are for future use and are currently not supported by the available renderer. The concept behind geometry handling is real-time environment manipulation for indoor and outdoor scenarios using VR technology like Unity or plugin adapters from CAD modelling applications like SketchUp.</p>
										
										<h4>Acoustic materials</h4>
										<p>Acoustic material interface calls are for future use and are currently not supported by any available renderer. Materials are closely connected to geometry, as a geometrical surface can be linked to acoustic properties represented by the material.</p>
										
<h4>Solving synchronisation issues</h4>
<p>
Scripting languages like Matlab are problematic when it comes to timing: evaluation durations scatter unpredictably, and timers are not precise enough. This becomes a major issue when, for example, a continuous motion of a sound source should be performed with a clean Doppler shift. A simple loop with a timeout will result in audible motion jitter, as the timing of each loop body execution diverges significantly. Also, if a music band should start playing at the same time and playback is started by subsequent scripting lines, it is very likely that the musicians end up out of sync.
</p>

<h5>High performance timeout</h5>
<p>
To avoid timing problems, the VA Matlab binding provides a high performance timer implemented in C++. It should be used wherever a synchronous update is required, mostly for moving sound sources or sound receivers. Here is an example of a properly synchronized update loop at 60 Hertz that incrementally drives a source from the origin in positive X direction until it is 100 meters away:
</p>
<pre><code>S = va.create_sound_source()

va.set_timer( 1 / 60 )
x = 0
while( x < 100 )
	va.wait_for_timer;
	va.set_sound_source_position( S, [ x 0 0 ] )
	x = x + 0.01
end

va.delete_sound_source( S )
</code></pre>


<h5>Synchronising multiple updates</h5>
<p>
VA can execute updates synchronously at the granularity of the block rate of the audio stream process. Every scene update is held back until the update lock is released. This feature is mainly used for a simultaneous playback start.
<pre><code>va.lock_update
va.set_signal_source_buffer_playback_action( drums, 'play' )
va.set_signal_source_buffer_playback_action( keys, 'play' )
va.set_signal_source_buffer_playback_action( base, 'play' )
va.set_signal_source_buffer_playback_action( sax, 'play' )
va.set_signal_source_buffer_playback_action( vocals, 'play' )
va.unlock_update
</code></pre>

<p>It may also be used for the uniform movement of sound sources that are static relative to each other (like a vehicle with 4 wheels). However, locking updates will inevitably lock out other clients (like trackers), so the lock should be released as soon as possible.
</p>
<pre><code>va.lock_update
va.set_sound_source_position( wheel1, p1 )
va.set_sound_source_position( wheel2, p2 )
va.set_sound_source_position( wheel3, p3 )
va.set_sound_source_position( wheel4, p4 )
va.unlock_update
</code></pre>


									</section>		

<hr />									
									<section id="inputdata">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
									<!--
										<h3>Directivities</h3>
										<p>
										Generally, input data from files or other resources should be kept to a minimum in VR systems, because they inherently result in limited interactivity. However, it is very common that data is loaded from the hard drive. This input data is usually a product of costly simulations (like FEM/BEM), design procedures (like CAD models), post-processed measurement data like HRTFs/directivities as well as directional or anechoic recordings of speech, music and other sounds.
										</p>
										
										<h4>Geometries, acoustic materials and others</h4>
										<p>
										Further input files are currently not supported by the core of VA (they are not handled automatically). However, any required input file can be forwarded to special renderers via the VA interface by using the prototype methods (with the help of <code>VAStruct</code> containers). For example, the room acoustics renderer uses the scene to load a file and the property getter/setter to control details of the room acoustics simulation. The artificial reverb renderer uses the prototype setter to modify room parameters required to evaluate Sabine's formula. The upside of this design is that tryouts can be implemented quickly and tested via any interface to VA without an API change. The downside is that the parameter assembly has to be known by the user (usually the developer). Without documentation, it is necessary to browse the C++ code of the renderer to interpret the required naming convention in order to modify settings using structs.
										</p>
									-->
										
									</section>
									
									<section id="rendering">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
<h3>Audio rendering</h3>
<p>
Audio rendering is, next to reproduction, the heart of VA. Rendering instances combine the scene information into audible sound, each implementation in a unique way and with a dedicated purpose. Audio renderers are informed by the VA core about scene changes (asynchronous updates) triggered by the user, and the task of the rendering instance is to apply the requested changes as fast as possible.
<br /><br />
Rendering modules work pretty much on their own; however, they expose some common and some specialized methods to interact with.

</p>
<p>
To get a list of available modules, use
<pre><code>renderer_ids = va.get_rendering_modules()</code></pre>
</p>
										
<p>Every rendering instance can be muted/unmuted and the output gain can be controlled.
<pre><code>va.set_rendering_module_muted( renderer_id, true )
va.set_rendering_module_gain( renderer_id, 1.0 )
mute_state = va.get_rendering_module_muted( renderer_id )
gain = va.get_rendering_module_gain( renderer_id )</code></pre>
</p>								
	
<p>Renderers may also be masked by auralization modes. To enable or disable certain acoustic phenomena, use for example
<pre><code>va.set_rendering_module_auralization_mode( renderer_id, '-DS' )
va.set_rendering_module_auralization_mode( renderer_id, '+DS' )</code></pre>
</p>

<p>To obtain and set parameters, type
<pre><code>va.set_rendering_module_parameters( renderer_id, in_params )
out_params = va.get_rendering_module_parameters( renderer_id, request_params )</code></pre>
The <code>request_params</code> can usually be empty, but if a key <code>help</code> or <code>info</code> is present, the rendering module will provide usage information.
</p>

<p>A special feature that has been requested for Virtual Reality applications (background music, instructional speech, an operator's voice) is a pair of create methods for sound sources and sound receivers that are only effective for a given <i>rendering instance</i> (explicit renderer). This is required if ambient clips should be played back without spatialization, or if certain circumstances demand that a source is processed by one single renderer only. This way, computational power can also be saved.
<pre><code>sound_source_id = va.create_sound_source_explicit_renderer( renderer_id, 'HitButtonEffect' )
sound_receiver_id = va.create_sound_receiver_explicit_renderer( renderer_id, 'SurveillanceCamMic' )</code></pre>
The sources and receivers created with this method are handled like normal entities, but are only effective for the explicit rendering instance.
</p>

<h4>Binaural free field renderer</h4>
<p>
For synchronization of this renderer with other renderers, a static delay can be set that is added to the simulated propagation delay.
It is set as a special parameter using a struct; in the example below, it is set to 100 ms.
<pre><code>in_struct = struct()
in_struct.AdditionalStaticDelaySeconds = 0.100
va.set_rendering_module_parameters( renderer_id, in_struct )</code></pre>
</p>

<p>
Special features of this renderer include individualized HRIRs. The anthropometric parameters are derived from a specific key/value layout of the receiver parameters combined under the key <code>anthroparams</code>. The units are meters.
<pre><code>in_struct = struct()
in_struct.anthroparams = struct()
in_struct.anthroparams.headwidth = 0.12
in_struct.anthroparams.headheight = 0.10
in_struct.anthroparams.headdepth = 0.15
va.set_sound_receiver_parameters( sound_receiver_id, in_struct )</code></pre>
</p>

<p>
The current anthropometric parameters can be obtained by
<pre><code>params = va.get_sound_receiver_parameters( sound_receiver_id, struct() )
disp( params.anthroparams )</code></pre>
</p>


<h4>Prototype generic path renderer</h4>
<p>
This renderer can update impulse responses through the VA interface and will exchange the incoming data in real time for a requested source-receiver pair. It is a powerful prototyping tool that gives instant audible results for A/B comparisons. At ITA, it is used to create a binaural (two-channel) FIR filtering renderer within Matlab as part of the laboratory course on Acoustic Virtual Reality.<br />
In the examples below, the propagation path from source 1 to receiver 1 is updated. If no verbose output is required, just drop the verbose key.
</p>

<p>
To trigger an update from a file resource, a specialized struct has to be created:
<pre><code>in_struct = struct()
in_struct.receiver = 1
in_struct.source = 1
in_struct.verbose = 1
in_struct.filepath = 'CologneDomeAmbisonicsIRMeasurement.wav'
va.set_rendering_module_parameters( renderer_id, in_struct )</code></pre>
</p>
<p>
If a certain channel should be updated (say channel 3), add
<pre><code>in_struct.channel = 3</code></pre>
</p>
<p>
To trigger an update by sending impulse response samples directly (here two channels, but also more channels possible), compile another specialized struct like this
<pre><code>in_struct = struct()
in_struct.receiver = 1
in_struct.source = 1
in_struct.verbose = 1
in_struct.ch1 = [ 1 0 0 0 ... ]
in_struct.ch2 = [ 0 0 1 0 ... ]
va.set_rendering_module_parameters( renderer_id, in_struct )</code></pre>
</p>
<p>
This example will exchange a non-delayed Dirac impulse on the first channel and a Dirac with 2 samples delay on the second channel. Of course, in a common application an entire measured or simulated impulse response would be used.
</p>
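<p>
In Matlab, a complete update with a measured impulse response could look like the following sketch (the WAV file name is hypothetical; <code>audioread</code> returns one column per channel):
</p>
<pre><code>[ ir, fs ] = audioread( 'measured_brir.wav' ); % hypothetical two-channel IR
in_struct = struct();
in_struct.receiver = 1;
in_struct.source = 1;
in_struct.ch1 = ir( :, 1 )'; % first channel as row vector
in_struct.ch2 = ir( :, 2 )'; % second channel as row vector
va.set_rendering_module_parameters( renderer_id, in_struct )</code></pre>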


									</section>
									
<hr />
									
									<section id="reproduction">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
										<h3>Audio reproduction</h3>
										<p>
Audio reproduction modules receive spatialized audio streams from audio renderers. Most of them work independently of user input, but some require knowledge of the user's pose in the reproduction area.
<br /><br />
Reproduction modules work pretty much on their own; however, they expose some common and some specialized methods to interact with.

</p>
<p>
To get a list of available modules, use
<pre><code>reproduction_ids = va.get_reproduction_modules()</code></pre>
</p>
							
<p>Every reproduction instance can be muted/unmuted and the output gain can be controlled.
<pre><code>va.set_reproduction_module_muted( reproduction_id, true )
va.set_reproduction_module_gain( reproduction_id, 1.0 )
mute_state = va.get_reproduction_module_muted( reproduction_id )
gain = va.get_reproduction_module_gain( reproduction_id )</code></pre>
</p>

<p>To obtain and set parameters, type
<pre><code>va.set_reproduction_module_parameters( reproduction_id, in_params )
out_params = va.get_reproduction_module_parameters( reproduction_id, request_params )</code></pre>
The <code>request_params</code> can usually be empty, but if a key <code>help</code> or <code>info</code> is present, the reproduction module will provide usage information.
</p>

<h4>Multi-channel cross-talk cancellation</h4>
<p>
The N-CTC reproduction requires exact knowledge of the user's ear positions. Therefore, it can only be used for a single sound receiver, and the module evaluates the real-world pose that can be set by the corresponding interface call (or by tracking, as described below).<br />
Some additional parameters can be modified during real-time processing for immediate evaluation. The additional delay shifts the resulting CTC filters to create causality; the unit is seconds. The CTC factor and the WICK (wave incidence angle compensation) factor control the smoothing of the initial HRTF in order to gain better transmission quality and a wider sweet spot while trading off the signal-to-noise ratio of the binaural performance. Brought down to zero, the N-CTC module acts like a multi-channel transaural stereo reproduction with simple panning and constant group delay.
<pre><code>in_params = struct()
in_params.AdditionalDelayTime = 0.100
in_params.CrossTalkCancellationFactor = 1.0
in_params.WaveIncidenceAngleCompensation = 1.0
va.set_reproduction_module_parameters( reproduction_id, in_params )</code></pre>
</p>

<h4>Headphones reproduction</h4>
<p>
During runtime, the inverse FIR filter for the headphone equalization can be exchanged. This is helpful to investigate the equalization performance by direct comparison. To maintain 0 dB playback in case the inverse FIR filter changes the signal's energy, the calibration gain can optionally be passed, either as a factor or as a decibel value.
</p>
<pre><code>in_params = struct()
in_params.HpIRInvFile = 'HD650_individualized_eq.wav'
in_params.HPIRInvCalibrationGain = 1.0
in_params.HPIRInvCalibrationGainDecibel = 0.0
va.set_reproduction_module_parameters( reproduction_id, in_params )</code></pre>

									</section>
									
									<section id="tracking">		

<h3>Tracking</h3>
<p>
VA does not support tracking internally, but tracking devices can be used to update VA entities. For external tracking, the <code>VAMatlab</code> project supports <b>NaturalPoint's OptiTrack</b> devices by connecting to a server instance. It can automatically forward rigid body poses to one sound receiver (head and torso separately) and one sound source. Another possibility is to use an HMD such as the <b>Oculus Rift or HTC Vive</b> and update VA through <b>Unity</b>.
</p>

<h4>OptiTrack via VAMatlab</h4>
<p>
To connect an OptiTrack rigid body with a VA sound entity (here a receiver with id 1), use
</p>
<pre><code>va.set_tracked_sound_receiver( 1 )</code></pre>
<p>
To also include the real-world pose (as required by some reproduction modules like cross-talk cancellation), also run
</p>
<pre><code>va.set_tracked_real_world_sound_receiver( 1 )</code></pre>

<p>
If the rigid body index should be changed (e.g. to index 3 for head and 4 for torso), type
</p>
<pre><code>va.set_tracked_sound_receiver_head_rigid_body_index( 3 )
va.set_tracked_sound_receiver_torso_rigid_body_index( 4 )</code></pre>

<p>
The head rigid body (rb) can also be locally transformed using a translation and (quaternion) rotation method, e.g. if the rigid body barycenter is not between the ears or is rotated against the default orientation:
</p>
<pre><code>va.set_tracked_sound_receiver_head_rb_trans( [ x y z ] )
va.set_tracked_sound_receiver_head_rb_rotation( [ x y z w ] )</code></pre>

<p>
For the real-world sound receiver, similar methods exist:
</p>
<pre><code>va.set_tracked_real_world_sound_receiver_head_rigid_body_index( 3 )
va.set_tracked_real_world_sound_receiver_torso_rigid_body_index( 4 )
va.set_tracked_real_world_sound_receiver_head_rb_trans( [ x y z ] )
va.set_tracked_real_world_sound_receiver_head_rb_rotation( [ x y z w ] )</code></pre>

<p>
The sound source methods are almost identical, except that <code>receiver</code> has to be substituted (here shown for a sound source with id 1 at rigid body index 5)
</p>
<pre><code>va.set_tracked_sound_source( 1 )
va.set_tracked_sound_source_rigid_body_index( 5 )
va.set_tracked_sound_source_rigid_body_translation( [ x y z ] )
va.set_tracked_sound_source_rigid_body_rotation( [ x y z w ] )</code></pre>

<p>To finally connect to the tracker that is running on the same machine and pushes to <code>localhost</code> network loopback device, use</p>
<pre><code>va.connect_tracker</code></pre>
<p>In case the tracker is running on another machine, OptiTrack requires setting both the remote IP (in this example 192.168.1.2) AND the client machine IP (in this example 192.168.1.143) like this</p>
<pre><code>va.connect_tracker( '192.168.1.2', '192.168.1.143' )</code></pre>


<h4>HMD via VAUnity</h4>
<p>
To connect an HMD, set up a Unity scene and connect the tracked GameObject (usually the MainCamera) with a VAUSoundReceiver instance. For further details, please read the README files of VAUnity.
</p>
									</section>
									
									<hr />
									
									<section id="simulation_recording">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
<h3>Simulation and recording</h3>
<p>
As already pointed out, VA can be used for simulations and recordings. The only requirement is to activate the recording by configuration before runtime, as described in the rendering and reproduction module setup sections. Outputs from the rendering modules can be used to store spatial audio samples (like binaural clips or Ambisonics B-format / HOA tracks). Outputs from reproductions can be used for offline playback with a given loudspeaker setup for (audio-visual) demonstrations or for non-interactive listening experiments.<br />
A current issue of VA's simulation and recording capability is that it can only be driven by a physical sound card, thus only allowing the capture of real-time rendering/reproduction. Therefore, capabilities are limited to the available resources, and simulations for many sources/receivers might have to be done sequentially (with potential syncing issues). We are planning to provide a virtual sound card that can slow down the rendering to a user-triggered speed for offline rendering; this way, even very complex scenes can be handled in the future, just not in real time.
</p>

									</section>

									<hr />
									
									<section id="examples">
									
										<p><!-- dummy spacer to unhide title on anchor access --><br /></p>
										
<h3>Examples</h3>
<p>
Here are some common use cases with a full description of how to set up a VA server and create a corresponding scene.
</p>

<h4>Binaural sound source circulating around a listener</h4>
<p>
Involved application: <b>Redstart</b><br />
Recommended playback device: <b>Headphones</b><br />
</p>

<blockquote>Keyboard shortcuts are indicated in parentheses</blockquote>

<ul>
<li>Open up <i>Redstart</i> and create a binaural session (N, B). Leave everything to default.</li>
<li>Start the session (F5)</li>
<li>Now, open <i>Run > Circulating source (R, C)</i>, leave everything to default and hit the <i>Start</i> button</li>
<li>You should be listening to the welcome track circulating around your head <i>(through the default ITA artificial head HRTF)</i>.</li>
</ul>

<p>Now, try to change parameters and listen to the changes in the auralization. To test your own files, create a new binaural session and override the default macros.</p>
										
									</section>

							</div>
						</div>
					</div>
				</div>

			<!-- Footer -->
				<footer id="footer" style="background-color:black">
					<ul class="icons">
						<li><a href="http://www.akustik.rwth-aachen.de" class="icon alt fa-globe"><span class="label">ITA website</span></a></li>
						<li><a href="http://blog.rwth-aachen.de/akustik/category/va" class="icon alt fa-comments-o"><span class="label">Akustik-Blog</span></a></li>
						<li><a href="http://git.rwth-aachen.de/ita" class="icon alt fa-github"><span class="label">ITA GitLab</span></a></li>
					</ul>
					
					<span class="image"><img src="images/rwth_ita_akustik_en_institute_weiss_rgb_blackbg_small.jpg" alt="Institute of Technical Acoustics (ITA), RWTH Aachen University" /></span>
					
					<ul class="copyright">&copy; 2017-2018 Institute of Technical Acoustics (ITA), RWTH Aachen University</ul>
				</footer>

		</div>

		<!-- Scripts -->
			<script src="assets/js/jquery.min.js"></script>
			<script src="assets/js/jquery.scrolly.min.js"></script>
			<script src="assets/js/jquery.dropotron.min.js"></script>
			<script src="assets/js/jquery.scrollex.min.js"></script>
			<script src="assets/js/skel.min.js"></script>
			<script src="assets/js/util.js"></script>
			<!--[if lte IE 8]><script src="assets/js/ie/respond.min.js"></script><![endif]-->
			<script src="assets/js/main.js"></script>

	</body>
</html>