<!DOCTYPE HTML>
<!--
	Landed by HTML5 UP
	html5up.net | @ajlkn
	Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html>
	<head>
		<title>Virtual Acoustics</title>
		<meta charset="utf-8" />
		<meta name="viewport" content="width=device-width, initial-scale=1" />
		<!--[if lte IE 8]><script src="assets/js/ie/html5shiv.js"></script><![endif]-->
		<link rel="stylesheet" href="assets/css/main.css" />
		<!--[if lte IE 9]><link rel="stylesheet" href="assets/css/ie9.css" /><![endif]-->
		<!--[if lte IE 8]><link rel="stylesheet" href="assets/css/ie8.css" /><![endif]-->
	</head>
	<body>
		<div id="page-wrapper">

			<!-- Header -->
				<header id="header">
					<h1 id="logo"><a href="index.html">Home</a></h1>
					<nav id="nav">
						<ul>
							<li><a href="overview.html">Overview</a></li>
							<li><a href="download.html">Download</a></li>
							<li><a href="documentation.html">Documentation</a>
								<ul>
									<li><a href="documentation.html#configuration">Configuration</a></li>
									<li><a href="documentation.html#control">Control</a></li>
									<li><a href="documentation.html#scene_handling">Scene handling</a></li>
									<li><a href="documentation.html#rendering">Audio rendering</a></li>
									<li><a href="documentation.html#reproduction">Audio reproduction</a></li>
									<li><a href="documentation.html#tracking">Tracking</a></li>
									<li><a href="documentation.html#simulation_recording">Simulation and recording</a></li>
									<li><a href="documentation.html#examples">Examples</a></li>
								</ul>
							</li>
							<li>
								<a href="support.html">Support</a>
								<ul>
									<li><a href="support.html#faq">FAQ</a></li>
									<li><a href="support.html#issue_tracker">Issue tracker</a></li>
									<li><a href="support.html#community">Community</a></li>
									<li><a href="support.html#nosupport">No support</a></li>
								</ul>
							</li>
							<li>
								<a href="developers.html">Developers</a>
								<ul>
									<li><a href="developers.html#api">C++ API</a></li>
									<li><a href="developers.html#dependencies">Dependencies</a></li>
									<li><a href="developers.html#configuration">Configuration</a></li>
									<li><a href="developers.html#build_guide">Build guide</a></li>
									<li><a href="developers.html#repositories">Repositories</a></li>
								</ul>
							</li>
							<li>
								<a href="research.html">Research</a>
								<ul>
									<li><a href="research.html#system">System papers</a></li>
									<li><a href="research.html#technology">Technology papers</a></li>
									<li><a href="research.html#applied">Applied papers</a></li>
								</ul>
							</li>
							<li><a href="legal.html">Legal notice</a></li>
						</ul>
					</nav>
				</header>


			<!-- Main -->
				<div id="main" class="wrapper style1">
					<div class="container">
						<header class="major">
							<h2>Overview</h2>
							<p>Virtual Acoustics explained in a few words <br /> <span style="font-size: 0.6em">This content is available under <a href="http://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></span> </p>
						</header>
								<!-- Content -->
									<section id="rtauralization">
									
										<!--<a href="#" class="image fit"><img src="images/pic05.jpg" alt="" /></a>-->
										<h3>Virtual Acoustics is a real-time auralization framework</h3>
										
										<h4>Responsive auralization</h4>
										<p>VA creates audible sound from a purely virtual situation. To do so, it uses digital input data that is pre-recorded, measured, modelled or simulated. However, VA creates dynamic auditory worlds that can be explored interactively, because it accounts for modifications of the virtual situation. In the simplest case, this means that sound sources and listeners can move freely, and the sound changes accordingly. This real-time auralization approach can only be achieved if certain parts of the audio processing are updated continuously and fast. We call this audio rendering, and the output is an audio stream that represents the virtual situation. For more complex situations like rooms or outdoor worlds, the sound propagation becomes highly relevant and very complex. VA uses real-time simulation backends or simplified models to create a physics-based auditory impression.<br />
										If update latencies stay below certain perceptual thresholds, this method can readily be used in Virtual Reality applications.</p>

<h4>Low latency, efficient real-time processing and flexible resource management</h4>
<p>Input-output latency is crucial for any interactive application. VA tries to achieve minimal latency wherever possible, because the latencies of subsequent components add up. As long as latency is kept low, a human listener will not notice small delays during scene updates, resulting in a convincing live system where interaction directly leads to the expected effect (without waiting for the system to process).<br />
VA supports real-time capability by establishing flexible data management and processing modules that are lightweight and handle updates efficiently. For example, the FIR filtering modules use a partitioned block convolution, resulting in update latencies (at least for the early part of the filters) of a single audio block - which usually means a couple of milliseconds. Remotely updating long room impulse responses using Matlab can easily reach 1000 Hz update rates, which under normal circumstances is about three times more than a block-based streaming sound card provides - and far more than a dedicated graphics rendering processor achieves, which is often the driving part of scene modifications.<br />
However, this comes at a price: VA does not trade update rates for computational resources. Instead, it takes advantage of the general-purpose processing power available at present as well as more efficient software libraries. Limitations are solely imposed by the provided processing capacity, not by the framework. Therefore, VA will plainly produce audio dropouts or complete silence if the computational power is not sufficient for rendering and reproducing the given scene with the configuration used. Simply put, if you request too much, VA will stop auralizing correctly. Usually, the number of paths between a sound source and a sound receiver that is effectively processed can be reduced to an amount where the system can operate in real-time. For example, a single binaural free field rendering can calculate roughly up to 20 paths in real-time on a modern PC, but for room acoustics with long reverberation times, a maximum of 6 sources and one listener is realistic (plus the necessity to simulate the sound propagation filters remotely). If reproduction of the rendered audio stream also requires intensive processing power, the numbers go further down.
</p>
										
										<h4>Why is VA a framework?</h4>
										<p>You can <a href="download.html">download a ready-to-use VA application</a> and <a href="documentation.html">individually configure it</a> to reach your target. The combinations of available rendering modules are diverse, which makes VA suitable for various purposes. The simpler modules provide free-field spatial processing (e.g. using binaural technology) for precise localization. More sophisticated modules create certain moods by applying directional artificial reverberation. Others aim to be as precise as possible by applying physics-based sound propagation simulation for indoor and outdoor scenarios. And there are also possibilities to simply mix ambient sounds that guide or entertain. <br />
										To deliver your sound to a human listener, you can use different reproduction modules. The selection process depends on the available hardware and the rendering type, and also the computational power you can afford. Find below the tables indicating the <b>rendering and reproduction modules</b> shipped with VA. <br />
										If what you want to do is not reflected by the available modules, you can also extend VA with your own module implementation. You can use generic calls to configure your components without modifying any interface and binding library, which is very helpful for prototyping. <br />
										</p>
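										<p>
										As a sketch, such a generic call could look as follows from the Matlab binding. This is an illustrative example only: the renderer name and the parameter name below are hypothetical, and how they are interpreted depends entirely on your own module implementation, which receives the key-value pairs as a parameter set.
										</p>
										<pre><code>% Hypothetical prototype renderer; the parameter name is made up
% and would be interpreted by your own module implementation.
params = struct();
params.MyCrossfadeLength = 128;
va.set_rendering_module_parameters( 'MyPrototypeRenderer', params );</code></pre>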
										
										<h4>Audio rendering in VA</h4>
										<p>
										The current version of VA provides the rendering modules listed in the table below. In VA, you can instantiate as many rendering modules as you require - including multiple instances of the same class. This makes sense, for example, if you want to use different configurations and evaluate the result by switching between renderings within a fraction of a second. <br />
										Rendering modules are connected to reproduction modules, and one renderer can also feed multiple reproductions.<br />
										However, the number of instances is limited by the available computational power.
										
								<div class="table-wrapper">
									<table class="alt">
										<thead>
											<tr>
												<th width="16%">Class name</th>
												<th width="16%">Output stream</th>
												<th>Description</th>
											</tr>
										</thead>
										<tbody>
											<tr>
												<td>BinauralFreeField</td>
												<td>binaural 2-channel</td>
												<td>A binaural free field rendering that omits any present geometry. Uses FIR filtering for HRTFs / HRIRs, variable delay lines and filterbanks for directivities per source-receiver-pair.</td>
											</tr>
											<tr>
												<td>BinauralArtificialReverb</td>
												<td>binaural 2-channel</td>
												<td>Mixes reverberation at receiver side using reverberation time, room volume and surface area with a binaural approach, applies effect using FIR filtering</td>
											</tr>
											<tr>
												<td>BinauralRoomAcoustics</td>
												<td>binaural 2-channel</td>
												<td>Uses a simulation scheduler backend for binaural room impulse responses and applies effect by efficient convolution of long FIR filters per source-receiver-pair [uses RAVEN, which is not free yet]</td>
											</tr>
											<tr>
												<td>BinauralOutdoorNoise</td>
												<td>binaural 2-channel</td>
												<td>Uses the OutdoorNoise base renderer and processes incident waves using binaural technology for spatialization. [BETA]</td>
											</tr>
											<tr>
												<td>BinauralAirTrafficNoise</td>
												<td>binaural 2-channel</td>
												<td>See binaural free field renderer, but adds a ground reflection and temporal variation of medium dynamics.</td>
											</tr>
											<tr>
												<td>AmbisonicsFreeField</td>
												<td>configurable</td>
												<td>Generates panned signals based on spherical base functions according to higher order Ambisonics (HOA).</td>
											</tr>
											<tr>
												<td>VBAPFreeField</td>
												<td>variable</td>
												<td>Generates panned channel-based sound depending on output loudspeaker setup based on Vector-Base Amplitude Panning. Omits any geometry.</td>
Dipl.-Ing. Jonas Stienen's avatar
Dipl.-Ing. Jonas Stienen committed
152
											</tr>
											<tr>
												<td>PrototypeFreeField</td>
												<td>configurable</td>
												<td>A free field rendering that omits any present geometry. Uses variable delay lines for propagation, filterbanks for source directivities and FIR filtering of given channel number for multi-channel receiver directivities. Mainly for recording simulations of spatial microphone arrays, sound field microphones or Ambisonics microphones. </td>
											</tr>
											<tr>
												<td>PrototypeGenericPath</td>
												<td>configurable</td>
												<td>Convolves a long FIR filter efficiently for a configurable number of channels for each source-receiver-pair. The FIR filter can be updated in real-time using the binding interface.</td>
											</tr>
											<tr>
												<td>AmbientMixer</td>
												<td>variable</td>
												<td>Routes sound directly to all channels of reproductions and applies gains of sources</td>
											</tr>
											<tr>
												<td>PrototypeDummy</td>
												<td>unspecified</td>
												<td>Most simple dummy prototype renderer for developers to build upon.</td>
											</tr>
										</tbody>
										<tfoot>
											<tr>
												<td colspan="3">Table 1: currently available audio rendering module classes in VACore</td>
											</tr>
										</tfoot>
									</table>
								</div>
										</p>
										<h4>Audio reproduction in VA</h4>
										<p>
										The current version of VA provides the reproduction modules listed in the table below. In VA, you can instantiate as many reproduction modules as you require - including multiple instances of the same class. This makes sense, for example, if you want to use different configurations and evaluate the result by switching between reproductions within a fraction of a second.
										A reproduction module can be fed by an arbitrary number of rendering modules, but they have to be compatible concerning the streaming i/o scheme. Also, a reproduction module can forward the final audio stream to any given number of outputs, if the physical channels are matching (e.g. 4 pairs of additional headphones). <br />
										However, the number of instances is limited by the available computational power.
										
								<div class="table-wrapper">
									<table class="alt">
										<thead>
											<tr>
												<th width="16%">Class name</th>
												<th width="16%">Input stream</th>
												<th width="16%">Output stream</th>
												<th>Description</th>
											</tr>
										</thead>
										<tbody>
											<tr>
												<td>Talkthrough</td>
												<td>channel-based stream</td>
												<td>variable</td>
												<td>Forwards the incoming stream directly to the audio hardware. Used a lot for plain headphone playback and channel-based renderings for loudspeaker setups.</td>
											</tr>
											<tr>
												<td>Headphones</td>
												<td>any two-channel</td>
												<td>equalized two-channel</td>
												<td>Forwards the incoming stream after applying FIR deconvolution, for equalization of headphones if a headphone transfer function (HpTF) is available.</td>
											</tr>
											<tr>
												<td>LowFrequencyMixer</td>
												<td>arbitrary</td>
												<td>variable</td>
												<td>Mixes all channels or routes a specified channel to a single subwoofer or a subwoofer array. Handy for simple LFE support.</td>
											</tr>
											<tr>
												<td>NCTC</td>
												<td>binaural two-channel</td>
												<td>variable</td>
												<td>Uses static or dynamic binaural cross-talk cancellation for an arbitrary number of loudspeakers.</td>
											</tr>
											<tr>
												<td>BinauralMixdown</td>
												<td>any channel-based</td>
												<td>binaural two-channel</td>
												<td>Uses dynamic binaural technology with FIR filtering to simulate channel-based sound playback from a virtual loudspeaker setup.</td>
											</tr>
											<tr>
												<td>HOA</td>
												<td>Ambisonics any order</td>
												<td>variable</td>
												<td>Calculates and applies gains for a loudspeaker setup using Higher Order Ambisonics methods (HOA).</td>
											</tr>
											<tr>
												<td>BinauralAmbisonicsMixdown</td>
												<td>Ambisonics any order</td>
												<td>binaural two-channel</td>
												<td>Calculates and applies gains for a loudspeaker setup using Higher Order Ambisonics methods, then spatialises the directions of the loudspeakers using HRTFs.</td>
											</tr>
										</tbody>
										<tfoot>
											<tr>
												<td colspan="4">Table 2: currently available audio reproduction module classes in VACore</td>
											</tr>
										</tfoot>
									</table>
								</div>
										</p>
										<h4>Configuring VA</h4>
										<p>
										If you want to use VA, you most likely want to change the configuration to match your hardware and activate the rendering and reproduction modules you are interested in.
										</p>
										<h5>Configuring VA in a VAServer application</h5>
										<p>
										 <b>VAServer</b> can only start VA with a configuration file, usually called <code>VACore.ini</code>. You can configure VA for your purpose by modifying the <code>*.ini</code> files in the <code>conf</code> folder and using the provided batch start scripts, which will start the VA server with these configuration files. The <code>VACore.ini</code> controls the core parameters; the <code>VASetup.*.ini</code> files describe hardware devices and channel layouts. They are included by a line in the <code>[Files]</code> section of the configuration file and usually represent a static setup of a laboratory or a special setup of an experiment. Use <code>enabled = true</code> or <code>enabled = false</code> to activate or deactivate instantiation of sections, i.e. rendering or reproduction modules and output groups.
										</p>
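										<p>
										A minimal sketch of such a section pair is shown below. The concrete identifiers (<code>MyBinauralFreeField</code>, <code>MyTalkthroughHeadphones</code>, <code>MyHeadphones</code>) are placeholders you would choose yourself; consult the <code>VACore.ini</code> shipped in the <code>conf</code> folder for the authoritative set of keys.
										</p>
										<pre><code>[Renderer:MyBinauralFreeField]
enabled = true
Class = BinauralFreeField
Reproductions = MyTalkthroughHeadphones

[Reproduction:MyTalkthroughHeadphones]
enabled = true
Class = Talkthrough
Outputs = MyHeadphones</code></pre>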
										<h5>Configuring VA in a Redstart application</h5>
										<p>
										 <b>Redstart</b> offers basic GUI dialogs to create and control common configurations in so-called sessions, but can also create sessions based on arbitrarily configured <code>INI</code> files for special purposes. The audio settings and network server settings have extra inputs to provide rapid switching between sessions and audio hardware.
										</p>
										<h5>Using search paths</h5>
										<p>
										Loading files from the hard drive seems a triviality, but in practice a lot of time is wasted on paths that cannot be found during runtime - especially if error messages do not indicate this problem. <br />
										In VA, we struggle a lot with this, and it is a serious problem. Often, configurations and input data for scenes are created locally and are later transferred to a computer in the laboratory.
This computer is often not the computer that is also controlling the scene, because a remote network connection is used - which in consequence requires files to be mirrored on the hard drive of that server PC. If no precaution is taken, this usually leads to a nerve-racking trial-and-error process until all files are found - and mostly results in using absolute paths as the quick-and-dirty solution, because we are all very lazy and too busy to do it right.
										<br />
										<b>DO IT RIGHT</b> in this context means: <b>NEVER use absolute paths</b> in the first place. VA provides search path functionality. This means it will find any relative file path with the smallest amount of help: you have to provide one or more base paths where to look for your input files.
										<br />
										<br />
										<blockquote>
										<b>Search path best practice:</b> <br /><br />
										Put all your input data in one base folder, let's say <code>C:/Users/student54/Documents/BachelorThesis/3AFCTest/InputData</code>. In your <code>VACore.ini</code>, add a search path to this folder: <pre><code>[Paths]
studentlab_pc3_my_data = C:/Users/student54/Documents/BachelorThesis/3AFCTest/InputData</code></pre>

										Let us assume you have some subfolders <code>trial1, trial2, ...</code> with WAV files and an HRIR dataset <code>Kemar_individualized.v17.ir.daff</code> in the root folder. You will load them using this pseudo code <br />
										
										<pre><code>HRIR_1 = va.CreateDirectivityFromFile( 'Kemar_individualized.v17.ir.daff' )
Sample_1_1 = va.CreateSignalSourceBufferFromFile( 'trial1/sample1.wav' )
Sample_1_2 = va.CreateSignalSourceBufferFromFile( 'trial1/sample2.wav' )
Sample_2_1 = va.CreateSignalSourceBufferFromFile( 'trial2/sample1.wav' )
...</code></pre>
										When you now move to another computer in the laboratory (for conducting the listening experiment there), copy the entire <code>InputData</code> folder to the computer, where the <u>VA server</u> will be running. For example to <code>D:/experiments/BA/student54/3AFCTest/InputData</code>.  Now, all you have to do is add another search path to your <code>VACore.ini</code> configuration file, e.g. <br />
										<pre><code>[Paths]
studentlab_pc3_my_data = C:/Users/student54/Documents/BachelorThesis/3AFCTest/InputData
hearingboth_pc_my_data = D:/experiments/BA/student54/3AFCTest/InputData</code></pre>

										... and you have no more trouble with paths. If applicable, you can also add search paths via the VA interface during runtime using the <code>AddSearchPath</code> function.
										</blockquote>
										</p>
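										<p>
										From the Matlab binding, the runtime variant of this call could look like the sketch below, assuming the binding exposes the search path call under its usual lower-case naming scheme (the path reuses the laboratory example from above):
										</p>
										<pre><code>% Add a search path during runtime instead of editing VACore.ini
va.add_search_path( 'D:/experiments/BA/student54/3AFCTest/InputData' );</code></pre>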
										
										<h4>Controlling VA</h4>
										<p>
										The first question is: what kind of software do you usually use? There are bindings that make the VA interface available in <b>Matlab</b>, <b>Python</b> and <b>Lua</b>, with rudimentary functionality in <b>C#</b>. While many in the acoustics research area prefer Matlab, it is Python - especially in combination with Jupyter notebooks - that the open-source community will find convenient to use with VA. C# is your choice if you are planning to use <a href="documentation.html#unity">VA for Unity environments</a>, which is probably the easiest entry point for those who are not familiar with either Matlab or Python scripting.
										<br /><br />
										Let's create a simple example scene for a binaural rendering. It requires a running VA server application on the same PC (if you have <a href="download.html">extracted the Windows binaries</a>, double-click on the <code>run_VAServer.bat</code> file in the root folder of VA).<br />
										</p>
										
										<h5>Matlab</h5>
										<p>
										<pre><code>va = VA;

va.connect;
va.reset;

X = va.create_signal_source_buffer_from_file( '$(DemoSound)' );
va.set_signal_source_buffer_playback_action( X, 'play' );
va.set_signal_source_buffer_looping( X, true );

S = va.create_sound_source( 'VA example sound source' );
va.set_sound_source_pose( S, [ -2 1.7 -2 ], [ 0 0 0 1 ] );

va.set_sound_source_signal_source( S, X );

H = va.create_directivity_from_file( '$(DefaultHRIR)' );

L = va.create_sound_receiver( 'VA example listener' );
va.set_sound_receiver_pose( L, [ 0 1.7 0 ], [ 0 0 0 1 ] );

va.set_sound_receiver_directivity( L, H );

va.disconnect;</code></pre>
										</p>
										
										<h5>Python</h5>
										<p>
										<pre><code>import va

va.connect()
va.reset()

signal_source_id = va.create_signal_source_buffer_from_file( '$(DemoSound)' )
va.set_signal_source_buffer_playback_action( signal_source_id, 'play' )
va.set_signal_source_buffer_looping( signal_source_id, True )

sound_source_id = va.create_sound_source( 'VAPy_Example_Source' )
va.set_sound_source_pose( sound_source_id, ( -2, 1.7, -2 ), ( 0, 0, 0, 1 ) )

va.set_sound_source_signal_source( sound_source_id, signal_source_id )

hrir = va.create_directivity_from_file( '$(DefaultHRIR)' )

sound_receiver_id = va.create_sound_receiver( 'VAPy_Example_Sound_Receiver' )
va.set_sound_receiver_pose( sound_receiver_id, ( 0, 1.7, 0 ), ( 0, 0, 0, 1 ) )

va.set_sound_receiver_directivity( sound_receiver_id, hrir )

va.disconnect()</code></pre>
										</p>
										
										<h5>C#</h5>
										<pre><code>using VA;

namespace VA
{
	class VAExample
	{
		static void Main(string[] args)
		{
			VANet VAConnection = new VANet();
			VAConnection.Connect();
			VAConnection.Reset();

			string SignalSourceID = VAConnection.CreateSignalSourceBufferFromFile("$(DemoSound)");
			VAConnection.SetSignalSourceBufferPlaybackAction(SignalSourceID, "play");
			VAConnection.SetSignalSourceBufferIsLooping(SignalSourceID, true);

			int SoundSourceID = VAConnection.CreateSoundSource("C# example sound source");
			VAConnection.SetSoundSourcePose(SoundSourceID, new VAVec3(-2.0f, 1.7f, -2.0f), new VAQuat(0.0f, 0.0f, 0.0f, 1.0f));

			VAConnection.SetSoundSourceSignalSource(SoundSourceID, SignalSourceID);

			int HRIR = VAConnection.CreateDirectivityFromFile("$(DefaultHRIR)");

			int SoundReceiverID = VAConnection.CreateSoundReceiver("C# example sound receiver");
			VAConnection.SetSoundReceiverPose(SoundReceiverID, new VAVec3(0.0f, 1.7f, 0.0f), new VAQuat(0.0f, 0.0f, 0.0f, 1.0f));

			VAConnection.SetSoundReceiverDirectivity(SoundReceiverID, HRIR);

			// do something that suspends the program ...

			VAConnection.Disconnect();
		}
	}
}</code></pre>

										</p>
										
										
										<h5>C++</h5>
										<pre><code>#include &lt;VA.h&gt;
#include &lt;VANet.h&gt;
#include &lt;string&gt;

int main( int, char** )
{
	IVANetClient* pVANet = IVANetClient::Create();
	pVANet-&gt;Initialize( "localhost" );

	if( !pVANet-&gt;IsConnected() )
		return 255;

	IVAInterface* pVA = pVANet-&gt;GetCoreInstance();

	pVA-&gt;Reset();

	const std::string sSignalSourceID = pVA-&gt;CreateSignalSourceBufferFromFile( "$(DemoSound)" );
	pVA-&gt;SetSignalSourceBufferPlaybackAction( sSignalSourceID, IVAInterface::VA_PLAYBACK_ACTION_PLAY );
	pVA-&gt;SetSignalSourceBufferLooping( sSignalSourceID, true );

	const int iSoundSourceID = pVA-&gt;CreateSoundSource( "C++ example sound source" );
	pVA-&gt;SetSoundSourcePose( iSoundSourceID, VAVec3( -2.0f, 1.7f, -2.0f ), VAQuat( 0.0f, 0.0f, 0.0f, 1.0f ) );

	pVA-&gt;SetSoundSourceSignalSource( iSoundSourceID, sSignalSourceID );

	const int iHRIR = pVA-&gt;CreateDirectivityFromFile( "$(DefaultHRIR)" );

	const int iSoundReceiverID = pVA-&gt;CreateSoundReceiver( "C++ example sound receiver" );
	pVA-&gt;SetSoundReceiverPose( iSoundReceiverID, VAVec3( 0.0f, 1.7f, 0.0f ), VAQuat( 0.0f, 0.0f, 0.0f, 1.0f ) );

	pVA-&gt;SetSoundReceiverDirectivity( iSoundReceiverID, iHRIR );

	// do something that suspends the program ...

	pVANet-&gt;Disconnect();
	delete pVANet;

	return 0;
}
</code></pre>

										</p>
										
										<h4>Sound sources, sound receivers and sound portals</h4>
										<p>
										In VA, you will find three different virtual entities that represent sound objects.<br />
										While the term <i>sound source</i> is self-explanatory, VA uses the term <i>sound receiver</i> instead of <i>listener</i>.
										The reason is that a listener would reduce the receiving entity to living creatures, while in VA a receiver can also be a virtual microphone or have a completely different meaning in another context. <br />
										<i>Sound portals</i> are entities that pick up sound and transport, transform and/or propagate it to other portals or sound receivers. This concept is helpful for sound transmission handling in Geometrical Acoustics, for example when a door acts as a transmitting object between two rooms.<br />
										Whether portals are relevant depends on the rendering module in use; they are mostly employed in combination with geometry, say for room acoustics.
										</p>
										
										<h4>Auralization mode</h4>
										<p>
										Making acoustic effects audible is one of the central aspects of auralization. For research and demonstration purposes, it is helpful to switch certain acoustic phenomena on and off in a fraction of a second. This way, influences can be investigated intuitively.<br />
										VA provides a set of phenomena that can be toggled, called auralization modes. Auralization modes can be controlled globally as well as for each sound source and sound receiver individually. If a renderer considers a given auralization mode, the corresponding processing is enabled or disabled based on the logical AND combination of the auralization modes: only if the modes of the source, the receiver AND the global settings are positive will the phenomenon be made audible.
										<br />
										Most auralization modes are only effective for certain rendering modules and are meaningless for others. For example, a free-field renderer will only expose changes to direct sound, source directivity and the Doppler effect; all other phenomena are dismissed.
										</p>
										
								<div class="table-wrapper">
									<table class="alt">
										<thead>
											<tr>
												<th width="16%">Name</th>
												<th width="8%">Acronym</th>
												<th>Description</th>
											</tr>
										</thead>
										<tbody>
											<tr>
												<td>Direct sound</td>
												<td>DS</td>
												<td>Direct sound path between a sound source and a sound receiver</td>
											</tr>
											<tr>
												<td>Early reflections</td>
												<td>ER</td>
												<td>Specular reflections off walls that correspond to the early part of the arrival time of a complex source-receiver pair.</td>
											</tr>
											<tr>
												<td>Diffuse decay</td>
												<td>DD</td>
												<td>Diffuse decay part of the arrival time of a complex source-receiver pair. Mostly used in the context of room acoustics.</td>
											</tr>
											<tr>
												<td>Source directivity</td>
												<td>SD</td>
												<td>Sound source directivity function, the angle-dependent radiation pattern of an emitter.</td>
											</tr>
											<tr>
												<td>Medium absorption</td>
												<td>MA</td>
												<td>Acoustic energy attenuation due to the absorbing capability of the medium.</td>
											</tr>
											<tr>
												<td>Temporal variation</td>
												<td>TV</td>
												<td>Statistics-driven fluctuation of sound resulting from turbulence and time-variance of the medium (the atmosphere).</td>
											</tr>
											<tr>
												<td>Scattering</td>
												<td>SC</td>
												<td>Diffuse scattering off non-planar surfaces.</td>
											</tr>
											<tr>
												<td>Diffraction</td>
												<td>DF</td>
												<td>Diffraction off and around obstacles.</td>
											</tr>
											<tr>
												<td>Near field</td>
												<td>NF</td>
												<td>Acoustic phenomena caused by near field effects (in contrast to far field assumptions).</td>
											</tr>
											<tr>
												<td>Doppler</td>
												<td>DP</td>
												<td>Doppler frequency shifts based on relative distance changes.</td>
											</tr>
											<tr>
												<td>Spreading loss</td>
												<td>SL</td>
												<td>Distance-dependent spreading loss, e.g. for spherical waves. Also called the 1/r law or (inverse) distance law.</td>
											</tr>
											<tr>
												<td>Transmission</td>
												<td>TR</td>
												<td>Transmission of sound energy through solid structures like walls and flanking paths.</td>
											</tr>
											<tr>
												<td>Absorption</td>
												<td>AB</td>
												<td>Sound absorption by material.</td>
											</tr>
										</tbody>
										<tfoot>
											<tr>
												<td colspan="3">Table 3: currently recognized auralization modes</td>
											</tr>
										</tfoot>
									</table>
								</div>
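										<p>The logical AND combination described above can be sketched as a simple bitmask operation. The following is an illustration only, not the actual VA implementation; the flag names are invented for this example.</p>

```cpp
#include <cstdint>

// Hypothetical bit flags for a few auralization modes (names invented
// for this sketch; VA defines its own constants).
enum AuralizationMode : std::uint32_t
{
    AM_DIRECT_SOUND      = 1u << 0, // DS
    AM_EARLY_REFLECTIONS = 1u << 1, // ER
    AM_DOPPLER           = 1u << 2, // DP
};

// A phenomenon is effective only if the global settings, the sound source
// and the sound receiver all have the corresponding mode enabled.
inline std::uint32_t EffectiveModes( std::uint32_t iGlobal, std::uint32_t iSource, std::uint32_t iReceiver )
{
    return iGlobal & iSource & iReceiver;
}

inline bool IsAudible( std::uint32_t iGlobal, std::uint32_t iSource, std::uint32_t iReceiver, AuralizationMode eMode )
{
    return ( EffectiveModes( iGlobal, iSource, iReceiver ) & eMode ) != 0;
}
```

										<p>With this scheme, disabling a phenomenon at any of the three levels (global, source or receiver) silences it for the affected sound paths.</p>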
										<h4>Signal sources</h4>
										<p>
										VA differentiates between sound sources and signal sources. On the one hand, a sound source is an acoustic entity that emits sound. On the other hand, the <i>signal</i> of a sound represents the acoustic information that is emitted. Hence, a sound source is always connected with a signal source. For example, a piano is a sound source; the music played on its keys is the <i>source signal</i>, which can vary depending on the piece or the artist's interpretation. <br />
										VA provides a set of different signal source types. Most of the time, sample buffers are used, which are populated with pre-recorded audio samples by loading WAV files from the hard drive. Those buffer signal sources can be started, paused and stopped, and they can be set into loop mode.<br />
										Apart from buffers, it is also possible to connect a microphone input channel from the audio device. More specialized signal sources are, for example, speech that is synthesized from text input, or machine signal sources with a start, idle and stop sound. Finally, you can connect your own implementation of a signal source by providing a network client that feeds audio samples, or by registering a signal source using the local interface directly (both are in an experimental stage, though).
										</p>
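										<p>The playback behaviour of a buffer signal source (start, pause, stop, loop) can be sketched as a tiny state machine. This is a self-contained illustration with invented names; the actual VA interface controls playback via actions such as <code>SetSignalSourceBufferPlaybackAction</code>, as shown in the examples above.</p>

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal sketch of a buffer signal source (invented names, not the VA API):
// a sample buffer with play/pause/stop control and an optional loop mode.
class BufferSignalSourceSketch
{
public:
    explicit BufferSignalSourceSketch( std::vector<float> vfSamples )
        : m_vfSamples( std::move( vfSamples ) ) {}

    void Play()  { m_bPlaying = true; }
    void Pause() { m_bPlaying = false; }
    void Stop()  { m_bPlaying = false; m_iCursor = 0; }
    void SetLooping( bool bLoop ) { m_bLooping = bLoop; }

    // Produce the next sample of the stream; paused/stopped sources emit silence.
    float NextSample()
    {
        if( !m_bPlaying || m_iCursor >= m_vfSamples.size() )
            return 0.0f;
        const float fSample = m_vfSamples[ m_iCursor++ ];
        if( m_iCursor == m_vfSamples.size() && m_bLooping )
            m_iCursor = 0; // wrap around in loop mode
        return fSample;
    }

private:
    std::vector<float> m_vfSamples;
    std::size_t m_iCursor = 0;
    bool m_bPlaying = false;
    bool m_bLooping = false;
};
```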
										
										<h4>Directivities (including HRTFs and HRIRs)</h4>
										<p>
										The sound radiation pattern of a <i>sound source</i> is usually described by a directional function that depends on wave length or frequency. This function is generally called a <i>directivity</i> and is commonly used in the context of musical instruments. The underlying concept, however, can be diverse. Solutions range from simulated or measured data at sampled directions on a regular or irregular spherical grid all the way to sets of fundamental functions that are weighted by coefficients, such as spherical harmonics. VA supports individual implementations of those directivities, and it is up to the rendering modules to account for the different types (and for near field effects or distance dependencies).<br />
										To maintain a general approach to this topic, in VA <i>sound receivers</i> can be assigned <i>directivities</i>, too. Due to the reciprocal nature of acoustic propagation and the fact that one can model sound transmission by means of linear shift-invariant systems for the majority of applications, this approach is equally valid for sound receivers. In the context of binaural technology, a sound receiver translates to a <i>listener</i> and the assigned directivity is called a <i>head-related transfer function</i> (HRTF) or <i>head-related impulse response</i> (HRIR), depending on the domain of representation. The HRTF or HRIR is applied to the incoming sound at the receiver in the same way a source directivity would be used.
										</p>
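										<p>As an illustration of the simplest directivity type, data sampled at discrete directions, a nearest-neighbour lookup can be sketched as follows. The type and function names are invented for this sketch; actual VA renderers may interpolate between samples or use spherical harmonics instead.</p>

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A directivity sampled at discrete directions (invented sketch, not the VA API).
struct DirectivitySample
{
    double dAzimuthDeg;   // azimuth angle in degrees
    double dElevationDeg; // elevation angle in degrees
    double dGain;         // linear gain for this direction
};

// Nearest-neighbour lookup: return the gain of the sampled direction whose
// great-circle distance to the query direction is smallest.
double LookupGainNearest( const std::vector<DirectivitySample>& vSamples, double dAzimuthDeg, double dElevationDeg )
{
    const double dDeg2Rad = 3.14159265358979323846 / 180.0;
    double dBestDistance = 1e9;
    double dBestGain = 1.0; // fall back to unity gain for an empty table
    for( const DirectivitySample& oSample : vSamples )
    {
        // central angle between the two directions on the unit sphere
        const double dCosAngle =
            std::sin( oSample.dElevationDeg * dDeg2Rad ) * std::sin( dElevationDeg * dDeg2Rad ) +
            std::cos( oSample.dElevationDeg * dDeg2Rad ) * std::cos( dElevationDeg * dDeg2Rad ) *
            std::cos( ( oSample.dAzimuthDeg - dAzimuthDeg ) * dDeg2Rad );
        const double dDistance = std::acos( std::max( -1.0, std::min( 1.0, dCosAngle ) ) );
        if( dDistance < dBestDistance )
        {
            dBestDistance = dDistance;
            dBestGain = oSample.dGain;
        }
    }
    return dBestGain;
}
```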
										
										<h4>Geometry meshes and acoustic materials</h4>
										<p>
										Geometry-aware audio rendering is the holy grail of physics-based real-time auralization using geometrical acoustics simulation. It requires sophisticated algorithms and powerful backend processing to achieve real-time capability.
										VA supports this by providing a simple geometry mesh class and interfaces to load and transmit geometry data. However, it is up to the implementation of the rendering modules what to do with that data.
										Faces of meshes are assigned acoustic materials such as absorption, scattering and transmission coefficients. These are, for example, used (or transformed and forwarded) by specialized rendering instances, like the binaural room acoustics audio renderer.
										</p>
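										<p>As a rough sketch of how such coefficients can enter a simulation (invented names, deliberately simplified energy bookkeeping): per frequency band, the energy remaining in a specular reflection is the incident energy reduced by the absorbed and scattered fractions.</p>

```cpp
#include <cstddef>
#include <vector>

// Per-band acoustic material (invented sketch): absorption and scattering
// coefficients in [0,1] for a set of frequency bands.
struct AcousticMaterialSketch
{
    std::vector<double> vdAbsorption;
    std::vector<double> vdScattering;
};

// Energy remaining in the specular reflection path per frequency band:
// the absorbed fraction is lost, the scattered fraction leaves the
// specular path (simplified energy bookkeeping).
std::vector<double> SpecularReflectionEnergy( const std::vector<double>& vdIncident, const AcousticMaterialSketch& oMaterial )
{
    std::vector<double> vdReflected( vdIncident.size() );
    for( std::size_t i = 0; i < vdIncident.size(); ++i )
        vdReflected[ i ] = vdIncident[ i ] * ( 1.0 - oMaterial.vdAbsorption[ i ] ) * ( 1.0 - oMaterial.vdScattering[ i ] );
    return vdReflected;
}
```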
										
										<h4>Scenes</h4>
										<p>
										A scene is a rather loosely specified term in VA. Any assembled scene information is passed to the rendering modules, and it is up to the implementation what happens next.
										The scene interface methods are intended for prototyping and future work. They can, for example, be used to define stratified media for air traffic noise rendering ... or to load entire cities for a certain geo location from a geo information server, to be used by a special rendering implementation.
										</p>
										
										<h4>Active sound receiver concept</h4>
										<p>
										In VA, there is not one single sound receiver (or <i>listener</i>, if we speak of a human being). Instead, VA renders sound for all enabled sound receivers, and the actual output stream that is forwarded to the reproduction module(s) can be switched dynamically by configuring the <b>active sound receiver</b>.
										This can be applied to all rendering instances at once, but it can also be controlled for each rendering instance individually, e.g. to use VA for multiple listeners in one scene.
										</p>
										
										<h4>Real-world pose (tracking)</h4>
										<p>
										Sound receivers have a pose, a combination of a position and an orientation in 3D space. But they also have a <i>real-world</i> pose, meaning that a receiver can additionally be positioned in the reference frame of the real physical laboratory environment.
										This is required for the processing of some reproduction modules, for example the binaural cross-talk cancellation reproduction <i>NCTC</i>, where the dynamic listener pose (this time it is a human being) has to be known very precisely.
										</p>
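										<p>A pose can be sketched as a position vector plus a unit orientation quaternion; rotating a direction vector by the quaternion yields, for example, the view direction of a receiver. The types below are invented for this sketch; VA ships its own <code>VAVec3</code> and <code>VAQuat</code> types.</p>

```cpp
// Minimal pose building blocks (invented types; VA provides VAVec3/VAQuat).
struct Vec3Sketch { double x, y, z; };
struct QuatSketch { double x, y, z, w; }; // unit quaternion

// Rotate vector v by unit quaternion q: v' = q * v * q^-1, expanded into the
// usual vector form t = 2 (q_xyz x v), v' = v + w t + q_xyz x t.
Vec3Sketch Rotate( const QuatSketch& q, const Vec3Sketch& v )
{
    const Vec3Sketch u{ q.x, q.y, q.z };
    const Vec3Sketch t{ 2.0 * ( u.y * v.z - u.z * v.y ),
                        2.0 * ( u.z * v.x - u.x * v.z ),
                        2.0 * ( u.x * v.y - u.y * v.x ) };
    return Vec3Sketch{ v.x + q.w * t.x + ( u.y * t.z - u.z * t.y ),
                       v.y + q.w * t.y + ( u.z * t.x - u.x * t.z ),
                       v.z + q.w * t.z + ( u.x * t.y - u.y * t.x ) };
}
```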
										
										<h4>The VA struct class</h4>
										<p>
										VA uses a struct class that acts as an associative container. Keys are strings, and values can be of basic data types such as boolean, integer, floating point and string. More sophisticated VA types like sample buffers are also available, e.g. for audio data or impulse responses. Furthermore, structs can hold other structs, which means that they can be nested to create well-structured formats.
										Structs behave very much like Matlab structs, Python dicts and the JSON format, and these objects can be forwarded over remote interfaces, for example to update an impulse response in an FIR convolution engine.
										It is very convenient to use this concept for prototyping, and it allows changing parameters in almost every corner of the VA core via the parameter setters and getters of modules, which can be accessed using the module interface.
										</p>
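										<p>The nesting idea can be sketched with standard containers. This is an invented illustration, not the actual VA struct implementation: values live in per-type maps, and child structs provide nesting, e.g. for a parameter message.</p>

```cpp
#include <map>
#include <string>

// Sketch of a nested key-value struct (invented type): values are kept in
// per-type maps, and child structs allow nesting.
struct StructSketch
{
    std::map<std::string, double>       mDoubles;
    std::map<std::string, std::string>  mStrings;
    std::map<std::string, StructSketch> mChildren; // nested structs
};

// Build a message like one that could be sent over the module interface to
// change a parameter (all keys invented for this sketch).
StructSketch MakeParameterMessage( const std::string& sReceiver, double dGain )
{
    StructSketch oMsg;
    oMsg.mStrings[ "command" ] = "update_filter";
    StructSketch oParams;
    oParams.mStrings[ "receiver" ] = sReceiver;
    oParams.mDoubles[ "gain" ] = dGain;
    oMsg.mChildren[ "parameters" ] = oParams;
    return oMsg;
}
```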
									</section>

					</div>
				</div>

			<!-- Footer -->
				<footer id="footer" style="background-color:black">
					<ul class="icons">
						<li><a href="http://www.akustik.rwth-aachen.de" class="icon alt fa-globe"><span class="label">ITA website</span></a></li>
						<li><a href="http://blog.rwth-aachen.de/akustik/category/va" class="icon alt fa-comments-o"><span class="label">Akustik-Blog</span></a></li>
						<li><a href="http://git.rwth-aachen.de/ita" class="icon alt fa-github"><span class="label">ITA GitLab</span></a></li>
					</ul>
					
					<span class="image"><img src="images/rwth_ita_akustik_en_institute_weiss_rgb_blackbg_small.jpg" alt="Institute of Technical Acoustics (ITA), RWTH Aachen University" /></span>
					
					<ul class="copyright"><li>&copy; 2017-2019 Institute of Technical Acoustics (ITA), RWTH Aachen University</li></ul>
				</footer>

		</div>

		<!-- Scripts -->
			<script src="assets/js/jquery.min.js"></script>
			<script src="assets/js/jquery.scrolly.min.js"></script>
			<script src="assets/js/jquery.dropotron.min.js"></script>
			<script src="assets/js/jquery.scrollex.min.js"></script>
			<script src="assets/js/skel.min.js"></script>
			<script src="assets/js/util.js"></script>
			<!--[if lte IE 8]><script src="assets/js/ie/respond.min.js"></script><![endif]-->
			<script src="assets/js/main.js"></script>

	</body>
</html>