Hey everyone, I've got a reply here for you from Daniel Murray, Audio Group
Programmer.
---------------------------------------------------
Hi there, I'd like to address a few of the points that have been raised in this thread. Firstly, can I say how awesome it is to see backers passionately discussing audio (especially technical implementation details such as sound source localisation).
| Binaural HRTF implemented in the Cobra engine
For Elite, the majority of our audio pipeline is supplied by a middleware package called Wwise (https://www.audiokinetic.com/). Wwise is a fantastic tool and an important part of how we work on Elite, and for things like sound localisation we depend on the functionality it offers. Fortunately, Wwise does expose most of the capabilities offered by modern audio APIs, including binaural head-related transfer function (HRTF) spatialisation.
On Windows, Wwise's lower-level audio engine is partially implemented using Microsoft's DirectX component "DirectSound". DirectSound, like OpenAL (http://en.wikipedia.org/wiki/OpenAL, typically used on *nix/GNU systems where PortAudio, http://www.portaudio.com/, isn't sufficient) or CoreAudio (Apple's system-level audio API, which uses OpenAL for 3D positioning), offers some degree of three-dimensional localisation of sound. This is often more precisely called binaural lateralisation, because the effect achieved by convolution with head-related transfer functions tends to be most convincing on the listener's horizontal plane.
You can see an overview of DirectSound's features here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee418868%28v=vs.85%29.aspx and details about DirectSound 3D listeners here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee416766%28v=vs.85%29.aspx.
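To give a feel for what "binaural lateralisation" means in practice, here's a toy sketch of the two cues that make it work: the interaural time difference (the sound reaches the far ear slightly later) and the interaural level difference (the head shadows the far ear). This is not how Wwise, DirectSound, or OpenAL are implemented; real engines convolve each ear's signal with measured head-related impulse responses. The head radius and gain curve below are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m; a rough average head radius (assumption)
SAMPLE_RATE = 44100      # Hz

def lateralise(mono, azimuth_deg):
    """Pan a mono signal using interaural time and level differences.

    A toy stand-in for HRTF convolution. azimuth_deg: 0 = straight
    ahead, +90 = hard right, -90 = hard left.
    """
    az = math.radians(azimuth_deg)
    # Woodworth's approximation of the ITD for a spherical head
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    delay = abs(int(round(itd * SAMPLE_RATE)))  # delay in samples
    # Crude ILD: attenuate the far ear as the source moves sideways
    far_gain = 1.0 - 0.5 * abs(math.sin(az))
    delayed = [0.0] * delay + mono[:len(mono) - delay]
    near = mono
    far = [s * far_gain for s in delayed]
    if azimuth_deg >= 0:        # source on the right: right ear is near
        return far, near        # (left channel, right channel)
    return near, far

# A 100-sample burst placed 60 degrees to the listener's right:
left, right = lateralise([1.0] * 100, 60.0)
```

Play `left` and `right` over headphones and the burst appears off to the right; that's the lateral part. What these two cues can't do well is distinguish front from back or convey elevation, which is exactly why full HRTF convolution (and per-listener tuning) matters.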
| That means you could "calibrate" it to work for your own head shape, cranium volume, and ear shape by telling a program which direction you hear the sound in different iterations.
The idea of players customising the lower-level operation of the audio pipeline to suit their own personalised hearing equipment is very appealing. However, until this functionality is properly supported by the various pieces of tech that power Wwise on every platform it supports, it is unlikely that we will see it made available to mainstream gamers. Let's hope we do, though!
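The calibration procedure the backer describes could, in principle, be reduced to a simple selection problem: render test sounds from known directions with each candidate HRTF set, ask the listener where they heard each one, and keep the set they localise most accurately. A hypothetical sketch (none of these names come from Wwise or our pipeline):

```python
def pick_hrtf(responses):
    """Choose the HRTF set a listener localises most accurately.

    responses maps a candidate HRTF set's name to a list of
    (rendered_azimuth, reported_azimuth) pairs in degrees, gathered
    by playing test sounds and asking the listener where they
    heard them. Returns the name with the lowest mean angular error.
    """
    def mean_error(pairs):
        return sum(abs(target - heard) for target, heard in pairs) / len(pairs)
    return min(responses, key=lambda name: mean_error(responses[name]))

# The listener localises renders from the "personal" set far more
# accurately than renders from the "generic" set:
chosen = pick_hrtf({
    "generic":  [(30, 60), (-30, 0)],
    "personal": [(30, 35), (-30, -25)],
})
```

Real personalisation is harder than this (front/back confusions, elevation, and ear-shape effects don't reduce to a single azimuth error number), but the shape of the problem is the same.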
| I really hope and pray that FD went with OpenAL and NOT Fmod for ED
I hope the above makes this clear, but just in case: OpenAL is an application programming interface for reading and writing sound buffers that can be binaurally localised. FMOD (http://en.wikipedia.org/wiki/FMOD), like Wwise, is an entire audio toolchain for game development that implements all of the functionality (and much, much more) offered by the likes of OpenAL and DirectSound. That's fine, because the scopes of the projects aren't really comparable; and, as I mentioned above, in some cases you may even be listening to OpenAL when not playing on Windows.
All this being said, the quality of this effect is known to be hit-and-miss: its perceptual accuracy varies from person to person (and from audio engine to audio engine). That people are discussing "how great it would be to have" something we technically already have speaks volumes about the state of this technology.
I hope this has been a helpful read; it certainly sparked some interesting conversations and experimentation in the office today!
The Audio Team