I might be in the minority, but I actually don't use the HCS voicepacks anymore after trying out ASTRA for a very short while.
While I still appreciate the massive amount of effort they put into it, and I still use parts of the VA profile (mostly just keybinds and macros, with all voice playback removed), the voice playback itself just feels like an elaborate charade.
My problem is that you either have to work with a 'ship computer' / copilot that:
a) is extremely limited in its responses, despite seeming intelligent/sapient in the responses it does have, or
b) has been modified and customized by you from the 'base' HCS version, meaning you already know every possible pre-programmed response, whether it's supposed to be intelligent/witty/snarky or just a standard "Yes captain, doing X" response.
Right now, I'm just using Text to Speech voices (filtered through EDDI to sound a little more robotic) with basic responses, but the TTS setup gives a huge amount of variation that would otherwise need 20+ sound files for every single response.
eg. "Deploying Landing Gear" as a single response
TTS - "Deploying {Landing;} Gear;Gear {down;out;deployed}"
That one line results in the following responses, with one picked at random each time (a rough sketch of the expansion logic follows the list):
Deploying gear
Deploying landing gear
Gear down
Gear out
Gear deployed
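For anyone curious how the braces and semicolons turn into those five lines, here's a rough Python sketch of the expansion idea (just an illustration of the alternation logic, not EDDI's or VoiceAttack's actual parser):

```python
import random
import re

def split_top_level(text, sep=";"):
    """Split on `sep`, but only outside {...} groups."""
    parts, depth, current = [], 0, ""
    for ch in text:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
        if ch == sep and depth == 0:
            parts.append(current)
            current = ""
        else:
            current += ch
    parts.append(current)
    return parts

def _expand_groups(text):
    """Recursively expand the first {a;b;...} group into every option."""
    start = text.find("{")
    if start == -1:
        return [text]
    depth, end = 0, start
    for i in range(start, len(text)):
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
            if depth == 0:
                end = i
                break
    expanded = []
    for option in split_top_level(text[start + 1:end]):
        expanded.extend(_expand_groups(text[:start] + option + text[end + 1:]))
    return expanded

def expand(template):
    """List every phrase the template can produce, with tidy spacing."""
    results = []
    for alternative in split_top_level(template):
        results.extend(_expand_groups(alternative))
    return [re.sub(r"\s+", " ", r).strip() for r in results]

template = "Deploying {Landing;} Gear;Gear {down;out;deployed}"
variants = expand(template)
print(variants)                  # the five phrases above (give or take capitalisation)
print(random.choice(variants))   # what would actually get spoken this time
```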
This, along with the EDDI responses for jumping systems (which would be impossible to get in a non-TTS setting, as it would mean having billions of sound files), entering/exiting supercruise, and docking at stations (after you give it a "docking complete" command, since it won't reliably know otherwise), gives the impression of a shipboard computer that is obviously not intelligent, but also isn't limited to just a few phrases and has several variations in its responses.
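To make the "billions of sound files" point concrete: the game writes every event (including each jump, with the actual system name) to its Player Journal as JSON lines, and EDDI essentially reads that and hands the text to TTS. A toy Python version of the same idea, assuming the default Windows journal path and with a print() standing in for the real speech call:

```python
import glob
import json
import os
import random
import time

# Assumed default journal location on Windows; adjust for your install.
JOURNAL_DIR = os.path.expanduser(
    r"~\Saved Games\Frontier Developments\Elite Dangerous"
)

def speak(text):
    # Stand-in for a real TTS call (EDDI / Windows speech / any TTS library)
    print(text)

def latest_journal():
    files = glob.glob(os.path.join(JOURNAL_DIR, "Journal.*.log"))
    return max(files, key=os.path.getmtime)

def watch_jumps():
    with open(latest_journal(), encoding="utf-8") as f:
        f.seek(0, os.SEEK_END)              # only react to new events
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            event = json.loads(line)
            if event.get("event") == "FSDJump":
                # The system name comes straight from the journal entry,
                # so there is no sound file to pre-record for it.
                speak(random.choice([
                    f"Jump complete. Arrived in {event['StarSystem']}",
                    f"Welcome to {event['StarSystem']}",
                    f"Now entering the {event['StarSystem']} system",
                ]))

watch_jumps()
```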
Edit:
Just some additional clarification: with just VA and a vanilla HCS voicepack, there is no way for it to know what you are doing in-game, whether that's combat, docking, doing a system jump, or even what system you are in.
You could manually add your key bindings in VA so it responds whenever you press a key (i.e. similar to the in-game ship computer voice), but without some advanced scripting it will be completely context-less (pressing the gear up/down button while still docked would result in the voice playing, but nothing happening). With advanced scripting it could work a lot better, but it will still be far from perfect and may occasionally bug out, unlike the completely in-game voice.
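For reference, the "advanced scripting" mostly boils down to checking the game's Status.json before playing anything. A minimal sketch of that check, using the Docked and landing-gear flag bits from Frontier's journal documentation (the path and the response lines are just placeholders):

```python
import json
import os

# Assumed default location on Windows; the game rewrites this file constantly.
STATUS_FILE = os.path.expanduser(
    r"~\Saved Games\Frontier Developments\Elite Dangerous\Status.json"
)

# Flag bits as documented in the Player Journal manual.
DOCKED = 0x01
GEAR_DOWN = 0x04

def gear_response():
    """Pick a sensible line for the 'toggle landing gear' keypress."""
    with open(STATUS_FILE, encoding="utf-8") as f:
        flags = json.load(f).get("Flags", 0)
    if flags & DOCKED:
        return None                      # docked: the toggle does nothing, stay silent
    if flags & GEAR_DOWN:
        return "Retracting landing gear"
    return "Deploying landing gear"

print(gear_response())
```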
Adding a third-party API plugin (EDDI, Ocellus, etc.) can also give you some additional info about what system you are in, some faction info and such, but you are then reliant on Elite's API server, which can be busy and doesn't always return a response in a timely manner - not ideal for anything you might want in real time.
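If you ever roll your own lookup, the usual mitigation is a short timeout plus a fallback line, so a busy server never delays what gets spoken. A sketch of the pattern (the URL and response field are made-up placeholders, since the real companion API needs authentication):

```python
import requests

def system_faction_line(system_name):
    """Try a web lookup, but never block the voice response on it."""
    # Placeholder endpoint for illustration only; a real setup would use an
    # authenticated companion/third-party API instead.
    url = f"https://example.invalid/systems/{system_name}"
    try:
        data = requests.get(url, timeout=2).json()
        return f"{system_name} is controlled by {data['controllingFaction']}"
    except (requests.RequestException, KeyError, ValueError):
        return f"Entering {system_name}"   # fall back to what we already know

print(system_faction_line("Shinrarta Dezhra"))
```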