Release EDDI - Windows app for immersion and more

Hi, firstly, great bit of software, sir!
So I have been using the ((EDDI docking granted)) event to send a pad variable to VA, which then displays an image of the correct location for stations.
This works perfectly apart from when landing at outposts or surface bases, as the pad ID will still display the corresponding station image. This is no big deal, as I just swap in and out of profiles.
I did, though, notice that your aural pad prompt can differentiate between stations, outposts, etc.
So I guess my question is: am I missing something, or is ((EDDI docking granted)) generic to all station types?
Thanks again for your work.

[video=youtube;a0c_f9Wp5yc]https://www.youtube.com/watch?v=a0c_f9Wp5yc&t=10s[/video]
 
EDDI's speech responder has access to information beyond what is available in VoiceAttack. In this particular case EDDI looks up the station details for the provided station name (the details ultimately come from EDDB), uses that to obtain the station type, and from there determines what it should say.

I am looking at finding ways to provide more information to VoiceAttack, but historically it's been difficult to do, which is why I moved to a separate speech responder in the first place. Your tool is great; I'm not sure how much work might be involved in making an EDDI responder from it, but I'm sure that people could come up with lots of uses for displaying generic text and/or images on-screen in response to events. If you're interested, and have a working knowledge of C#, feel free to fork the repo and give it a shot. If you use the speech responder as the basis it shouldn't be too hard to adapt the code.
 
Most generally, jgm provides access to all the data in the associated journal log events, which is what triggers the speech responder and VA events... station type is not included in the journal data.

To pull off that bit of magic, jgm has a special built-in speech responder function called 'StationDetails', which takes the station name (which the journal does provide) and queries his EDDI server, which returns the station type (among other data) via an updated copy of the EDDN database. This data is only accessible within the speech responder.
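For example (untested on my end, and the exact names like event.station, event.landingpad and station.model are from memory, so check them against the Variables window in EDDI), a 'Docking granted' script could branch on the returned station type along these lines:

Code:
{set station to StationDetails(event.station)}
{if station.model = "Outpost":
    Outpost pad {event.landingpad} allocated.
|elif station.model = "Surface Station":
    Surface pad {event.landingpad} allocated.
|else:
    Station pad {event.landingpad} allocated.
}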

EDIT: lol, jgm beat me to it!
 
OK guys, thanks for the responses. Alas, C# is a tad above my pay grade, so I think things can just stay as they are.
The main thing is that I was not overlooking anything. It does seem odd that station type is not in the journal though, don't you think?
 
I just purchased the 'Jess' voice from Cereproc, installed it (SAPI 4.1, I think), and set it as the default Windows TTS voice. On some EDDI scripts it uses the new Jess voice, but on any script that uses a 'Pause', and some others (seemingly ones that involve station/system names), it defaults back to the old Microsoft TTS voice I was using. I saw in the 'General Troubleshooting' section over on the EDDI GitHub page that it will do that whenever the new voice can't deal with things phonetically. I had hoped that using a paid voice from Cereproc would prevent it from defaulting back, but it seems like I'm out of luck, unless I've just got something set wrong.

I do have 'Disable phonetic Speech' un-checked, and I've confirmed that the Cereproc Jess voice can't handle a 'Pause': creating a new script with a pause defaults back to the old default voice (with the appropriate timed pause), and deleting the 'Pause' lets the new Cereproc voice read it, albeit without the pause. I've also emailed Cereproc to see if there is something they might be able to suggest.

Has anyone else had issues with a Cereproc voice, or can confirm that 'Pause' or other phonetic names are working? If need be I'll try to return the voice and go with an Ivona voice... and hope that those will work better.
 
You may be able to fix the problem by removing the pause statements from the script (they probably use XML that is incompatible with your Cereproc voice) and replacing them with your own in-text pauses.
 
So delete the bits like {Pause(2000)} and insert something else? I've been trying to look some of this stuff up in Google searches, but I'm not very knowledgeable about it.

*Update* I've found one (slightly kludgy) work-around: each period followed by a space seems to insert a roughly 1/4-second pause. If I string together enough period+spaces, I can tailor whatever length of pause I need, and Cereproc Jess will read it.

*2nd Update* The people at Cereproc got back to me. They say there's a known issue with the tags in the latest version of the SAPI voices; they'll look into it and let me know when they have an update that fixes it.
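For anyone else hitting this, a line using that work-around in place of {Pause(2000)} looks something like the following (each period+space gives me roughly a quarter second; your mileage may vary with other voices):

Code:
Docking request granted. . . . . . . . Proceed to the designated landing pad.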
 
Hi all,

I am utilising both VA & the EDDI speech responder for immersion. Although there is some great advice on integrating the plugin via VA, I can't find much info on scripting the responder (i.e. adding scripts for events which as yet have none). For example, I want to get a "materials collected" script together in the form of:

{OneOf("you have collected", "collected", "picked up")} {material.collected}

I think you can see my problem. Once I get a few scripts under my belt I think I'll be away.

Any help greatly appreciated.

Edit: looking at the journal manual, I think I need to add {event.MaterialCollected}. I'll try it out tomorrow.
 
There is a "Material collected" event in EDDI that fits what I believe that you are trying to do. Paste this in and see if it works for you.
{event.amount}
{if event.amount > 1:
{OneOf("units", "samples")}
|else:
{OneOf("unit", "sample")}
}
of
{P(event.name)}
{Occasionally(3,"have been")}
{OneOf("collected.", "gathered.")}
Unfortunately, the material type (element, data, etc.) is not available. I've gone with "samples" or "units" for describing the diverse materials that can be collected.
 
There is a "Material collected" event in EDDI that fits what I believe that you are trying to do. Paste this in and see if it works for you.
{event.amount}
{if event.amount > 1:
{OneOf("units", "samples")}
|else:
{OneOf("unit", "sample")}
}
of
{P(event.name)}
{Occasionally(3,"have been")}
{OneOf("collected.", "gathered.")}
Unfortunately, the material type (element, data, etc.) is not available. I've gone with "samples" or "units" for describing the diverse materials that can be collected.

Thanks for the reply; however, I found this in the journal manual and thought that EDDI could read the journal entry for the material type:

"MaterialCollected
When Written: whenever materials are collected
Parameters:
• Category: type of material (Raw/Encoded/Manufactured)
• Name: name of material

Examples:
"13:18:11": { "event":"MaterialCollected", "Category":"Raw", "Name":"sulphur" }
"
 
Feature request here:
Would it be possible to add a "test line" to the script editor? Maybe a single-line text entry box above the lower buttons with a "test" button beside it? Testing to see if the selected TTS voice will pronounce something correctly can be somewhat tricky with complicated scripts.
 
There is a MaterialDetails function that takes the name of a material and provides full information. So in your 'Material collected' script you could have something like:

Code:
{set material to MaterialDetails(event.name)}
{material.name} is a {material.rarity.name}
{if material.category = 'Element':
    element with symbol {material.symbol}
|elif material.category = 'Manufactured':
    component
|else:
  piece of data
}.
(Disclaimer: I haven't tested the above so there might be syntax errors, but you should get the idea).
 
Strange. Disabling EDDI's jumping script and calling it via VA worked yesterday; today it doesn't work any more...
It looks like, when you have the response in EDDI disabled, you have to enable it first, restart EDDI, and disable it again to get it working via the VA command. Can anyone confirm this?
jgm, could it be that (on load) disabled scripts can't be called via VA? The 'test' function clearly doesn't work when disabled, it seems...
 
JGM, in a future update would it be possible to expand the documentation on calling functions from EDDI? I don't understand the built-in functions very well, and I feel like I'm missing out on a very powerful tool.
 
What specifically are you after? All of the functions should be listed in the help, along with examples of their use.
 
Okay, so I've been writing a pretty verbose personality for the speech responder, but ran into a snag.
I want to extend the Jump event response with an option to tell the player if any of the items in the cargo bay are illegal in the destination system. I found no event, object, or property in the raw variables that would give me this info.
But I found a property in the EDDB "stations.json" called "prohibited_commodities" that gives a list of the illegal wares at that station. The only problem is that I found no reference to this in the EDDI variables; the "station" object doesn't query for this property, so I can't access it from within EDDI. If it did, it would be possible to get the list of stations in the target system when jumping and then match the ship's cargo against the list of illegal wares for each station in the system.

Would it be possible to add this property to the station object?
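Just to illustrate what I'm after: if such a property were exposed (I'm inventing the name prohibitedcommodities here, and assuming the system object's station list can be iterated the same way other lists are), a jump script could do something along these lines:

Code:
{for station in system.stations:
    {for commodity in station.prohibitedcommodities:
        {commodity} is illegal at {station.name}.
    }
}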
 
I'm getting an "access to companion API has been lost" message. When I open the configuration and log into the companion app panel and then enter the confirmation code from ED, it still doesn't work. I get an error on the screen which I will post. I tried three times with the same result.

Oh well, I guess I won't, because the forum says my little .JPG is an "invalid file". I'll upload it to my cloud drive and link it here:

Edit: Never mind, I just tried again and it worked. Fourth time's the charm, I guess!
 
With the VA ((EDDI ship refuelled)) event, is there a 'refuelled from scooping only' option, or a way for me to make a variable of fuel from scooping only?

I basically just want to know how much fuel I've scooped, so I can make another variable to get the total fuel scooped.

Oh, and is the {DEC:Last jump} variable going to be active? It seems that no matter what, it still comes up as [NOT SET].
 
Take a look at my post in the Script sharing forum. I have a script that will read off your fuel amount scooped or purchased.

https://forums.frontier.co.uk/showt...mands-Thread?p=4868424&viewfull=1#post4868424

I'm looking for the scooped amount variable in VA for a script I have in there.

Basically, on ((EDDI Jumped)), I want to report only how much fuel I've scooped.

Variables set with this event are as follows:

{DEC:EDDI ship refuelled amount} The amount of fuel obtained
{DEC:EDDI ship refuelled price} The price of refuelling (only available if the source is Market)
{TXT:EDDI ship refuelled source} The source of the fuel (Market or Scoop)
{DEC:EDDI ship refuelled total} The new fuel level (only available if the source is Scoop)
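On the speech responder side, if your version of EDDI has the SetState() function and the state object (check the Functions section of the speech responder help; this is an untested sketch and the fuel_scooped_total variable name is just my own), a running total of scooped fuel could be kept in the 'Ship refuelled' script like this:

Code:
{if event.source = "Scoop":
    {SetState("fuel_scooped_total", state.fuel_scooped_total + event.amount)}
    You have scooped {state.fuel_scooped_total} tonnes of fuel in total.
}

If the state value is unset on the very first scoop, you may need to seed it with {SetState("fuel_scooped_total", 0)} in a script that runs earlier (the startup script, for example).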
 