Discussion COVAS: NEXT (AI Ship Integration)

We present to you the next iteration of COVAS - our Neurally Enhanced eXploration Terminal.
We've integrated the latest innovations in language processing and speech interfaces into our mod to allow you to have a natural conversation with your ship computer.

COVAS:NEXT listens to your voice through your microphone. When you speak, it processes what you’re saying and then responds.
You’ll hear that response through your speakers, allowing you to have a real-time conversation with your ship's computer.
You can give commands, ask questions, or even just chat, and a response will follow.
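For the curious, the loop is roughly the following (a simplified Python sketch, not the actual COVAS:NEXT code; speech_recognition, pyttsx3 and the OpenAI client are stand-ins for whatever STT/TTS/LLM backends you configure):

```python
# Simplified listen -> think -> speak loop.
# Library choices here are illustrative, not COVAS:NEXT internals.
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()                # assumes OPENAI_API_KEY is set
tts = pyttsx3.init()
recognizer = sr.Recognizer()

history = [{"role": "system", "content": "You are the ship's computer."}]

with sr.Microphone() as mic:
    while True:
        audio = recognizer.listen(mic)                # wait for a spoken phrase
        text = recognizer.recognize_whisper(audio)    # speech-to-text
        history.append({"role": "user", "content": text})

        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=history
        ).choices[0].message.content                  # LLM response
        history.append({"role": "assistant", "content": reply})

        tts.say(reply)                                # text-to-speech
        tts.runAndWait()
```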

Certain in-game events can also trigger a response from COVAS:NEXT. In addition, you can let it control specific ship actions through keyboard bindings when you request them.
Ship actions are only ever triggered by a voice request, never by game events.
The AI will decide which actions to run based on the context of the conversation and your instructions.
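To make that split concrete, here is a minimal sketch of the gating rule (names like Context, request_action and the key bindings are purely illustrative, not the mod's internals):

```python
# Illustrative sketch: game events may only produce spoken commentary,
# while ship actions (key presses) are gated behind a voice request.
from dataclasses import dataclass

KEYBINDS = {"landing_gear": "l", "hardpoints": "u"}   # hypothetical bindings

@dataclass
class Context:
    from_voice: bool          # True only when the turn started with a voice command

def handle_game_event(event: dict, speak) -> None:
    # Game events can trigger a spoken response, but never a ship action.
    speak(f"Commander, event received: {event.get('event')}")

def request_action(name: str, ctx: Context, press_key) -> None:
    # Ship actions run only when explicitly requested by voice.
    if not ctx.from_voice:
        raise PermissionError("Ship actions are voice-only.")
    press_key(KEYBINDS[name])  # send the key bound to the requested action
```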

You can join our Discord to give feedback, ask questions or just talk about the mod: Join Discord Server
The project is completely open-source, and we welcome contributions and support: Project on Github
All current API services are listed in our Discord, along with donation links to their maintainers.

You will need a Large Language Model provider to use this mod. This can be a free local setup or a paid online service. We recommend OpenAI for the fastest and most accurate models.
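Both kinds of providers usually speak the same OpenAI-style API, so switching is mostly a matter of base URL and model name. A sketch (the local endpoint shown is Ollama's default and is just one example, not a COVAS:NEXT requirement):

```python
from openai import OpenAI

# Paid hosted option: OpenAI's API (fast, high accuracy).
openai_client = OpenAI(api_key="sk-...")

# Free local option: any OpenAI-compatible server, e.g. Ollama's default endpoint.
local_client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = local_client.chat.completions.create(
    model="llama3.1",         # whichever model you pulled locally
    messages=[{"role": "user", "content": "Status report, ship."}],
)
print(resp.choices[0].message.content)
```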

________________________________________
Technical Insights:
We use Speech-to-Text, Text-to-Speech and Large Language Models.
We use the reasoning capabilities of the LLM to perform tool calls, which can be:
  • Taking a screenshot and analyzing it in regards to your question
  • Online API to fetch data, using the known variables as API request parameters
  • Performing ship actions
Tool calls are only enabled in response to voice commands.
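For those wondering what this looks like under the hood, here is a rough sketch in OpenAI's tool-calling format; the tool names, schemas and the EDSM example are illustrative, not the exact definitions we ship:

```python
# Illustrative tool definitions in OpenAI's tool-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "analyze_screenshot",
            "description": "Capture the game screen and answer a question about it.",
            "parameters": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
                "required": ["question"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "fetch_system_data",
            "description": "Look up a star system via an online API (e.g. EDSM).",
            "parameters": {
                "type": "object",
                "properties": {"system_name": {"type": "string"}},
                "required": ["system_name"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "ship_action",
            "description": "Press the key bound to a ship action.",
            "parameters": {
                "type": "object",
                "properties": {
                    "action": {"type": "string", "enum": ["landing_gear", "hardpoints"]}
                },
                "required": ["action"],
            },
        },
    },
]

def chat(client, messages, from_voice: bool):
    # Tools are only offered to the model when the turn came from a voice command.
    kwargs = {"model": "gpt-4o-mini", "messages": messages}
    if from_voice:
        kwargs["tools"] = tools
    return client.chat.completions.create(**kwargs)
```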
 
Randomly in debug I see CMDR call (some station) and set course to HP-whatever, and the AI tries to set course and go, but I am in Colonia...
 
Is it Wakata Station in HIP 23716? If so, it's a Whisper hallucination; that can happen when the microphone triggers but doesn't pick up anything. We bundle a speech recognizer into COVAS:NEXT to identify relevant audio chunks (complete sentences) before they are even passed to Whisper, so this shouldn't happen.
Do you have a noisy background, maybe with people talking? I'd recommend enabling the push-to-talk option in the UI and seeing if that fixes it!
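If you want to see what that pre-filtering looks like in principle, here is a tiny sketch using webrtcvad as a stand-in for the bundled recognizer (not our actual implementation):

```python
# Sketch: only hand audio to Whisper once a voice-activity detector has
# confirmed someone is actually speaking (webrtcvad is a stand-in here).
import webrtcvad

vad = webrtcvad.Vad(3)        # 0..3, higher = more aggressive filtering
SAMPLE_RATE = 16000           # webrtcvad expects 16-bit mono PCM at 8/16/32/48 kHz
                              # in 10, 20 or 30 ms frames

def speech_frames(frames: list[bytes]) -> list[bytes]:
    """Keep only the frames that contain speech, dropping silence and noise."""
    return [f for f in frames if vad.is_speech(f, SAMPLE_RATE)]

# Surviving frames are grouped into complete utterances and only then passed
# to Whisper, so an empty microphone trigger never reaches the model.
```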

Edit: Also, feel free to peek into our Discord to get faster support. I rarely look here, but we have a great community there!
 