Release EDDN - Elite Dangerous Data Network. Trading tools sharing info in a unified way.

Status
Thread Closed: Not open for further replies.
Okay, it's fixed now, thanks for your help seeebek, much appreciated.

Here is the working client.py in full, outputting valid JSON:
Code:
import sys
import zlib

import zmq.green as zmq


def main():
    context = zmq.Context()
    subscriber = context.socket(zmq.SUB)

    # Empty filter = subscribe to every message; pyzmq expects bytes here
    subscriber.setsockopt(zmq.SUBSCRIBE, b"")
    subscriber.connect('tcp://eddn-relay.elite-markets.net:9500')

    while True:
        # Messages arrive zlib-compressed; decompress to get the raw JSON text
        market_json = zlib.decompress(subscriber.recv())
        print(market_json.decode('utf-8'))
        sys.stdout.flush()


if __name__ == '__main__':
    main()
Hope that helps anyone else struggling with the same issue, and when Jamesrecmuscat returns, he can merge it into the official EDDN .py on GitHub.
 
Okay, it's fixed now, thanks for your help seeebek, much appreciated. Here is the working client.py in full, outputting valid JSON: [code as posted above]
This is not a fix, just a workaround for you, since you want to work with the text coming directly from EDDN. Most people using Python will find the json.loads route better, as you get a valid Python dict with easily accessible fields. Not everybody wants to parse the text themselves.
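To illustrate that route, here is a minimal sketch; the message payload is invented, not real EDDN data, but it is compressed the same way the relay compresses messages:

```python
import json
import zlib

# Invented example payload, zlib-compressed like a message off the relay
raw = zlib.compress(b'{"systemName": "Eranin", "itemName": "Gold", "buyPrice": 9210}')

# Decompress to JSON text, then json.loads turns it into a Python dict
market = json.loads(zlib.decompress(raw))

print(market["systemName"])  # fields are directly accessible by key
print(market["buyPrice"])
```

From there you work with `market` like any other dict, instead of parsing the text yourself.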
 
As I said, I've never written a single line of Python or JSON, but when I ran the client.py JSON output through the lint checker it said INVALID. So I don't understand how it's not a fix when client.py is changed so it outputs valid JSON?

I'm not trying to parse the text myself; I'm trying to read the EDDN stream with the Qt QJsonDocument class, and it will not accept that invalid JSON.

But again, I'm not even a newbie, I'm a total outsider to Python and JSON so... heh.
 
That sounds like a *lot* of manual correction!
At the end of the day, I don't know personally which one of
Naddoddur Terminal
Nadoodour Terminal
Naooooour Terminal
Nadooddur Terminal

is correct - am I meant to fly there and check them myself!?

Short answer: yes, you are. Longer answer: why do you need the 100% correctly spelled station name before you visit a system? Trade tools, as in max-profit searching, don't care about spelling. That only becomes important when you actually start using a trade route, or need to visit a station. At which point you DO visit the system and can correct any spelling in whatever tool you are using.
 
As I said, I've never written a single line of Python or JSON, but when I ran the client.py JSON output through the lint checker it said INVALID. So I don't understand how it's not a fix when client.py is changed so it outputs valid JSON?

I'm not trying to parse the text myself; I'm trying to read the EDDN stream with the Qt QJsonDocument class, and it will not accept that invalid JSON.

But again, I'm not even a newbie, I'm a total outsider to Python and JSON so... heh.
Ok, so to explain the situation: json.loads takes valid JSON and creates a Python dict. If that dict is printed or cast to a string, it contains ' (single quotes), which is why the output fails a JSON lint check. So in your case you take the valid JSON directly and hand it to QJsonDocument, without needing to parse it into a dict first.
Your QJsonDocument does for you what json.loads does for me.
If you need any more help on python, let me know. A PM is always welcome.
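A tiny sketch of that distinction (the field values here are invented):

```python
import json

text = '{"stationName": "Azeban City", "itemName": "Tea"}'

data = json.loads(text)   # valid JSON in, Python dict out
print(data)               # the dict's repr uses single quotes: not valid JSON
print(json.dumps(data))   # serialising the dict back out gives valid JSON again
```

So printing the parsed dict is what produces the "invalid JSON" a lint checker complains about; the raw text (or json.dumps of the dict) is the valid form.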
 
Short answer: yes, you are. Longer answer: why do you need the 100% correctly spelled station name before you visit a system? Trade tools, as in max-profit searching, don't care about spelling. That only becomes important when you actually start using a trade route, or need to visit a station. At which point you DO visit the system and can correct any spelling in whatever tool you are using.
Historical analysis does care about this, because otherwise you're dealing with two different stations that differ only in spelling. That said, historical analysis probably also provides the answer to the problem, because with enough valid data, the typos will be self-correcting via consensus.
 
Why would you do that? If I find a good place to buy something super cheap or sell really high, I am not telling anyone.

I don't understand...? I was buying Tea for a very low price, then selling it and making 3,000 a trip: 380 x 8 = 3,040 per trip, and the stations were only 105 Ly apart, so each trip took around 5 minutes. The problem is, now that the servers are not updating in unison, the price of Tea is the same at every station I have been to. If the market changes like this, then what is the point of the EDDN, when by the time you get to the starport its commodities will most likely have changed?
 
I was still getting this "UnicodeEncodeError: 'ascii' codec can't encode character u'\xe8' in position 339: ordinal not in range(128)" error in my code quite often - it would work for a few hours then die, obviously when someone passed non-ASCII data and I wasn't dealing with it.
I tried various things, but none of them worked.
Eventually I found this solution - now I'm not saying it's an ideal solution, but at least it works and has kept my script running for a good 48hrs or so now...

In the header: from django.utils.encoding import smart_str, smart_unicode (using pip to install django)
Then in the main code:
Code:
do_insert = smart_str(insert_stmt % data)
cur.execute(do_insert)

I've tried using u in front of strings, and .encode('utf-8') behind strings, but neither of them seemed to work. This smart_str function does, though, so I'm sticking with it for now!
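For what it's worth, a parameterised query sidesteps the encoding problem entirely, because the driver handles the quoting and encoding of each value instead of the %-interpolation above. A minimal sketch using sqlite3 (the table and station name are invented for illustration; most DB-API drivers use the same idea, though MySQL's placeholder is %s rather than ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE prices (station TEXT, item TEXT, buy INTEGER)")

# Parameterised insert: no string interpolation, so non-ASCII names
# (like the \xe8 that crashed the script) need no manual treatment
row = ("Est\xe8ban Dock", "Tea", 1380)
cur.execute("INSERT INTO prices VALUES (?, ?, ?)", row)

cur.execute("SELECT station FROM prices")
print(cur.fetchone()[0])
```

This also protects against quoting bugs if a station name ever contains an apostrophe.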


Chris.

EDIT: Got almost 800 known systems in my database now and 100,000 rows of data. Will have to look at archiving some off in a few days at this rate!
Another EDIT: I'm new to github also, but just added my first one in there - https://github.com/chrislongbone/EDDN_test , so you can at least see my source code.
 
Historical analysis does care about this, because otherwise you're dealing with two different stations that differ only in spelling. That said, historical analysis probably also provides the answer to the problem, because with enough valid data, the typos will be self-correcting via consensus.

If you are willing to add an extra step, the problems resulting from different spellings can be reduced. If you take the system/station names from TGC and compute the Levenshtein distance (or some other similarity measure) between the names coming from EDDN and the TGC-supplied ones, you will be able to match much more than by just comparing strings.
The same also applies to item names, btw, but unfortunately not to numbers, since there is no known set to match against.
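That matching step can be sketched in a few lines (pure-Python edit distance; the canonical list here is invented, standing in for TGC data):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Match a noisy EDDN name against a canonical (TGC-style) list
canonical = ["Naddoddur Terminal", "Azeban City", "Chango Dock"]
noisy = "Nadooddur Terminal"
best = min(canonical, key=lambda name: levenshtein(noisy, name))
print(best)
```

In practice you would also reject matches whose distance exceeds some threshold, so genuinely new stations are not silently mapped onto existing ones.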

Have a nice day,
L.P.
 
Historical analysis does care about this, because otherwise you're dealing with two different stations that differ only in spelling. That said, historical analysis probably also provides the answer to the problem, because with enough valid data, the typos will be self-correcting via consensus.

Then merge them. The alternative is to get no data at all.

In the ideal situation, FD would start dumping a CSV in the log directory, complete with system name, coordinates, dock name and all the market information. Then this age of ED:OCR would be history.
 
I'm working on an app that uses the EDDN. I thought I'd repost here in case it's of use to anyone here (eg C# code for others to use). I'll read through the thread shortly to make sure I'm reading from the correct version, etc

As time permits, aka slowly, I'm hoping to work on a Windows application (C# WPF using Visual Studio 2013 Community Edition) to help with mapping/navigation. It's early days yet, but I've made the source available on Github for possible collaboration.


So far I have it reading the systems.json file into a database and monitoring the log so if I enter a system that isn't in the database it uses TTS to warn me the system needs adding.


I've also got it reading data from EDDN.

My goals moving forward are:

  • Store the EDDN data in the database.
  • Provide a report of any systems in the EDDN data that aren't in the navigation data.
  • Provide a way to add systems/stations/etc
  • Provide a way to generate a systems.json file
  • Integrate with the EDStarCoordinator website via its API to both find new systems and submit to the site via the app
  • Implement some form of shortest path navigation aid similar to Elite Copilot without the need for Python and a huge executable.
  • Provide a way to search the EDDN data for handy trade routes, etc

NOTE: I plan to use SharpDX to handle any Vector mathematics.


If anyone out there would be interested in collaborating/helping and has experience with C# and WPF please do get in touch.
 
In theory every system with an economy was in the data dump FD gave us. Be very careful about supposed new systems with stations.
 
Let's say we had a database to query against with: 1) names of systems, 2) names of the stations pertaining to each system; for 3rd-party apps to retrieve and/or validate station names.

I'm asking for ideas about how we could go about introducing and validating new station names. (New-system processing would be left to EDSC.) I have an idea along these lines: each station name is written to a temporary table/document in the db, and each subsequent recording of the same station name in the same system by another independent user (identified through the uploaderID) adds to the strength of the data. When the strength reaches 3-5, it's promoted into the main db (somewhat similar to how EDSC handles systems).
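A minimal in-memory sketch of that strengthening scheme (the threshold, names, and uploaderIDs are assumptions for illustration, not anything EDSC actually uses):

```python
from collections import defaultdict

CONFIRM_THRESHOLD = 3  # assumed: promote after 3 independent reports

pending = defaultdict(set)   # (system, station) -> set of uploaderIDs
confirmed = set()            # promoted (system, station) pairs

def report_station(system, station, uploader_id):
    """Record one user's sighting; promote once enough distinct users agree."""
    key = (system, station)
    pending[key].add(uploader_id)  # a set, so repeats by one user don't count
    if len(pending[key]) >= CONFIRM_THRESHOLD:
        confirmed.add(key)

# cmdr_a reports twice, but only three *distinct* uploaders trigger promotion
for uploader in ["cmdr_a", "cmdr_b", "cmdr_a", "cmdr_c"]:
    report_station("Eranin", "Azeban City", uploader)

print(("Eranin", "Azeban City") in confirmed)
```

Keying on the uploaderID set rather than a raw counter is what makes the "independent user" requirement hold.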
 
Let's say we had a database to query against with: 1) names of systems, 2) names of the stations pertaining to each system; for 3rd-party apps to retrieve and/or validate station names.

I'm asking for ideas about how we could go about introducing and validating new station names. (New-system processing would be left to EDSC.) I have an idea along these lines: each station name is written to a temporary table/document in the db, and each subsequent recording of the same station name in the same system by another independent user (identified through the uploaderID) adds to the strength of the data. When the strength reaches 3-5, it's promoted into the main db (somewhat similar to how EDSC handles systems).

It's called Trilateration or for extra confidence Multilateration.
 
Jamesrecmuscat seems to be unavailable. Does anyone else know if it's EDDN itself or only client.py that generates the invalid JSON output? And of course, if it's the .py, how to fix it? :)

The forums decided to stop notifying me of replies to this thread...

FWIW, client.py was never intended to print valid JSON. It was dumping the Python object model containing the parsed JSON data. If you just want to dump JSON to the console, then yes, do as you suggest and just print the contents of the market_data variable. If you want to actually do anything with the data, use the parsed data ;)
 
It's called Trilateration or for extra confidence Multilateration.

Eeeer, no it's not; I wasn't talking about system coordinates. I was asking about the STATION NAMES pertaining to confirmed systems. Anyway, I've already got a system more or less working, requiring the input of the same station name from different users several times before it gets added to the db.

Did some cleanup of erroneous station names that were being added blindly. Hopefully we can integrate it with EliteOCR in a few days so the data will come in a bit cleaner, but more cleaning up is probably necessary (around 2,400 station names compiled so far).

BTW, if anyone is wondering, there are tons of duplicate station names, so they are definitely not unique (to make things worse).
 