In-Development TradeDangerous: power-user trade optimizer

In the mfd I was thinking just have the station to dock at and the commodity to buy. But after looking at the mfd I soon realized some station names wouldn't fit in one line, so disregard. What I've been doing is just running in a window with the DOS box to the side and using that like it's my mfd :) As for the Saitek situation, that's too bad. A bit disappointed in their support for us gamers.
 
In the mfd I was thinking just have the station to dock at and the commodity to buy. But after looking at the mfd I soon realized some station names wouldn't fit in one line, so disregard.

Actually, it will scroll marquee style, so that's doable - just give me an idea of what you want presented. I've not used it in a long while myself because I was mostly using it for the jump list, and now I just let the in-game route planner guide me.

-Oliver
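
For the curious, marquee scrolling is just a sliding fixed-width window over the text; a minimal sketch (the 16-character line width is an assumption, not the real X52 display width):

Code:
def marquee(text, width=16, gap="   "):
    """Yield successive fixed-width frames of text, wrapping around."""
    if len(text) <= width:
        yield text.ljust(width)
        return
    looped = text + gap                 # pad so the wrap-around has a gap
    while True:
        for i in range(len(looped)):
            yield (looped + looped)[i:i + width]

# Print the first few frames for a station name too long for one line.
frames = marquee("GEORGE PANTAZIS/Zamka Platform")
for _, frame in zip(range(4), frames):
    print(frame)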
 
Code:
D:\elite\tradedangerous>c:\python34\python.exe trade.py run --from "cd-75 661" --credits 20000000 --capacity 268 --ly-per 16
CD-75 661/Kirk Dock -> CD-75 661/Kirk Dock
CD-75 661/Kirk Dock: 268 x Imperial Slaves,
CHOTEC/Marques Dock: 268 x Slaves,
CD-75 661/Kirk Dock +660 084cr

The problem is that CD-75 661/Kirk doesn't buy slaves and there is no black market. I use the maddavo plugin. Broken market data?
 
Code:
D:\elite\tradedangerous>c:\python34\python.exe trade.py run --from "cd-75 661" --credits 20000000 --capacity 268 --ly-per 16
CD-75 661/Kirk Dock -> CD-75 661/Kirk Dock
CD-75 661/Kirk Dock: 268 x Imperial Slaves,
CHOTEC/Marques Dock: 268 x Slaves,
CD-75 661/Kirk Dock +660 084cr

The problem is that CD-75 661/Kirk doesn't buy slaves and there is no black market. I use the maddavo plugin. Broken market data?

Not "broken" just out of date, probably. If demand reaches zero, it either shows up as having "0L" demand in-game or the item dissapears entirely.

"trade.py update -GF cd75/kirk" and put a "0" in the slaves price, then run "misc/madupload.py" or "misc\madupload.py" (linux vs windows) and it'll go away shortly.
 
Attention Explorers ... those of you who are exploring systems not yet in the database: I have added a small tool for submitting distances to EDSC.

It's nothing special - it just prompts you to enter distances to particular stars or enter your own star names.

But it DOES copy the names into the "clipboard" (paste buffer) so that you can alt-tab to the game, paste the name into your search box and when the map is done zooming, it'll tell you the distance. BOOM.

The script is "submit-distances.py" in the trade directory. It's a little messy, if anyone feels like cleaning it up... :)
 
Long time user, infrequent commenter. Ooh, I so love the way this is going. But, just before I supply some updates to Station.csv, a question: do I have to upload an entire Station.csv to http://www.davek.com.au/td/default.asp, or can I just upload the handful of entries I've paid attention to (and thus not risk resetting anybody else's changes)?

And then... there's something which is supplying duplicate data to maddavo. I was comparing a downloaded Station.csv with my own copy, before uploading my own changes.

What I'm seeing in the downloaded Station.csv includes:
Code:
'37 GEMINORUM','Chasles Port',254,'N','L'
'37 GEMINORUM','Davidson Hub',254,'N','M'
'37 GEMINORUM','Frechet Ring',254,'N','L'
'37 GEMINORUM','Grant Gateway',254,'N','L'
'37 GEMINORUM','Hinz Station',254,'N','L'
'37 GEMINORUM','Stairs Terminal',254,'N','M'
'37 GEMINORUM','Utley City',254,'N','M'
'COQUIM','Hirayama Installation',606,'N','L'
'COQUIM','Phillips Hub',606,'N','M'
'CPD-70 2439','Godwin Dock',2026,'Y','M'
'CPD-70 2439','Szentmartony Dock',2026,'Y','M'
'CROWFOR','Ewald Station',594,'N','L'
'CROWFOR','Szebehely Port',594,'N','L'
'CZERNOVALE','Berliner Enterprise',243,'Y','M'
'CZERNOVALE','Evans Enterprise',243,'Y','M'
and so on (there are many of these). I doubt that these stations are identical distances from their stars (and I know that Ewald has Medium pads, and Szebehely has Large). I haven't looked at the prices for the associated stations, but I'd take those with a pinch of salt too: if station data is being duplicated (where it's otherwise missing), then it seems only too likely that prices could be as well. (Which may not be a bad thing, in the short term: if players are encouraged to visit places they wouldn't otherwise go, then the prices will be corrected all the sooner.)

My inclination is to treat all such duplicated distances as suspect: either the data should be reverted to a reasonable historic state, or it should be switched back to unknown and checked again.

ETA: Oh. Another thought has occurred to me: maybe the EDDN data is fine, but there's a problem in maddavo's receiving code, which may be hanging on to old values when there are blanks in the new record, and then reusing those old values against the new station. Just a passing thought (since I've seen things like that before). At least that would mean a fix in just one place :)
 
Well, I was actually thinking it might be useful for you to act as a hub in both directions - when you process a .prices file, transmit the entries you received via uploads out to EDDN. I think that jibes well with your transmitting old updates periodically.

The problem is that the data, especially using their one-item-per-message format, gets ridiculously large.

https://www.dropbox.com/personal/Public/ed/eddntransform (the scripts are included in the directory)

Code:
osmith@WOTSIT ~/Dropbox/public/ed/eddntransform
$ ls -1sh
711k mad.2d.prices
4.8M mad.all.prices
2.9M eddn.2d.json
 20M eddn.all.json
551k batched.2d.json
3.7M batched.all.json
...

I'm guessing you culled the data recently.

The gist is: ~700kB of .prices data becomes 2.9MB of EDDN data, and 5MB of .prices data becomes 20MB of EDDN data.

Alternatively, using a terse, batched form, 700kB of price data becomes 500kB and the savings scale up to the full dump.
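
It's easy to see where the bloat comes from: the one-item-per-message form repeats the envelope and key names for every item, while a batched form pays for them once per station. A toy comparison (field names are made up, not the real EDDN schema):

Code:
import json

ITEMS = [("Gold", 9401, 9350), ("Silver", 4777, 4700), ("Palladium", 13264, 13200)]

# One message per item: envelope and key names repeat every time.
per_item = [{"system": "CHOTEC", "station": "Marques Dock",
             "item": name, "buy": buy, "sell": sell}
            for name, buy, sell in ITEMS]

# Batched: one envelope, items as terse rows.
batched = {"system": "CHOTEC", "station": "Marques Dock",
           "items": [[name, buy, sell] for name, buy, sell in ITEMS]}

print(sum(len(json.dumps(m)) for m in per_item))   # bytes, per-item form
print(len(json.dumps(batched)))                    # bytes, batched form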

I realize this doesn't matter much to your service, but it's going to matter a lot to the EDDN network as the data grows. If you send 20MB of historical updates to EDDN and EDDN has 500 subscribers, it is going to have to buffer and transmit - assuming no retransmissions - 26MB (megabytes) of data per user, for a grand total of 12GB of data.

If you have a 10Mb/s (note the lower-case b) internet connection and you can get full download speeds, it'll take you 20 seconds to download that. But most users are going to be getting under a Mb/s on the download, due to the way ZeroMQ's fan-out works, which means they can anticipate your update clogging up the EDDN network for them for three and a half minutes... This, in itself, will cause queueing issues that are going to draw it out to 5 minutes.
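
To make that arithmetic easy to check, a back-of-envelope sketch using the figures from this post:

Code:
# Back-of-envelope for the fan-out cost described above.
payload_mb  = 26      # MB buffered and sent per subscriber
subscribers = 500

print("total sent: %.1f GB" % (payload_mb * subscribers / 1024.0))

for mbit_per_s in (10, 1):            # link speeds in Mb/s (lower b!)
    seconds = payload_mb * 8 / mbit_per_s
    print("at %2dMb/s: ~%.0f seconds" % (mbit_per_s, seconds))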

This means a really good chance, at internet scale, of lost data - individual price updates that never arrive.

And this is with a fairly tiny portion of the full ED dataset and a small number of listeners. Heck, the data I just tested this with was small relative to the data from your site yesterday.

Of course, if EDDN mostly becomes a mechanism for developers to exchange prices and most users actually get their data from sites like your own, then that dials back the scale some, but it's still kind of insane, IMSE, going forward. We have 110k prices in the database; in a few months we'll probably have passed a million. The sheer overhead of the current EDDN JSON format is an order of magnitude more than the actual data.

It isn't going to scale.

-- EDIT --

Also, be aware that ZeroMQ's "pub/sub" model of broadcasting isn't "free": the sender actually has to send individual copies of the data to each subscriber, and furthermore ZeroMQ's pub/sub requires receiver-side filtering ... so clients can opt out of seeing your update, but they still have to download it.
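
For context, a minimal pyzmq listener looks like this (the relay address is illustrative); with receiver-side filtering, even a non-empty subscribe prefix only discards messages after they've already crossed the wire:

Code:
import zlib
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://eddn-relay.example.com:9500")  # illustrative

# Empty prefix = receive everything. A prefix "filter" here is applied
# on the receiving side, so the bytes arrive regardless and are simply
# dropped locally - the sender's bandwidth is spent either way.
subscriber.setsockopt(zmq.SUBSCRIBE, b"")

while True:
    raw = subscriber.recv()          # blocks until the next message
    message = zlib.decompress(raw)   # EDDN payloads are zlib-deflated
    print(message[:120])             # peek at the start of the JSON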

Thanks for this response. You said something similar yesterday in the EDDN thread. I've responded there. Otherwise this thread could get derailed.
 
Hey kfsone, came across this a couple of weeks ago and forgot to post it. The UI gave me an "outside of expected tolerances" validation message when I entered consumer tech for this station, and asked me to post it here. My jpg is too large for the forum group though, so I'll put it on the Facebook page instead.

I suspect this may have been a testing system/station that got left in at launch, given the name and some of the commodity wackiness going on here - not sure.
 
Long time user, infrequent commenter. Ooh, I so love the way this is going. But, just before I supply some updates to Station.csv, a question: do I have to upload an entire Station.csv to http://www.davek.com.au/td/default.asp, or can I just upload the handful of entries I've paid attention to (and thus not risk resetting anybody else's changes)?

Curious to know too; I've done both things (uploaded individual entries and uploaded in bulk).

and so on (there are many of these). I doubt that these stations are identical distances from their stars (and I know that Ewald has Medium pads, and Szebehely has Large). I haven't looked at the prices for the associated stations, but I'd take those with a pinch of salt too: if station data is being duplicated (where it's otherwise missing), then it seems only too likely that prices could be as well. (Which may not be a bad thing, in the short term: if players are encouraged to visit places they wouldn't otherwise go, then the prices will be corrected all the sooner.)

My inclination is to treat all such duplicated distances as suspect: either the data should be reverted to a reasonable historic state, or it should be switched back to unknown and checked again.

I've seen systems with two or more stations that were at the same zero-decimal-place distance from their star; but more likely someone was lazy, gave the details for the first station, and then re-used the same add command for the remaining stations because they thought they were similar/close enough?
 
Uploading Station.csv - How merging of data works

So, a couple of questions have come up re Station.csv data merging, and I've also received quite a few messages about it. I've put some info on the site, but in summary this is how it works.

Uploading
You can upload a big Station.csv from your TD data directory, OR a small <any-name>.csv - it doesn't matter. All that is REQUIRED is the first line that has the Station.csv schema. That's how it knows it is a Station.csv-format file.

Merging Data
  • Any new stations are added IF the system exists in our System.csv. In theory we have been given a list of all the systems that have stations, so this weeds out typos.
  • Any existing stations are updated if the ls_from_star, blackmarket or max_pad_size change from default values.
  • If a new ls_from_star is within 10Ls or 1% of the existing value, the change is ignored.
  • Any other conflicting data is NOT merged into the database - it is flagged for checking. I am writing a crowdsourcing 'fix stations' page where cmdrs can help check stations and choose which data is correct. (A sketch of these rules follows below.)
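
Expressed as code, the per-station merge decision might look something like this (a sketch only, assuming dict rows and 0/'?' as the "unknown" defaults - not the actual site code):

Code:
# Sketch of the merge rules above; not the actual site code.
def merge_station(existing, incoming, known_systems):
    """Return one of: 'reject', 'add', 'ignore', 'update', 'flag'."""
    if incoming["system"] not in known_systems:
        return "reject"      # system not in System.csv - probably a typo
    if existing is None:
        return "add"         # new station under a known system
    # A new ls_from_star within 10Ls or 1% of the stored value counts
    # as the same measurement, not a change.
    delta = abs(incoming["ls_from_star"] - existing["ls_from_star"])
    if delta <= max(10, 0.01 * existing["ls_from_star"]):
        incoming["ls_from_star"] = existing["ls_from_star"]
    changed = [k for k in ("ls_from_star", "blackmarket", "max_pad_size")
               if incoming[k] != existing[k]]
    if not changed:
        return "ignore"
    # Changes away from the unknown defaults are accepted; anything
    # contradicting real stored data is flagged for crowd checking.
    if all(existing[k] in (0, "?") for k in changed):
        return "update"
    return "flag"

row = {"system": "CROWFOR", "ls_from_star": 594,
       "blackmarket": "N", "max_pad_size": "L"}
new = {"system": "CROWFOR", "ls_from_star": 598,
       "blackmarket": "N", "max_pad_size": "L"}
print(merge_station(row, new, {"CROWFOR"}))   # 'ignore' - within 10Ls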

Station Distance - ls_from_star values
Some points about station distance:
  • The ls_from_star value changes. The game is real-time and orbiting objects in a system move. Although they move slowly, they DO move. If you have a station orbiting a planet then the ls_from_star will oscillate around an average value (the planet's orbital distance). If you have a station orbiting a moon orbiting a planet, then the ls_from_star will oscillate around the moon's oscillation around an average value (also the planet's orbital distance). Things get even more complicated when binary or n-ary planets or stars are involved.
  • It IS very possible to have stations in a system with the same ls_from_star values. If they are orbiting the same body, then their average ls_from_star will be the same. But also if they are orbiting a moon or planet that is orbiting another body (eg: <star> B, C or D) then their average ls_from_star will also be the same. eg: in SOL, Abraham Lincoln, M. Gorbachev and Li Qing Jao all orbit Earth, so they have the same ls_from_star value. Earth's orbit is 1AU so their ls_from_star value is 499 (or close to it). Galileo orbits the Moon, the Moon orbits Earth, so Galileo's ls_from_star value is also 499. In our Station.csv the values are 505-506, fair enough.
  • The System View can be used to get the distance in many cases. eg: If there is a station orbiting a planet orbiting an A star, then select the planet and look at the orbit semi-major axis value. The elliptical orbit is generally close to a circle, so this can be used. The value is in AU (Astronomical Units). 1 AU = 499Ls. So multiply this value by 499 and you have the ls_from_star (see the sketch at the end of this post). If there are binary (or n-ary) stars in the system and the station is orbiting B or above, then you can't use the System View. You have to know the distance of that star from the A star, which isn't shown in the System View - you have to visit the system. And once you visit the system, you may as well just get the distance of the station from the nav panel.
  • The System View cannot be used if the station is part of the orbital hierarchy of a binary or n-ary planet, or part of the orbital hierarchy of a binary or n-ary star B or higher. This is because the reported semi-major axis value is for the orbit of the partner body - it does not show the orbit of the pair's collective COG around the parent body (frustrating).
  • When you jump into a system, you jump in at the A star. This is the one with the Nav Beacon. But you jump in near it, not in the centre of it. So if you jump in and 'stop' (still going 30m/s) and look at the nav panel then potentially you could be on the near side or the far side of the star from the station you're looking at. This introduces another oscillation into the ls_from_star value if you measure it with the nav panel in this way.
  • Some people land at the station, then look at the nav panel and choose the distance to the Nav Beacon. It's probably better to choose the A star.

So basically if we all measure a station's ls_from_star value by various means and we're within about 10Ls or 1% of each other then I think that's good enough. It's certainly good enough for the purposes for which we use it in TD.
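
A worked version of the semi-major-axis trick and the "10Ls or 1%" tolerance above (a minimal sketch; the station values are illustrative):

Code:
LS_PER_AU = 499                # 1 AU is roughly 499 light-seconds

def ls_from_semi_major_axis(au):
    """System View semi-major axis (AU) -> approximate ls_from_star."""
    return au * LS_PER_AU

def close_enough(a, b):
    """The '10Ls or 1%' tolerance described above."""
    return abs(a - b) <= max(10, 0.01 * max(a, b))

earth_ls = ls_from_semi_major_axis(1.0)   # Sol example: Earth's orbit
print(earth_ls)                           # 499.0
print(close_enough(earth_ls, 505))        # True - within tolerance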
 
I've started adding some code to the TD import to try and fend off the increasing amount of OCR Derp in system names ... It just flat out won't accept anything called "OOCK", for instance (try searching for it in-game, and you'll see it doesn't match anything).

If you grab the latest code, you may see messages like this:

Code:
Connecting to server: http://www.davek.com.au/td/prices-2d.asp
import.prices: 4,093,767/4,093,767 bytes |  49.02KB/s | 100.00%
NOTE: 3 stations updated:
GALI/Chadwick Dock, GEORGE PANTAZIS/Zamka Platform, CD-72 190/Phillips Terminal
NOTE: Ignoring '171 G. AQUARII/ELCANO OOCK' because it looks like OCR derp.

Don't panic. You can either submit a delete for that station to maddavo or just wait until it gets cleaned out. If you find the messages annoying, you can add a "-q" to your import.
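
The idea behind the check is simple: certain letter sequences just don't occur in legitimate names. A sketch of the kind of test involved (the substring list is illustrative; TD's actual list differs):

Code:
# Illustrative OCR-derp test; the real TD list of bad sequences differs.
DERP_SUBSTRINGS = ("OOCK", "0OCK", "D0CK")   # e.g. 'DOCK' mis-read by OCR

def looks_like_ocr_derp(name):
    upper = name.upper()
    return any(bad in upper for bad in DERP_SUBSTRINGS)

print(looks_like_ocr_derp("171 G. AQUARII/ELCANO OOCK"))   # True
print(looks_like_ocr_derp("171 G. AQUARII/ELCANO DOCK"))   # False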
 
Thanks to Dirk Wilhelm for the "shipvendor" command for adding/removing ships. For more, see "trade.py shipvendor --help" (and you can just type 'trade.py ship' for short).

Code:
v6.8.0 Jan 19 2015
. (Dirk Wilhelm/kfsone) Added 'shipvendor' command,
. (kfsone) Issue #135 show data age in checklist / on mfd
. (kfsone) Issue #140 explain what/why 'requests' is required,
. (kfsone) Issue #142 "--stations" with "nav" command caused error,
. (kfsone) Added code to fight OCR Derp,
. (kfsone) '--ls-max' will now also exclude unknown (0) distances,
. (kfsone) Added '--max-routes' for setting an absolute max on how many
        of the top routes we use after the first hop,
. (kfsone) Added 'submit-distances' tool for submitting EDStarCoordinator
        star data.
. (kfsone) Added '--prune-score' and '--prune-hops' to run; these let
        you discard routes that are under-performing early on which
        can make calculating longer runs more efficient.
. (kfsone) Added "--progress" to "run" to show the current hop,
. (kfsone) "run" and TradeCalc are much smarter about which hops
        they will consider when using --max-age, --blackmarket, etc.
        (big perf win)
. (kfsone) Renamed "misc/edstarquery.py" to "misc/edsc.py"
        - Added misc.edsc.StarSubmission
        - Renamed misc.edsc.EDStarQuery to misc.edsc.StarQuery
        - Changed "submit-distances" to use "StarSubmission"
 
Maddavo, would you consider an EU/US mirror (or even both) for your market data? Your site maxes out at 90kB/s transfers on a good day (my last transfer just now peaked at 40kB/s). On today's internet that's really bad, especially when the 2-day prices file is already 4MB in size and you have to fetch it many times a day to keep yourself updated because of EDDN.

We were talking with themroc, and setting up a mirror would be easy if you would let someone/us do that.

As a comparison, when I leech 27MB worth of JSON EDDB data from the API page, it's so fast I can barely see the MB/s speed before my wget window closes after the transfer completes ;)
 
Maddavo, would you consider an EU/US mirror (or even both) for your market data? Your site maxes out at 90kB/s transfers on a good day (my last transfer just now peaked at 40kB/s). On today's internet that's really bad, especially when the 2-day prices file is already 4MB in size and you have to fetch it many times a day to keep yourself updated because of EDDN.

We were talking with themroc, and setting up a mirror would be easy if you would let someone/us do that.

As a comparison, when I leech 27MB worth of JSON EDDB data from the API page, it's so fast I can barely see the MB/s speed before my wget window closes after the transfer completes ;)

Should be easy enough to push it up to S3 or similar. Then add a bit of CloudFront if you want it distributed as well.
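
For what it's worth, the push itself is only a few lines; a minimal sketch assuming the boto3 library, AWS credentials already configured, and a hypothetical bucket named "td-mirror":

Code:
# Minimal S3 push sketch; bucket and file names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Upload the prices dump publicly readable; point CloudFront (or any
# CDN) at the bucket if geographic distribution is wanted too.
s3.upload_file(
    "prices-2d.prices",        # local dump (illustrative name)
    "td-mirror",               # hypothetical bucket
    "prices-2d.prices",        # object key
    ExtraArgs={"ACL": "public-read", "ContentType": "text/plain"},
)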
 
That is great to hear, Maddavo - please keep us updated :)

Hey kfsone,

There is a small issue with the Ls-to-star setting. Here is Station.csv:
Code:
'LHS 3447','Worlidge Terminal',109974,'Y','L'
And here is what trade.py outputs:
Code:
trade.py run --cr 10000000 --cap 532 --ly 12.02 -vvv --pad l --avoid alioth
  Load from LHS 3447/Worlidge Terminal (0.00ly/star, Yes/bm, Lrg/pad):
Because it's displaying the value as zero, it ignores the --ls-max setting too, right? (Unless the profit is so good it passes the Ls max 'score' thingy.)
 
Nice little error for you, kfsone...

Code:
trade.py run --ch -vvv --det --ins 400000 --ju 3 --ho 7 --cr 9500000 --ly 17.0 --cap 100 --max-d 8 --fr "tranq" --to "jameson mem" --uni
\trade.py: ERROR: Can't have same from/to with --unique
 
With the new shipvendor command we need a way to share the ShipVendor.csv, I guess.

Maddavo, is it possible to add an upload/download of the ShipVendor.csv to your site?
 
That is great to hear, Maddavo - please keep us updated :)

Hey kfsone,

There is a small issue with the Ls-to-star setting. Here is Station.csv:
Code:
'LHS 3447','Worlidge Terminal',109974,'Y','L'
And here is what trade.py outputs:
Code:
trade.py run --cr 10000000 --cap 532 --ly 12.02 -vvv --pad l --avoid alioth
  Load from LHS 3447/Worlidge Terminal (0.00ly/star, Yes/bm, Lrg/pad):
Because it's displaying the value as zero, it ignores the --ls-max setting too, right? (Unless the profit is so good it passes the Ls max 'score' thingy.)

No

Nice little error for you, kfsone...

Code:
trade.py run --ch -vvv --det --ins 400000 --ju 3 --ho 7 --cr 9500000 --ly 17.0 --cap 100 --max-d 8 --fr "tranq" --to "jameson mem" --uni
\trade.py: ERROR: Can't have same from/to with --unique

Fixed.
 