In-Development TradeDangerous: power-user trade optimizer

Why would people use the update command when EliteOCR generates a .prices file that they can use with the import command?

I can only make informed guesses: https://forums.frontier.co.uk/showthread.php?t=34986&p=1681449&viewfull=1#post1681449.

I could be wrong. But the shoe fits.

Did not know about madupload.py. How do I invoke it?

Code:
python -m misc.madupload
or
python -m misc.madupload filename

You can use it to upload your Station.csv or a .prices file, but by default it uploads "updated.prices".

Question: If I use trade.py import --plug=maddavo --opt=syscsv --opt=stncsv and my local system.csv and station.csv have entries for systems and stations that maddavo does not yet know about, will they be lost from my local file/database?

Yes. I chose poorly when I named this "import". It replaces your local .csv files with the files it downloads and then rebuilds the cache, so any local additions are lost.
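
If you want to keep local additions across an import, one workaround is to snapshot the files first. A minimal sketch, assuming the stock data/ layout (adjust the paths for your install):

Code:
# Snapshot local CSVs and prices before a maddavo import overwrites them.
# Assumes the stock data/ layout; adjust paths for your install.
import shutil
from pathlib import Path

data = Path("data")
backup = data / "backup"
backup.mkdir(exist_ok=True)

for name in ("System.csv", "Station.csv", "TradeDangerous.prices"):
    src = data / name
    if src.exists():
        shutil.copy2(src, backup / name)  # copy2 preserves timestamps
        print("backed up", src)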
 
Now most people are using EliteOCR to capture prices, and they only capture partial stations, so what to do? How do we get rid of dead items easily?

I think what you're doing is fine, so long as you are sending station-at-a-time data and not "updates since a certain time". You may want to record deletions and actually send them explicitly, but that's up to you and not currently necessary.

Beyond that, I think we should probably find out where these partial station uploads are coming from (if that's what they are; I did delete a lot of prices the other day, but this was happening before that). If they are, in fact, from TD-processed OCR imports, well, that counts as my fault for lousy documentation and clarity on the process. If you think about it, they're having a lousy experience: they're jumping through needless hoops for want of a simple explanation/mechanism to do what should be a basic process for them.

(Wow the mature language filter here is aggressive)
 
Great. Someone's OCR tool is starting to misrecognize 'E's as 'F's and 'B's as 'P's, adding extra spaces ("H U B", "SETTLE MENT"), and we're seeing an increase in 'O's misrecognized as 'D's now. The system/station name OCR appears to be pretty terrible.

At this point, derp-checking the Station.csv is becoming really expensive. It's increasingly looking like TD itself shouldn't ship with System or Station data, just refer you to one of the several potential sources of data and make it possible to import other people's data (and I mean "import" rather than "load").
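
To illustrate what the derp-checking amounts to: a fuzzy comparison against known names catches most of these substitutions. A sketch only, assuming a name list loaded from Station.csv and a guessed similarity cutoff:

Code:
# Flag OCR'd station names that look like manglings of known names.
# Sketch: known_names and the 0.8 cutoff are assumptions.
import difflib
import re

known_names = ["Abraham Lincoln", "Daedalus", "Galileo"]  # e.g. from Station.csv

def suspect(ocr_name):
    # Collapse single-letter spacing OCR likes to insert ("H U B" -> "HUB").
    collapsed = re.sub(r"\b(\w) (?=\w\b)", r"\1", ocr_name)
    close = difflib.get_close_matches(collapsed, known_names, n=1, cutoff=0.8)
    if close and close[0] != collapsed:
        return close[0]  # probably an OCR mangling of this existing name
    return None

print(suspect("Abraham Linco1n"))  # -> "Abraham Lincoln"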
 
The question of whether your .prices files are "per station" or "per item": there was a transition period where it looked likely that the .prices entries in the 3h file contained only items that had been updated, and TD's import method treats missing items as deletions. Can you confirm that the current version emits station-at-a-time?

I definitely output whole stations into the files.

The query looks for the latest timestamp at a station and then considers that as the station timestamp. The station timestamp is then used to filter the stations needed in the 3h and 2d files. Then the prices for those stations are dumped. Sounds complex, but it's all one big query and only takes a second or two to run AND write the 3h files. The 2d file takes 19 seconds.

The full price file is an easier query, but takes about 4m20s to write the file.
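
For anyone curious, the station-timestamp idea looks roughly like this against an SQLite database. The table and column names here are assumptions modeled on TD's schema, not Mad's actual backend:

Code:
# Rough sketch: latest per-station timestamp used to filter the 3h dump.
# Table/column names are assumptions, not Mad's actual schema.
import sqlite3

db = sqlite3.connect("data/TradeDangerous.db")
query = """
    SELECT station_id, MAX(modified) AS station_stamp
      FROM StationItem
     GROUP BY station_id
    HAVING station_stamp >= DATETIME('now', '-3 hours')
"""
for station_id, stamp in db.execute(query):
    print(station_id, stamp)  # these stations' prices go into the 3h file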
 
Using this tool for plotting a route but getting an error.

e.g.:

Code:
trade.py nav mok/beth herc/mendel --ly 8.56

results in:

Code:
NOTE: Rebuilding cache file: this may take a moment.
Traceback (most recent call last):
File "D:\XXXX\Elite Dangerous\kfsone\trade.py", line 102, in <module>
main(sys.argv)
File "D:\XXXX\Elite Dangerous\kfsone\trade.py", line 52, in main
tdb = tradedb.TradeDB(cmdenv, load=cmdenv.wantsTradeDB)
File "D:\XXXX\Elite Dangerous\kfsone\tradedb.py", line 537, in __init__
self.reloadCache()
File "D:\XXXX\Elite Dangerous\kfsone\tradedb.py", line 607, in reloadCache
cache.buildCache(self, self.tdenv)
File "D:\XXXX\Elite Dangerous\kfsone\cache.py", line 998, in buildCache
processImportFile(tdenv, tempDB, Path(importName), importTable)
File "D:\XXXX\Elite Dangerous\kfsone\cache.py", line 885, in processImportFile
for linein in csvin:
File "D:\Python34\lib\codecs.py", line 313, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 7087-7088: invalid continuation byte
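
In the meantime, a quick way to locate the offending line (a sketch; point it at whichever file the traceback says was being read):

Code:
# Print any lines in a file that are not valid UTF-8.
# Sketch only; pass the file the import was reading.
import sys

with open(sys.argv[1], "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as exc:
            print("line", lineno, ":", exc, ":", raw)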
 

Might be of interest to Kfsone as well as maddavo: RedWizzard has cleaned up the TGC data and stored it in TOR (The One Reference). You can find his post here. It's in TGC format. Also read the post beneath it; it explains the one extra thing you have to do to make the JSON REALLY compatible with the TGC JSON format.

Note: Tornsoul, author of TGC is MIA since the 16th of December.

Edit: Before TOR from RW came into play, TGC was THE reference.
 
Code:
python -m misc.madupload
or
python -m misc.madupload filename

You can use it to upload your Station.csv or a .prices file, but by default it uploads "updated.prices".


Yes. I chose poorly when I named this "import". It replaces your local .csv files with the files it downloads and then rebuilds the cache, so any local additions are lost.

This explains a lot about why I've been losing data. I had thought that import was a timestamp-sensitive merge. So, the trade.py import command will get the whole .prices file or the 2d file. If I run that command (with the .csv switches) I need to ensure that:


  1. I have used misc.madupload to send my OCRed data up to maddavo (I have added my OCRed data to local TD myself; I don't want to lose it.)
  2. I have used misc.madupload to send my Station.csv too (if I have added or updated any stations since my last import).
  3. I have waited long enough for maddavo's site to have processed my uploads before I attempt another import, or risk losing them. (I know that the import command compares the maddavo timestamp, but there is no guarantee that the next two-hourly cycle will contain my updates, nor for that matter the next 06:00 generation either.)

That seems a little circuitous. But I want to use my new data NOW (locally), not after maddavo has processed it.

Does there need to be a merge command? If not, do I need to use the --download switch and visually inspect the files before I decide whether I want to use them? Where are they downloaded to?

Edit: I feel as though I may be interpreted as being overly critical and negative. Let me just say then that it's a fantastic effort by you both, I probably would not be playing E:D without it. I hope you (Oliver, Dave) see my input as constructive.
 

What I do is I just import once when I start playing.
After every station I update, I upload with madupload.
At the end when I'm finished playing, I upload station.csv.

Next time I play and import, maddavo usually contains my submitted stations.
 

Similar, yes. So are you not updating your local data when you update a station?

It's the 'usually' that concerns me. I need to consider a backup strategy.
 
You're over-complicating it a bit there, Dry. What I tend to do is this:

. Pull from mad periodically, e.g. before I dock,
. Update prices for the station I'm at,
. misc/madupload,

Mad will automatically add a "stub" entry for any stations you upload prices for; the only time you need to upload your Station.csv is when you have extra data:

Code:
trade.py station -a "someplace/somewhere"  # No need to upload
trade.py station -a "someplace/otherwhere" --ls=123 --pad=m # upload-worthy
trade.py station -u "someplace/otherwhere" --bm=n # upload-worthy
trade.py station -a "someplace/somewhere" --ls=0 --pad=? --bm=? # not upload-worthy

In terms of "import": when you use the 'update' command, you're actually creating a small .prices file, which TD then "imports". Everything else about how "import" works is based on this. Do this:

Code:
trade.py update --notepad "Sol/Abraham Lincoln"

That text you are seeing? That's an import file. It's just plain, clear text, designed to match up with the in-game UI as much as possible.

This allows you to remove an item by "update"ing the station and removing the line for that item. When you're done, type:

Code:
start notepad updated.prices

See how that's full of lines with 0s? That's the alternative form, and it's a lot harder to match up visually against the in-game UI.

Key things to note:

- Only the stations listed in the .prices file are affected, so it *is* partial,
- Only the lines that appear in your modified .prices file are retained; that's where it is "concrete".

When TD imports a .prices file, it treats the data for each station it encounters as concrete/absolute.

If I have:

Code:
@ Sol/Alpha
  Algae 100 123 ...
  Food  300 312 ...

@ Sol/Bravo
  Algae 150 0 ...
  Biscuits 1200 1234 ...
  Haddock 1 2 ...

and I import

Code:
@ Sol/Bravo
  Algae 150 0 ...
  Biscuits 1200 1234 ...

@ Sol/Charlie
  Biscuits 1500 0 ...

The "Alpha" entries will be untouched, the station wasn't listed. The "Haddock" error for "Bravo" is removed and "Charlie" is populated.

Mad's system uses a different approach, by necessity.

In your case, I would:

. pull maddavo,
. upload my ocr,
. import to td

I'm going to change the "madupload" script to require that you specify a file.
 
BTW, maddavo's database is broken again as Neil also noted above.
Somebody know how to nudge him?

Recently this thing has been broken whenever I want to play. :(
Maybe we need a backup database; let's call it sanedavo.
 
I definitely output whole stations into the files.

The query looks for the latest timestamp at a station and then considers that as the station timestamp. The station timestamp is then used to filter the stations needed in the 3h and 2d files. Then the prices for those stations are dumped. Sounds complex, but it's all one big query and only takes a second or two to run AND write the 3h files. The 2d file takes 19 seconds.

The full price file is an easier query, but takes about 4m20s to write the file.

Are you using a database backend yet? If so, what I recommend is capturing the query and running the SQL command:

Code:
EXPLAIN QUERY PLAN
SELECT ...

and look at whether there are any places where an index would help. When dealing with timestamps, it can occasionally help (fairly rarely) to put a 'DESC' on the timestamp component of an index, but it's often worth trying out.
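
From Python that's easy to wire up via sqlite3 (a sketch, assuming an SQLite backend; substitute the real dump query for the placeholder one here):

Code:
# Run EXPLAIN QUERY PLAN from Python to check whether indexes are used.
# Sketch: assumes an SQLite backend and uses a placeholder query.
import sqlite3

db = sqlite3.connect("data/TradeDangerous.db")
query = "SELECT station_id, MAX(modified) FROM StationItem GROUP BY station_id"
for row in db.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # 'SCAN TABLE' rows are candidates for new indexes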
 
Completely understand now, thank you.

I am being 'greedy' in terms of wanting updates without losing my own. There's a bunch of us trying to turn a system from Independent to Alliance Control. We're all using various tools to upload data while we visit different stations. So I'm trying to get their data (and they are trying to get mine) more frequently than is probably reasonable.


BTW, maddavo's database is broken again as Neil also noted above.
Somebody know how to nudge him?

What makes you say that?
 
What makes you say that?

Can't import.

Code:
trade.py import --plug=maddavo --opt=stncsv -i
Connecting to server: http://www.davek.com.au/td/station.asp
data/Station.csv: 306600/306600 bytes | 126.65KB/s | 100.00% 
NOTE: Last download was ~3.03 days ago, downloading full file
Connecting to server: http://www.davek.com.au/td/prices.asp
import.prices: 26952308/26952308 bytes |   1.87MB/s | 100.00% 
NOTE: Rebuilding cache file: this may take a moment.
Traceback (most recent call last):
  File "trade.py", line 102, in <module>
    main(sys.argv)
  File "trade.py", line 76, in main
    results = cmdenv.run(tdb)
  File "/Users/andreas/Documents/EliteData/tradedangerous/commands/commandenv.py", line 80, in run
    return self._cmd.run(results, self, tdb)
  File "/Users/andreas/Documents/EliteData/tradedangerous/commands/import_cmd.py", line 96, in run
    if not plugin.run():
  File "/Users/andreas/Documents/EliteData/tradedangerous/plugins/maddavo_plug.py", line 240, in run
    tdb.reloadCache()
  File "/Users/andreas/Documents/EliteData/tradedangerous/tradedb.py", line 607, in reloadCache
    cache.buildCache(self, self.tdenv)
  File "/Users/andreas/Documents/EliteData/tradedangerous/cache.py", line 998, in buildCache
    processImportFile(tdenv, tempDB, Path(importName), importTable)
  File "/Users/andreas/Documents/EliteData/tradedangerous/cache.py", line 885, in processImportFile
    for linein in csvin:
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/codecs.py", line 313, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 8111-8112: invalid continuation byte
 
Can't import.

UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 8111-8112: invalid continuation byte

Latest code gives you a better error when this happens, and it seems like Mad has fixed the problem in the meantime.
 
I am in a different tz (+11).

I'll look into it now.

No problem. I'm headed to bed anyway now. GMT+1
I didn't want to be rude, I just wish the whole thing wouldn't break that easily.

Maybe I can take some precautions on my end, like

Code:
tar czvf data.tar.gz data/

before I import. That way I can revert to a slightly outdated database that still works if something breaks?

Many thanks.
 

The import was failing; it was not importing the bad data. The only thing you could not do was import the bad data.

Code:
trade.py import --plug=maddavo 
<bad error>
trade.py run --from sol ... 
<still works, has old data>

It downloads the data to "import.prices". Then it tries to import that.

If that succeeds, it renames your data/TradeDangerous.prices to data/TradeDangerous.prev and then it generates a new TradeDangerous.prices.

So ... you're already pretty well covered :)
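
And if a later import ever does leave you with bad data, rolling back is just putting the .prev file back in place. A sketch, assuming the stock data/ layout:

Code:
# Roll back to the previous known-good prices file after a bad import.
# Assumes the stock data/ layout described above.
import shutil
from pathlib import Path

prev = Path("data/TradeDangerous.prev")
if prev.exists():
    shutil.copy2(prev, "data/TradeDangerous.prices")
    print("restored previous prices file")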
 
I think the main issue was with the Station.csv file - it had station names with non-UTF-8 characters in there. I didn't know how to filter them. I do now. I've updated the processing so it should filter them out from now on.
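
For reference, the filter can be as simple as dropping any row that doesn't decode cleanly. A sketch (the input/output file names are made up for illustration):

Code:
# Drop rows that are not valid UTF-8 before publishing Station.csv.
# Sketch: input/output names are made up for illustration.
with open("Station.csv.raw", "rb") as src, \
     open("Station.csv", "w", encoding="utf-8", newline="") as dst:
    for raw in src:
        try:
            dst.write(raw.decode("utf-8"))
        except UnicodeDecodeError:
            pass  # skip rows with undecodable station names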
 
Code:
v6.8.5 Feb 04 2015
. (kfsone) Added "trade" command to list station-to-station trades,
. (kfsone) "station" command now lists 5 sell and 5 buy items,
. (kfsone) "station" does a better job of parsing existing station names,
. (kfsone) Removed "--system" from "station" command, use
       trade.py station -a "SYSTEM/Station"
       syntax instead.
. (OpenSS) Added "Run To" option to trade.bat,
. (kfsone) Speed improvements to .csv and .prices loading,
. (kfsone) maddavo plugin tries to explain encoding errors,
. (chmarr) Fixed trade.py using the wrong argument list (#herp),
. (kfsone) titleFixup will handle 'von', 'de' and 'du' correctly,
+ Station/Rare Data (RavenDT)
 