In-Development TradeDangerous: power-user trade optimizer

Both of these are now fixed in 7.0.1.

trade option gives error!
Code:
Command line: -u "C:\Program Files Games\Frontier\TradeDangerous\repository\trade.py" trade "EJAGHAN/Malpighi Enterprise" "BHUTAS/Feynman Terminal" 
Traceback (most recent call last):
...
AttributeError: 'Station' object has no attribute 'tradingWith'
...
[Edit] market also gives same error

FYI, TD works hard to resolve names :)
Code:
trade.py trade eja/malpi bhutas/feyn

also works.
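
For the curious, a minimal sketch of how prefix matching like that can work (TD's real resolver is fuzzier and also handles unique substrings; the function and names here are just illustrative):
Code:
def lookup_station(partial, names):
    """Resolve e.g. 'eja/malpi' against full 'SYSTEM/Station' names."""
    want_sys, _, want_stn = partial.lower().partition("/")
    hits = [
        name for name in names
        if name.lower().partition("/")[0].startswith(want_sys)
        and name.lower().partition("/")[2].startswith(want_stn)
    ]
    if len(hits) != 1:
        raise LookupError("Unknown or ambiguous name: " + partial)
    return hits[0]

names = ["EJAGHAN/Malpighi Enterprise", "BHUTAS/Feynman Terminal"]
print(lookup_station("eja/malpi", names))  # EJAGHAN/Malpighi Enterprise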

There's something wrong with the latest pull from the repository:

Code:
File "/home/viinatim/tradedangerous/tradecalc.py", line 463, in __init__                                                                  
    weheres.append("(item_id IN ({}))]".format(loadItemIDs))

This is an easy one: weheres > wheres. But then I got:

Whoops

Code:
Traceback (most recent call last):                                                                                                          
sqlite3.OperationalError: unrecognized token: "]"

Yeah, there was just a stray "]" from when I converted it from an array, apparently.
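
For anyone following along, the pattern being fixed is roughly this - build up a list of WHERE clauses and join them, with "?" placeholders so the values stay out of the SQL string. A sketch, with illustrative table and variable names:
Code:
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE StationItem (item_id INTEGER, price INTEGER)")

loadItemIDs = [1, 5, 9]          # e.g. an item filter from the command line
wheres, binds = [], []
if loadItemIDs:
    wheres.append("(item_id IN ({}))".format(",".join("?" * len(loadItemIDs))))
    binds += loadItemIDs

sql = "SELECT item_id, price FROM StationItem"
if wheres:
    sql += " WHERE " + " AND ".join(wheres)
rows = db.execute(sql, binds).fetchall()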


Ran with a test command:

Code:
python3 tradedangerous/trade.py run --from 'brani/noakes' --to 'brani/virtanen' --capacity 64 --credits 2374693 --insurance 639780 --ly-per 19.54 --avoid 'Personal Weapons,Narcotics,Slaves,Imperial Slaves' --hops 1  --jumps 1  -vvv -P

You can shorten most command-line options, and "--avoid" understands names with the spaces left out:

Code:
python3 tradedangerous/trade.py run --fr brani/noa --to brani/virt --cap 64 --cr 2374693 --ins 640k --ly 19.54 --avo personalw,narco,slaves,imperials --hops 1  --jumps 1  -vvv -P

You might want to add "--summary" (--sum) for less clutter :)
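
(A standalone demo of why the shortened forms work: Python's argparse, which TD's option handling builds on, accepts any unambiguous prefix of a long option by default.)
Code:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--capacity", type=int)
parser.add_argument("--credits", type=int)

# Unambiguous prefixes resolve to the full option name:
print(parser.parse_args(["--cap", "64", "--cr", "2374693"]))
# Namespace(capacity=64, credits=2374693)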
 
Ubuntu 14.x Users:
Ubuntu 14.x doesn't have an official Python 3.4.2 or higher package, so you'll need to install by hand or using "pyenv". See http://askubuntu.com/questions/474108/ubuntu-14-04-lts-and-python-3-4-2

Ubuntu 15.x Users:
You can run this script https://bitbucket.org/kfsone/tradedangerous/downloads/ubuntu15-tradedangerous-setup.sh or you can read the instructions below:
Ubuntu comes with Python 2.7 by default and their Python 3.4 packages are non-standard (they don't include the package manager, pip).

It's not hard to fix, in a nutshell:

Code:
sudo apt-get update
sudo apt-get install python3 python3-pip

But the problem is that even with those installed, the plain "python" command can still fall back to Python 2.7.

To the rescue: VirtualEnv. This is a small set of scripts that lets you "activate" Python 3 in a given console session.

Code:
sudo apt-get install virtualenv
cd ~/td    # or wherever you put tradedangerous
virtualenv -p "$(type -p python3)" .venv

To activate the virtual environment you need to run the ".venv/bin/activate" script. Remember: it only lasts until the end of that console session or until you type "deactivate".

You can tell it's active by the "(.venv)" at the start of subsequent prompt lines.

Code:
cd ~/td
. .venv/bin/activate

(that's dot, space, dot venv slash bin slash activate - there's a space between the two dots)

Once you've typed this and have the (.venv) prompt prefix, your "python" defaults to Python 3 and it uses its own private packages directory inside the .venv directory. So:

Code:
cd ~/td
. .venv/bin/activate
python -m pip install --upgrade pip
pip install --upgrade setuptools requests
./trade.py help
 
Speaking as a Windows user, I haven't run any trade routes yet, but the maddavo download is much faster due to its being zipped. Nicely done!
 
Hmm.. was "--stock" support deprecated in the Run command? It seems to have disappeared since 6.18.6.

In my announcement, the first post and the CHANGES.txt file:

Version 7.0.0 is up
...
Code:
v7.0.0 May 03 2015
...
. (kfsone) Consistency of various commands:
    - "demand" refers to what a station will buy,
    - "supply" refers to what a station is selling,
    - Changed several command options from "--stock" to "--supply",
    - "--black-market" is now consistently spelled with a hyphen
       (or just --black for short),
    - "--bm" now has two hyphens in all uses (it was -bm in some cases),
...
 
I have been hoping for a "--demand" filter too. Patiently waiting...

Waiting will get you old, asking for something on the issue tracker will get it considered.

It doesn't make much sense to me, not least because we actively discourage users from collecting the data. In my current local db less than half of the price entries have a demand level that's not 0 or -1.
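
If you want to check that ratio against your own database, something along these lines works - note the table, column, and path names here are my guess at the layout, so adjust them to match your copy:
Code:
import sqlite3

# Assumed names: data/TradeDangerous.db, StationItem, demand_level.
db = sqlite3.connect("data/TradeDangerous.db")
total, known = db.execute(
    "SELECT COUNT(*),"
    " SUM(CASE WHEN demand_level NOT IN (0, -1) THEN 1 ELSE 0 END)"
    " FROM StationItem"
).fetchone()
print("{} of {} price entries have a usable demand level".format(known, total))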

Code:
v7.0.2 May 05 2015
. (kfsone) Added "--demand" option to "run", filters based on demand.
      CAUTION: Items with "?" demand will be ignored when using --demand.
      If you want to enter demand values in the update tool, you will
      need to use the "--use-demand" (-D) option of "update".
 
Hi,

Since 7.0.1 I get this error (the "No routes" line at the end of each run below). If I remove the --age it works; changing the value of --age doesn't seem to matter, although I didn't try all values.
PS - I get the same with --max-days-old

$ trade.py run --from "SOKARIANG/Penzias City" --detail --progress --credits 30000000 --capacity 240 --ly-per 0.1 --age 21 --hops 4 -vv
* Hop 1: .........1 origins
* Hop 2: .........1 origins .. 90,000cr gain, 375cr/ton
* Hop 3: .........3 origins .. 192,480-200,400cr gain, 401-417cr/ton
* Hop 4: .........3 origins .. 205,200-290,400cr gain, 285-403cr/ton
./trade.py: Error: No routes had reachable trading links at hop #4

$ trade.py run --from "SOKARIANG/Penzias City" --detail --progress --credits 30000000 --capacity 240 --ly-per 12.54 --age 39 --hops 4 -vv
* Hop 1: .........1 origins
* Hop 2: ........20 origins .. 9,840-90,000cr gain, 41-375cr/ton
* Hop 3: ........71 origins .. 54,720-446,880cr gain, 114-931cr/ton
* Hop 4: .......135 origins .. 71,520-665,040cr gain, 99-923cr/ton
./trade.py: Error: No routes had reachable trading links at hop #4

$ trade.py run --from "SOKARIANG/Penzias City" --detail --progress --credits 30000000 --capacity 240 --ly-per 12.54 --hops 4 -vv
* Hop 1: .........1 origins
* Hop 2: ........22 origins .. 9,840-90,000cr gain, 41-375cr/ton
* Hop 3: ........91 origins .. 56,880-446,880cr gain, 118-931cr/ton
* Hop 4: .......206 origins .. 124,320-677,760cr gain, 172-941cr/ton
SOKARIANG/Penzias City -> PANGILAGARA/Tange Vision (score: 1026014.817600)
Start CR: 30,000,000
Hops : 4
Jumps : 7
Gain CR : 1,027,680
Gain/Hop: 256,920
Final CR: 31,027,680
 
Has anyone tested if having numpy increases speed/reliability?
Here is one data point: No replication, so take it FWIW:

Code:
trade.py run -vvv --summary --ly 18.14 --cap 116 --credits 3m --insurance 550k --from SOL/M.G. --to ERAVATE/Ack --hops 4 --jumps 3

Took 1:35.6 using vanilla Python 3.4 on Windows and 1:27.8 with numpy (1.9.2, 64-bit, with MKL).

A proper test would require replication and a larger suite, but I suspect you don't have much matrix algebra or vectorized calculation in a knapsack/edge-maximization problem anyway, since the value at each node depends completely on the prior nodes and edges.
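
That sequential dependence looks roughly like this - each hop can only be expanded once the previous hop's routes exist, so the outer loop resists being turned into one big array operation (toy data and names, purely illustrative):
Code:
# Toy trade graph: station -> [(destination, best gain for that hop), ...]
best_trades_from = {
    "A": [("B", 100), ("C", 80)],
    "B": [("C", 120)],
    "C": [("A", 90)],
}

routes = [(["A"], 0)]                      # (path so far, gain so far)
for hop in range(3):
    next_routes = []
    for path, gain in routes:              # depends on the hop just built
        for dest, profit in best_trades_from[path[-1]]:
            next_routes.append((path + [dest], gain + profit))
    # prune to the most promising few, as TD's --prune options do
    routes = sorted(next_routes, key=lambda r: r[1], reverse=True)[:10]

print(max(routes, key=lambda r: r[1]))     # (['A', 'B', 'C', 'A'], 310)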
 
Has anyone tested if having numpy increases speed/reliability?

It's not used yet; there's just a stub there to let people experiment with implementations, and you have to set the "NUMPY" environment variable for it to even enable that:

Code:
$ NUMPY=1 trade.py <something>

or

Code:
$ ipython
In [1]: import os ; os.environ["NUMPY"] = "1"

In [2]: import tradedb ; tdb = tradedb.TradeDB()

In [3]: sol = tdb.lookupPlace("sol")

In [4]: sol.pos
Out[4]: array([ 0.,  0.,  0.], dtype=float32)

So far I've only experimented using it for star positions as a way to optimize the neighbor search, but I haven't made enough gains to warrant sticking with any of my research.
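
The flavor of that experiment: with every star position in one float32 (N, 3) array, the "who is within X ly" test collapses into a couple of whole-array operations (the arrays here are stand-ins, not TD's actual data):
Code:
import numpy as np

positions = np.array([[0.0, 0.0, 0.0],        # Sol
                      [8.6, -1.4, 2.2],
                      [120.0, 3.0, -9.0]], dtype=np.float32)
origin = np.array([0.0, 0.0, 0.0], dtype=np.float32)
maxLy = 15.0

dist2 = ((positions - origin) ** 2).sum(axis=1)   # squared distances, no sqrt
nearby = np.nonzero(dist2 <= maxLy * maxLy)[0]    # -> array([0, 1])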

For instance, I tried replacing the "stellar grid" (a crude faux-scenegraph) with scipy's kdtree. It added several seconds to startup time but - in tests - it made querying "get me the stars within X ly of [a,b,c]" really, really fast.

But that was using a random distribution of points. Our data is clumped.

It was still fast, but then I realized you have to convert the result set back to an index list, and that cost about 1/3rd of the speed gain. So - still 2/3rds of a speed gain is good?

Mapping the indexes back to actual objects lost me the rest of the speed.
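
For reference, the scipy version of that experiment looks something like this - the tree build is the startup cost, query_ball_point is the fast part, and the final list comprehension is the index-to-object mapping that ate the remaining gain:
Code:
import numpy as np
from scipy.spatial import cKDTree

positions = np.random.uniform(-500, 500, size=(100000, 3)).astype(np.float32)
systems = ["System{}".format(i) for i in range(len(positions))]  # stand-ins

tree = cKDTree(positions)                     # paid once, at startup
idxs = tree.query_ball_point([0.0, 0.0, 0.0], r=15.0)
nearby = [systems[i] for i in idxs]           # the costly mapping-back step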

I tried a couple of refactors of "getNeighbors" and "genStellarGrid"; in particular I made genStellarGrid batch up as many cells as possible per distance check, to maximize the amount of vectorization per pass through the numpy API.

The stellar grid code definitely has opportunities for optimization, but some of it I don't know how to do in Python (yet). For instance, in C++ I'd replace the "stellar key" with something like

Code:
#include <cstdint>
#include <functional>

class Key final
{
    union {
        struct {                  // anonymous struct: a common extension
            int64_t p_zero : 1;   // pad bit so the three 21-bit fields fill 64
            int64_t m_x : 21;
            int64_t m_y : 21;
            int64_t m_z : 21;
        };
        uint64_t m_hash;
    };

public:
    enum { GridShift = 5 };

    constexpr Key() noexcept : m_hash(0ULL) {}
    constexpr Key(const Key& rhs_) noexcept : m_hash(rhs_.m_hash) {}
    Key(float x_, float y_, float z_) noexcept
        : m_hash(0ULL)
    {
        // only one union member may appear in the init list,
        // so the bitfields are assigned in the body
        m_x = int64_t(x_) >> GridShift;
        m_y = int64_t(y_) >> GridShift;
        m_z = int64_t(z_) >> GridShift;
    }

    Key& operator=(const Key& rhs_) noexcept { m_hash = rhs_.m_hash; return *this; }

    constexpr int64_t gridX() const noexcept { return m_x; }
    constexpr int64_t gridY() const noexcept { return m_y; }
    constexpr int64_t gridZ() const noexcept { return m_z; }

    constexpr uint64_t hash() const noexcept { return m_hash; }
};

namespace std
{
    template<>
    struct hash<Key> {
        std::size_t operator()(const Key& k) const noexcept {
            if (sizeof(std::size_t) == 4)   // fold the top bits in on 32-bit
                return std::size_t(k.hash() ^ (k.hash() >> 38));
            return std::size_t(k.hash());
        }
    };
}

plus a bounding box class with iterators, and you have pretty fast access to searching the grid and making a key out of coordinates.
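
In Python you can approximate the same trick by packing the three grid coordinates into a single int and using that as a dict key - a sketch, not TD's actual code:
Code:
GRID_SHIFT = 5  # 32-ly cells, matching the C++ sketch above

def grid_key(x, y, z):
    """Pack three 21-bit signed grid coordinates into one hashable int."""
    def field(v):
        # arithmetic shift, then keep the low 21 bits (two's complement)
        return (int(v) >> GRID_SHIFT) & 0x1FFFFF
    return field(x) | (field(y) << 21) | (field(z) << 42)

grid = {}
grid.setdefault(grid_key(8.6, -1.4, 2.2), []).append("some Star object")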

- - - Updated - - -

Here is one data point: No replication, so take it FWIW:

Code:
trade.py run -vvv --summary --ly 18.14 --cap 116 --credits 3m --insurance 550k --from SOL/M.G. --to ERAVATE/Ack --hops 4 --jumps 3

Took 1:35.6 using vanilla Python 3.4 on Windows and 1:27.8 with numpy (1.9.2, 64-bit, with MKL).

A proper test would require replication and a larger suite, but I suspect you don't have much matrix algebra or vectorized calculation in a knapsack/edge-maximization problem anyway, since the value at each node depends completely on the prior nodes and edges.

I recommend you grab Visual Studio 2015 Ultimate RC while it's free and install "PyTools for Visual Studio" - excellent perf reporting of Python from such a strange quarter.
 
Yoiks. The only Python I know costs about 56M credits in Elite:Dangerous :eek:

OK, I know a little more than that, but nowhere near what you do.

While we're on the topic, why did you write TD in Python instead of C++, if I may ask?

Thanks!
 
Yoiks. The only Python I know costs about 56M credits in Elite:Dangerous :eek:

OK, I know a little more than that, but nowhere near what you do.

While we're on the topic, why did you write TD in Python instead of C++, if I may ask?

Thanks!

So C and C++ are my business tools, and I used to be a Perl hacker. While I was working at Bliz I'd stumbled across Python's REPL, IDLE, and its fantastic command-line accessibility to assorted tools, and at my new gig they use a lot of Python for automation, so I thought: what the heck. It let me focus on solving the immediate problems during beta without having to worry so much about boilerplate. I'd also totally forgotten about the GIL (the thing that makes Python basically single-threaded).

Part of the reason it lurks here in this particular forum is that I mostly wanted to expose it to the kinds of folks who might already have Python installed and would relish the opportunity to either use the existing command-line commands or quickly open a Python prompt and do their own thing. The API has been wobbly; there are some things that are trivial to do and some things that are a little hard to eke out, but I don't get enough feedback to know where to tweak that :)

I've been disappointed by Python's GUI support - there seem to be some unique little snowflakes that give it power in specific conditions (I used one to write a GUI for testing the WoW mobile chat API when I was making changes to how chat worked in-game) - but none that extend well to more than one platform without placing large hurdles in front of users who didn't sign on for installing C compilers to build packages just to host a Python script...

If I'd written it in C++ there'd probably be no GUI element at all; there's xplat stuff that's fun to work on and there's xplat stuff that sucks a donkey's nads in hell... GUIs being one of them, and pre-C++14 Unicode being another.

I do have a "tdplus" project, which is intended to be a sort of TD backend service written in C++ (on some systems TD takes 30s to start up which makes it a bit of a shonky command line tool and some of the things I could do to speed it up would make that worse; a long-running process can do things with the dataset that would make it slow to start up but fast to respond to queries).

But I've also been pondering whether I would redo TD in C#, especially now that .NET is open source. The perf would be better than Python's, there's a common GUI toolkit, and I've written a few small things in it (https://bitbucket.org/kfsone/houseomatic).
 
It is smoking on Windows, but breaks TD / python3 on my Mac. More testing tomorrow... yawn...
Oh yes, --age is still broken with NUMPY=1

Issues that get reported on the issue tracker (http://kfs.org/td/issues) tend to get fixed faster. That said, I fixed the numpy issue earlier today and I just pushed my fix for --age now.

Code:
$ python -c "import numpy"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'numpy'
(.venv)
osmith@WOTSIT /c/dev/trade (master)
$ tdrun lave/lave --hops 1  --age 5
$ /c/dev/trade/trade.py run -vv --progress --ly=50 --empty=82 --cap=212 --jumps=1 --cr=2153796 --prune-score=40 --prune-hops=3 --from="lave/lave" --hops 1 --age 5
* Hop   1: .........1 origins
LAVE/Lave Station -> ARRO KOP/Shaikh Hub (score: 121242.766080)
  Load from LAVE/Lave Station (295ls, BMk:Y, Pad:L, Shp:Y, Out:Y, Ref:Y):
      212 x Tobacco    4,926cr vs    5,499cr, 13 hrs vs 42 hrs
  Jump LAVE -> ARRO KOP
  Unload at ARRO KOP/Shaikh Hub (1.67Kls, Pad:M) => Gain 121,476cr (573cr/ton) => 2,275,272cr
  ----------------------------------------------------------------------------
Finish at ARRO KOP/Shaikh Hub (1.67Kls, Pad:M) gaining 121,476cr (573cr/ton) => est 2,275,272cr total
 
It is smoking on Windows

It's actually not numpy that's doing that; there were four significant perf things I changed in 7.0.

1. Re-merge item data (buy/sell in one table),
2. Put "buying" prices in a dictionary when comparing two stations (previously I was doing the O(log N^2) thing of walking two lists),
3. Stopped caching the results of #2,
4. Fixed a perf flaw in the knapsack,

#1 helps because it's just a pattern SQLite is not good at: there was a huge amount of CPU being spent performing joins with one approach, or looking up indexes with the other, and adding an ORDER BY seems to force SQLite to push the data at you as a whole rather than streaming it.

#2 was an optimization I'd just not gotten around to.
A big chunk of the Windows speed-up came from putting item prices back into one table (StationBuying + StationSelling -> StationItem), which removed a huge bottleneck in the SQLite<->Python layer; my C++ td engine was taking forever loading the data, and when I ran a profile on it with a debug sqlite library, it was almost entirely sqlite moving data around to perform joins and/or sorts. Single table => much faster.
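
In code terms, #2 is roughly this change - index one station's demand by item id once, then each supply row from the other station becomes a single dict probe instead of a list walk (O(N+M) instead of O(N*M); variable names illustrative):
Code:
# (item_id, price) pairs for the two stations being compared
src_supply = [(1, 4926), (2, 310), (3, 1137)]
dst_demand = [(1, 5499), (3, 980)]

buying = dict(dst_demand)              # build once: O(M)

trades = []
for item_id, cost in src_supply:       # one dict probe per supply row
    sale = buying.get(item_id)
    if sale is not None and sale > cost:
        trades.append((item_id, sale - cost))   # gain per ton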

#3 helped early on, but the dataset reached the point where the cache just created memory pressure.

#4 The knapsack has an early-out when it fills the cargo hold, predicated on knowing it's testing items in (profit asc, cost desc) order. At one point I'd experimented with some different orderings and gotten good results with my test data, so I had to remove the early-out optimization. Then I tried it with real data and realized my test data was invalid. I took out the ordering but didn't put the early-out back, so the knapsack had been trying exhaustive combinations since back in 5.x.
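
A much-simplified sketch of that early-out - with candidates sorted best-first, the fill loop can stop the moment the hold is full (the real knapsack also has to juggle the credit limit and per-item stock, which is what makes the ordering subtle):
Code:
def fill_hold(trades, capacity, credits):
    """trades: (gain_per_ton, cost_per_ton, units_available) tuples."""
    load, total_gain = [], 0
    for gain, cost, avail in sorted(trades, reverse=True):  # best gain first
        qty = min(avail, capacity, credits // cost)
        if qty > 0:
            load.append((gain, qty))
            total_gain += gain * qty
            capacity -= qty
            credits -= cost * qty
        if capacity == 0:
            break               # the early-out: the hold is full
    return load, total_gain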

Meanwhile, my several experiments with numpy had a much bigger change footprint, haven't yet resulted in a significant end-to-end perf win, and would increase the "extra libraries" overhead for some of our less techy users. Worse, the real gain for us from numpy would come through an octree or kd-trees, and the best implementation is in scipy.spatial, which can be a pain to install (and then there's all the blas/lapack stuff to install to get the high-end perf that would make it rock; it just gets nasty).

The key feature of numpy is vectorization, and that can be leveraged if we store the data the right way - for instance, refactor the buy/sell lists the right way and you could vectorize the few math operations we need to do there. The question is how much it would cost to marshal them into a usefully vectorizable layout.
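
For instance, if both stations' prices were laid out as aligned arrays (one slot per item id), the profitable-item test becomes a couple of whole-array operations - a hypothetical layout, just to show the shape of it:
Code:
import numpy as np

# One slot per item id; 0 means "not traded at that station".
supply_price = np.array([4926, 0, 1137], dtype=np.int32)   # source sells at
demand_price = np.array([5499, 210, 980], dtype=np.int32)  # destination pays

gain = demand_price - supply_price
tradable = (supply_price > 0) & (gain > 0)
item_ids = np.nonzero(tradable)[0]                  # -> array([0])
order = item_ids[np.argsort(gain[item_ids])[::-1]]  # best gain first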
 