In-Development TradeDangerous: power-user trade optimizer

It was and I can read it, but how did you come up with it? That's the clever part.

Fortunately, space-time and special relativity only really need Pythagoras, so I can indulge my penchant for physics - though general relativity is a real bugbear.
 
Well, it all started when I realized that the sigmoid function '-x/(1+abs(x))' creates a curve that ranges from -1 to +1. The rest is just translations and scaling.

As for the curve, that is, the third piece, I wanted something that smoothly transitioned from +1 to 0 fairly quickly in the x=0 to 4 range. '1/(x^(x/4))' got me the curve, and then, again, the rest was just a matter of translation and scaling.

Admittedly, it took a lot of fiddling on the sigmoids to get them to drop quickly enough to be worthwhile, but it was mostly just tinkering with the translation and scaling values.

Putting them together was the easy part. I just had to arrange it so that the sigmoid with the inflection at x=2 added 0<=y<=0.5 to the main curve, the main curve itself ranged over -0.5<=y<=0, and the sigmoid with the inflection at x=4 added -0.5<=y<=0 to the main curve. That gives a combined curve ranging over -1<=y<=0.5, which, when added to 1, results in the final multiplier having a range of 0<=y<=1.5.
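The construction above can be sketched in Python. The steepness factor and the exact translation/scaling constants below are my guesses for illustration, not the tuned values from the actual formula; the point is how the three pieces combine into a multiplier bounded by 0 and 1.5.

```python
def sigmoid(x):
    """Fast sigmoid: smoothly ranges over (-1, +1)."""
    return x / (1 + abs(x))

def multiplier(x):
    """Illustrative multiplier in [0, 1.5] built from the three pieces.
    The steepness factor 2 and the offsets are hypothetical stand-ins."""
    piece1 = 0.25 * (1 - sigmoid(2 * (x - 2)))   # inflection at x=2, contributes 0..0.5
    main   = 0.5 * (x ** (-x / 4) - 1)           # 1/(x^(x/4)) shifted to -0.5..0
    piece3 = -0.25 * (1 + sigmoid(2 * (x - 4)))  # inflection at x=4, contributes -0.5..0
    return 1 + piece1 + main + piece3            # shift the sum up by 1
```

With these particular constants the result stays inside [0, 1.5] and falls off quickly past x=2, mirroring the intent described above.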
 
Something is off... Not sure what or how.

I was looking for sell (trade.py sell) prices (in this case platinum) for some cans I picked up from a wreck on planet. This included the line:

Code:
Station                                           Cost DistLy Age/days      StnLs B/mkt Pad Plt
-----------------------------------------------------------------------------------------------
Koli Discii/MacLean Terminal                    13,272   0.00     4.23        87K   Yes Lrg  No

This is odd because I was right there, for the second time today. From the logfile[1], I can see the event:
Code:
Market update for KOLI DISCII/MACLEAN TERMINAL finished in 0.454 seconds.
I can also confirm the price of platinum at this station is 14,123.
Other EDDN-attached websites have the current price too, so the update went into EDDN.


The server prices file shows

Code:
@ KOLI DISCII/MacLean Terminal
   + Chemicals
      Explosives                        576       0     18869M         -  2018-06-15 03:05:00
      Etc

Which matches the 4-day gap. So it appears as if the listener did not write this event (and presumably others) to the database.

Eyeonus - This might well explain the weirdness we had when testing Mark's run. A clean import pulled in correct data from EDDB, and the listener then filled in errors/missing data.
Given you had also been running the listener locally and didn't have the problem, it occurred to me that the server database might be correct and indeed, running the same query directly upon the server gave:
Code:
Station                                           Cost DistLy Age/days      StnLs B/mkt Pad Plt
-----------------------------------------------------------------------------------------------
Koli Discii/MacLean Terminal                    14,123   0.00     0.01        87K   Yes Lrg  No

Which indicates that the information did indeed go into the server's TD database after all, but presumably NOT out and into listings-live.csv for download to my local PC.


[1] Periodic timestamps would be really helpful for tracking events. Every 5 mins or something have the listener write out a timestamp.
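The suggestion in [1] is straightforward to sketch: a daemon thread that emits a timestamp at a fixed interval, alongside the listener's normal output. The function name and the `log` parameter are my own for illustration; the real listener would hook this into whatever logging it already uses.

```python
import datetime
import threading
import time

def start_heartbeat(interval=300, log=print):
    """Spawn a daemon thread that writes a timestamp every `interval`
    seconds (default 5 minutes). `log` is any callable taking a string."""
    def beat():
        while True:
            now = datetime.datetime.now(datetime.timezone.utc)
            log("[heartbeat] " + now.isoformat())
            time.sleep(interval)
    thread = threading.Thread(target=beat, daemon=True)
    thread.start()
    return thread
```

Because the thread is a daemon, it dies with the listener process and needs no shutdown handling.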
 

There does appear to be a gap when using EDDBlink via the listener. You start the listener and it checks for an update, processing it if it finds one. During this time it does not appear to receive any messages from EDDN. Once the update has been processed, the listener part starts and from there on processes the messages as they arrive, but the possibly large number of updates between the start and end of the update processing are missed, or so it seems.

I've not run it long enough to see if the listener stacks or ignores messages during any subsequent updates either from listings or listings-live.
 

It seems to stop for things, but I don't believe it does. It pauses, but carries on listening. Bear in mind it's multithreaded; the listener carries on just fine while the DB is busy with stuff. Clearly it did NOT ignore the message, as it was correctly entered into the server database. What doesn't appear to have happened is the information being written out into the files for download. It's only going to be an issue on the server side.
 

Given that the message processing includes a printout on the console to show that a message has been processed, and I'm not seeing those messages during the update, I wonder if what you suggest is occurring.

If the messages were processed I'd expect the console output to be displayed.
If the messages were queued I'd expect the console output to suddenly display a lot of messages quite fast after the update has processed.

I'm not seeing either of those. Once the update processing has finished, the output messages start normally. At slow times there can be a delay of a minute or so before the output starts appearing on the console.

Still, I'll probably be switching to EDDBlink only once I have TD Helper done, and just run the update DB function every so often.
 
In the case of my issue, the aforementioned message clearly appeared on my console (or, as it's redirected, in the log file). Furthermore, additional testing showed that the information was in the server database, and it can only have been put there by the listener, as said information was only minutes old.
So I am suggesting that whatever went wrong in my case had nothing whatever to do with a missed message and everything to do with what was written out to the listings-live.csv file, which is only an issue on the server side, as the client side doesn't create that file.

Now, if you believe you have found a wholly separate problem with missed messages, by all means do some testing and show your results so that Eyeonus can have a crack at fixing it, but it's not the problem I'm reporting.
 
I have come across a problem with TD that someone may be able to help solve. I tried what I thought would be a simple route only to find that TD seems to stop working.

I've reproduced this on a different set of parameters.

Code:
[elite@mort tradedangerous.server]$ cat .tdrc_run
-v
--pla=YN?
--credits=50000000
--capacity=150
--ly-per=29.09
--empty-ly=33.31
--insurance=15000000
--progress
--pad-size=L

Code:
[elite@mort tradedangerous.server]$ ./trade.py run --from fujin --to fujin --hops 4 --age 2
* Hop   1: .........1 origins
* Hop   2: .......292 origins
* Hop   3: .....1,437 origins
[==================       ] ^CTraceback (most recent call last):
  File "./trade.py", line 104, in <module>
    main(sys.argv)
  File "./trade.py", line 77, in main
    results = cmdenv.run(tdb)
  File "/home/elite/tradedangerous.server/commands/commandenv.py", line 81, in run
    return self._cmd.run(results, self, tdb)
  File "/home/elite/tradedangerous.server/commands/run_cmd.py", line 1220, in run
    newRoutes = calc.getBestHops(routes, restrictTo=restrictTo)
  File "/home/elite/tradedangerous.server/tradecalc.py", line 909, in getBestHops
    trade = fitFunction(items, startCr, capacity, maxUnits)
  File "/home/elite/tradedangerous.server/tradecalc.py", line 700, in fastFit
    return _fitCombos(0, credits, capacity)
  File "/home/elite/tradedangerous.server/tradecalc.py", line 676, in _fitCombos
    subLoad = _fitCombos(iNo+1, crLeft, capLeft)
  File "/home/elite/tradedangerous.server/tradecalc.py", line 676, in _fitCombos
    subLoad = _fitCombos(iNo+1, crLeft, capLeft)
  File "/home/elite/tradedangerous.server/tradecalc.py", line 676, in _fitCombos
    subLoad = _fitCombos(iNo+1, crLeft, capLeft)
  [Previous line repeated 7 more times]
  File "/home/elite/tradedangerous.server/tradecalc.py", line 637, in _fitCombos
    for iNo in range(offset, len(items)):
KeyboardInterrupt

And can you guess what happened when I added in a supply term?

Code:
[elite@mort tradedangerous.server]$ ./trade.py run --from fujin --to fujin --hops 4 --age 2 --supply=20
* Hop   1: .........1 origins
* Hop   2: .......289 origins
* Hop   3: .....1,433 origins
NOTE: Pruned 2361 origins too far from any end stations
* Hop   4: .......375 origins
Fujin/Futen Spaceport -> Fujin/Futen Spaceport (score: 1694090.676000)
  Load from Fujin/Futen Spaceport: 150 x Foods/Coffee (@1041cr),
  Dock at Capo/McCaffrey City
  Load from Capo/McCaffrey City: 150 x Medicines/Basic Medicines (@288cr),
  Dock at Fujin/Futen Spaceport
  Load from Fujin/Futen Spaceport: 150 x Foods/Coffee (@1041cr),
  Dock at Capo/McCaffrey City
  Load from Capo/McCaffrey City: 150 x Medicines/Basic Medicines (@288cr),
  Dock at Fujin/Futen Spaceport
  Finish Fujin/Futen Spaceport + 1,693,500cr (2,822cr/ton)=> 51,693,500cr

To further test, I changed to supply=1 and produced the error case. The idea of putting a default supply=1 was mentioned somewhere (might have been when I was chatting with Eyeonus) but clearly that isn't a fix. Also we can't reasonably put a higher default, as that would impact users with very small ships[1].
On the other hand, if I tell TD I only have 1 ton of space (without a supply term), it gives me a perfectly good route, but for just the one ton.

[1] I find it hard to believe people are using trade tools for their 1 ton of cargo room, but anyway.
 
There does appear to be a gap when using EDDBlink via the listener. You start the listener and it checks for an update and processes it if it finds one. During this time it does not appear to receive any messages from EDDN.

The listener is always receiving messages, it just pauses processing them (i.e., pushing the data into the DB) when the DB is locked due to the update running, which is why you don't see the "Market update ..." messages until it's done.

Once the update has been processed the listener part starts and from there on processes the messages as they arrive, but the possibly large number of updates between the start and end of the update processing are missed, or so it seems.

I've not run it long enough to see if the listener stacks or ignores messages during any subsequent updates either from listings or listings-live.

No, the listener never stops as long as the program is running. When the update is ongoing, the listings exporter and message processor pause until the update is complete.

If you actually watch the listener during an update, you'll notice a flood of those messages occurring as soon as the update finishes. This is the message processor catching up to the queue, which was constantly growing while waiting for the update to finish.

The listener and the processor are on two different threads: the listener listens for messages from EDDN and puts them into a queue. That's all it does. The message processor takes the messages out of the queue FIFO-style, processes them into the format the DB uses, and injects them into the DB.

Detailed explanation:
The EDDBlink-listener program runs either three or four separate threads:
1) The actual listener, which is started as soon as the startup process is complete.
This is the thread that listens for messages and adds them to the queue.

2) The update checker, which is started right after the listener.
This is the method that runs the EDDBlink plugin when it detects that an update to the EDDB dump has occurred.
Before it starts the updates, it signals that it needs the DB: "EDDB update available, waiting for busy signal acknowledgement before proceeding.".
It then waits for the listings exporter and message processor to signal they got the signal and are waiting for the update checker to complete, and then runs the update.
When it's finished, it signals completion to the exporter and processor, and they both unpause.

A note on the updating:
The EDDBlink plugin actually does the updating; all the update checker does is see whether there's an update available and, if so, call the plugin.
When the EDDBlink plugin runs, if the data from the EDDB listings is newer than the DB data, it updates the data, setting the "from_live" flag to 0.
If the data from the EDDB listings is the same age as the data in the DB, meaning the live data from the day before has made it to the latest dump, it leaves the data alone but sets its "from_live" flag to 0.
If the data from the EDDB listings is older than the DB data, it skips that data and doesn't do anything to the data in the DB.
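The three-way merge rule just described can be condensed into a small decision function. This is only a sketch of the rule as stated, with illustrative names; the real plugin works on timestamps inside the DB rows rather than a helper like this.

```python
def merge_dump_row(dump_ts, db_ts):
    """Sketch of the EDDBlink dump-merge rule described above.
    Returns (action, from_live value to set), with None meaning untouched."""
    if dump_ts > db_ts:
        return ("update", 0)   # dump is newer: take its data, mark as not-live
    if dump_ts == db_ts:
        return ("keep", 0)     # yesterday's live data made the dump: just reset the flag
    return ("skip", None)      # DB already has newer (live) data: leave it alone
```

Either way the flag ends up 0 whenever the dump has caught up, which is what lets the listings exporter later select only rows with from_live = 1.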

3) The listings exporter, which is started 5 seconds after the update checker in order to give the checker enough time to check if it needs to update immediately.
This is not run when the listener is running as a client. In that case, it "permanently" (i.e. as long as the program is running) turns on the busy signal acknowledgement and shuts itself down.
When it's not currently active and gets a busy signal from the update checker, it acknowledges it, "Listings exporter acknowledging busy signal.", and pauses itself until it gets the no-longer-busy signal, "Busy signal off, listings exporter resuming."
When it begins exporting the listings, it sends a signal to the message processor that it needs the DB, "Listings exporter sending busy signal."
It doesn't need to send one to the update checker, because the update checker will wait for acknowledgement from the exporter, and the exporter won't give that until it's done exporting.
Once it gets acknowledgement from the message processor, it grabs all the listings that have been updated since the last dump, i.e., all the listings that have a "from_live" value of 1.
Once it's gotten them, it relinquishes the DB and turns off its busy signal, allowing the message processor to resume.
It then exports all the listings it got to the live listings file.

4) The message processor, which is started immediately after the listings exporter.
This is the method that actually puts the messages from the EDDN into the database.
If it receives a busy signal from either the update checker or the listings exporter, it pauses, "Message processor acknowledging busy signal."
When the busy signal(s) are turned off, it resumes from where it left off, "Busy signal off, message processor resuming."
When it is active, it pulls the first message from the queue being built up by the listener, does some processing, and inserts it into the DB, setting the "from_live" flag for each entry it inserts to 1.
If there are still messages in the queue, it immediately proceeds to process the next message.
If there are no messages in the queue remaining, it tells the DB to commit the changes it has made.
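The listener/processor split and the busy-signal pause can be modeled in a few lines. This is a toy model of the behavior described above, not the listener's actual API; all class and method names here are illustrative.

```python
import queue
import threading

class MiniListener:
    """Toy model: the listener thread only enqueues; the processor drains
    the queue FIFO-style unless a busy signal is set."""
    def __init__(self):
        self.q = queue.Queue()
        self.busy = threading.Event()   # set while an update holds the DB
        self.db = []                    # stands in for the real database

    def on_eddn_message(self, msg):
        self.q.put(msg)                 # listening never pauses

    def process_pending(self):
        """Drain the queue; a no-op while the busy signal is set."""
        if self.busy.is_set():
            return 0
        count = 0
        while not self.q.empty():
            msg = self.q.get()
            self.db.append((msg, 1))    # insert with from_live = 1
            count += 1
        return count
```

While `busy` is set, messages pile up in the queue but none are lost; clearing it produces exactly the post-update flood of processing described above.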
 
On that, I decided to go with a different formula that trends to y=1 (rather than y=0, as the formula of mine that Avi posted does) at lower penalty values, to more closely match kfsone's intention:

https://goo.gl/sn1PqQ

The red line is at a penalty of 100%, the black at a penalty of 0%, the darker red lines are 25%, 50%, and 75%. The other colors are the three pieces of the final penalty formula which is in teal.

As you can see from the WolframAlpha limit result, the formula never goes below 0.

In general looks nice, but I found two issues. Firstly, Python exponentiation uses **, not ^. I sent you a PR to fix that. Secondly, one of my test cases overflowed. See https://github.com/eyeonus/Trade-Dangerous/issues/6. May be simple enough to go to logs, but I have to go to work now. Thanks!
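Both points are easy to demonstrate. `^` is bitwise XOR in Python, and for the overflow, the "go to logs" idea works: rewriting 1/(x^(x/4)) in log space turns a float overflow into a harmless underflow. The function name here is mine; I'm not claiming this is how the fix in issue #6 was actually made.

```python
import math

# In Python, ^ is bitwise XOR; exponentiation is **:
assert 2 ^ 3 == 1   # XOR, not 8
assert 2 ** 3 == 8

def curve_logspace(x):
    """1/(x**(x/4)) rewritten as exp(-(x/4)*ln x). For large x the direct
    form overflows a float, while the negative exponent here simply
    underflows to 0.0. Assumes x > 0."""
    return math.exp(-(x / 4.0) * math.log(x))
```

For example, `2000.0 ** 500.0` raises OverflowError, while `curve_logspace(2000.0)` quietly returns 0.0, which is the correct limit for the penalty curve anyway.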
 

Oops.

Good thing no one is using this yet, right?

Okay, in the hopes that the data hasn't updated to the point where that error isn't occurring anymore, can you copy the whole TD install over to a new location for testing? We'll need a static dataset to run this problem down.

Then, if you can, run the command again with the supply parameter. We need at least to find out which stations it's being slow on (the one it finally freezes on in the error case is always one of the ones it slows down on) and see what, if anything, those stations have in common.

It would also be nice to find out what supply values do and do not produce the error case, as well as if any other parameters successfully avoid the freezing issue, but that's secondary and possibly a red herring anyway.

If you can get me a copy of the test TD folder, that would be helpful and allow me to look at this directly, as well.
 
Many thanks to you both for clearing that up for me. That all makes complete sense.
 
Okay, first off let me preface this post with a warning: I'm not a Python programmer. However, I have been programming in one form or another for the last 48 years, most of them as a professional programmer, so I have an idea of what programming is all about. Sort of.

I've poked around in the TD code and, by using print statements in lieu of the debugging functionality of VS Code (which I've not figured out how to use as yet), I believe I have found out why we get the slowdown in processing under certain conditions.

Firstly, the _fitCombos method in the fastFit method is recursive and sits in the innermost loop of the getBestHops method, so it has a great effect on the processing time. The depth the recursion reaches depends largely on the number of trade items returned by the TradeCalc.getTrades function. If the number of items returned by this function exceeds 20, the time spent in the _fitCombos method becomes large, and the growth looks like it might be exponential rather than geometric.

Setting the supply parameter to something like 20, or the max price age to 1, reduces the number of trade items returned by the getTrades function and hence speeds up the processing. Likewise, reducing the number of jumps reduces the number of origins and thus the number of routes that have trades in excess of 20 items.

One possible solution would be to transform the recursive method into an iterative one (I think I'm correct in saying that this is always possible), although this may not increase performance that much. Another possibility is to pre-process the items returned from the getTrades method somehow so that the list does not contain more items than absolutely necessary. No idea how this should be done.
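The recursive-to-iterative transformation can be sketched with an explicit stack. This uses a simplified stand-in for TD's trade records, not the real _fitCombos signature, and it only demonstrates the transformation; the search space (and hence the blow-up past ~20 items) is unchanged.

```python
def fit_combos_iterative(items, credits, capacity):
    """Explicit-stack rewrite of a _fitCombos-style recursive search.
    `items` is a simplified stand-in: (cost, gain_per_unit, supply) tuples.
    Returns the best total gain and the load achieving it."""
    best_gain, best_load = 0, []
    # Stack frames replace recursive calls: (index, cr_left, cap_left, gain, load)
    stack = [(0, credits, capacity, 0, [])]
    while stack:
        i, cr, cap, gain, load = stack.pop()
        if gain > best_gain:
            best_gain, best_load = gain, load
        if i >= len(items):
            continue
        cost, per_unit, supply = items[i]
        # Quantity is bounded by supply, hold space, and remaining credits.
        max_qty = min(supply, cap, cr // cost if cost else cap)
        for qty in range(max_qty + 1):
            stack.append((i + 1, cr - qty * cost, cap - qty,
                          gain + qty * per_unit,
                          load + [(i, qty)] if qty else load))
    return best_gain, best_load
```

This trades Python's call stack for a heap-allocated list, so it can't hit the recursion limit, but the number of frames explored is the same as in the recursive version.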

That's what I've found so far. I'm just going to do a few metrics to find out if the depth of recursion is linear or exponential or whatever, mainly because I want to know :)
 