In-Development TradeDangerous: power-user trade optimizer

Adding a simple line to make the waiting loop wait 1 second before jumping back to the top of the loop fixes all that, as you can see in the latest commit log.
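
For anyone curious, the change amounts to something along these lines (a rough sketch only, not the actual listener code; the queue and handler are stand-ins):
Code:
import queue
import time

def drain(q, handle):
    # Poll the queue, sleeping briefly whenever it is empty instead of spinning.
    while True:
        try:
            message = q.get_nowait()
        except queue.Empty:
            time.sleep(1)  # the one-line fix: yield the CPU before looping back to the top
            continue
        handle(message)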

Keep up. I sent a PR 4 hours ago to do the exact same fix

(and thus claim all the props for testing, identifying and solving the problem :x :p)
 
Keep up. I sent a PR 3 hours ago to do the exact same fix

(and thus claim all the props for testing, identifying and solving the problem :x :p)

Well, I made the change, tested it, and pushed it to GitHub before ever coming here to catch up on posts, and the only reason I didn't do it earlier, when I had the idea, was because I was at work, so I say "great minds" and to pish with your tosh. :D
 
Only because during my 4 hours of testing did I notice the discrepancy and test it both ways to prove the existence of the problem.
Then, despite being a non-Python programmer and having a hangover, I came up with the fix.
We are scientists and engineers and I published first dammit!
*feels grumpy cos after all that work eyeonus steals his thunder*

I will finish up the testing on autocommit tomorrow.
 
I literally have no time to work on this, but I'm fairly certain that yes, it is. IIRC, there are a few pragmas that are run in the cache.py file, and turning on foreign keys is one of them.

EDIT: Sorry, looks like it's in tradedb.py:
Code:
    def getDB(self):
        if self.conn:
            return self.conn
        self.tdenv.DEBUG1("Connecting to DB")
        conn = sqlite3.connect(self.dbFilename)
        conn.execute("PRAGMA foreign_keys=ON")
        conn.create_function('dist2', 6, TradeDB.calculateDistance2)
        return conn

EDIT2: It's also turned on when (re)building the cache, which is done in the buildCache method in cache.py

Thanks for taking the time to have a look at this.

I wonder what is causing the FK exception? I have some time this week so I might try to dig into this a little deeper.
 
Only because during my 4 hours of testing did I notice the discrepancy and test it both ways to prove the existence of the problem.
Then, despite being a non-Python programmer and having a hangover, I came up with the fix.
We are scientists and engineers and I published first dammit!
*feels grumpy cos after all that work eyeonus steals his thunder*

I will finish up the testing on autocommit tomorrow.

Sorry, I didn't mean to steal your thunder. You did amazing, what with being out of your comfort zone and all. You definitely deserve credit for that.
 
Only because during my 4 hours of testing did I notice the discrepancy and test it both ways to prove the existence of the problem.
Then, despite being a non-Python programmer and having a hangover, I came up with the fix.
We are scientists and engineers and I published first dammit!
*feels grumpy cos after all that work eyeonus steals his thunder*

I will finish up the testing on autocommit tomorrow.
Hey, as the rubber duck in the room, I demand all the credit!
 
I have done some more testing.

Now that the server slowness is cured (mostly [1]), a different pattern emerges:
When the server is chugging happily along, servicing the EDDN queue as fast as information arrives, my test case will generally update fine.
As soon as there is any kind of backing up of the queue, such as when generating the listings-live.csv file, my test case will often fail to update.
Given our previous discussion on transactions and pysqlite doing things in the background, I suspect that what is happening is:
- under low load, pysqlite doesn't cause anything to go awry
- under load, because there are a number of INSERT/REPLACE statements to process, pysqlite bundles them up, encounters a problem we cannot easily debug, and then rolls the whole transaction back.

Switching to autocommit (isolation_level = None) on the database means that pysqlite's automagic transaction management is taken out of the equation and each INSERT or REPLACE is committed individually.
This gives a small hit on speed, but not enough to worry us. It may also be a kludge, and we should perhaps be managing our own transactions more elegantly - but neither I nor eyeonus is that hot on SQL (or SQLite), so if anyone is up on that sort of thing, we'll happily code the appropriate Python around whatever best-practice SQL solution anyone cares to provide. In any event, this change to the transaction style doesn't slow us down enough to fall behind the EDDN queue - it costs on the order of hundredths or at most tenths of a second per complete market update - so no big deal in our use case.
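
For reference, switching to autocommit is just a connection-level change, roughly like this (an illustrative snippet against a throwaway in-memory table, not the real TD schema):
Code:
import sqlite3

# isolation_level=None puts the sqlite3 module in autocommit mode, so it never
# opens an implicit transaction behind the scenes - each statement is committed
# as soon as it runs.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE demo (station_id INTEGER, item_id INTEGER, price INTEGER)")
conn.execute("INSERT OR REPLACE INTO demo VALUES (?, ?, ?)", (1, 42, 1234))
# No commit() needed; the row is already written.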

Primarily what this change does, from my testing, is ensure that no data is lost. I cannot make my test case fail, at all, using autocommit. So I've sent a PR and hopefully, finally, that is put to bed.

Requesting permission to stroke my own ego?


[1] It still has a brief hiccup immediately after exporting listings-live.csv
 
In other news - the server (or rather its virtual machine) has now been migrated to a faster host. It currently runs on CentOS 6, but I've got a second VM with a basic CentOS 7 install and I will be migrating all my services (including TD) to the more up-to-date OS at some point in the near future (when I get around to doing it). The new VM also has more resources allocated, so altogether: newer, better, faster :D
 
Well - my fix is a fix, but apparently it makes the listener run like an extremely ploppy turd on either eyeonus's PC or mine. The server is running Linux (though ultimately it's a VM under Hyper-V) and may have a later, greater version of the SQLite library, but god knows. My server is cool with it; our desktops hate it.

Does anyone here know enough about SQLite to advise on transactions? Our current plan is to bundle each complete market into one transaction, so we commit one market at a time, instead of one commodity at a time.

Ironically, this means that if you don't bother with the listener and just use the plugin, you will get really good data atm.
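
Roughly, the per-market idea looks like this (a sketch only, assuming an autocommit connection; process_commodity() is a stand-in for whatever issues the usual INSERT OR REPLACE statements):
Code:
def commit_market(conn, station_id, commodities, process_commodity):
    # One explicit transaction per market: either the whole market lands in the
    # database or none of it does.
    conn.execute("BEGIN")
    try:
        for commodity in commodities:
            process_commodity(conn, station_id, commodity)
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise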
 
Does anyone here know enough about SQLite to advise on transactions? Our current plan is to bundle each complete market into one transaction, so we commit one market at a time, instead of one commodity at a time.
I know nothing, but based on the links you posted, what about write-ahead logging and calling the checkpoint after each market?
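
If we went that route, it would presumably just be a couple of PRAGMAs, something like this (illustrative only):
Code:
import sqlite3

conn = sqlite3.connect("tradedangerous.db")
conn.execute("PRAGMA journal_mode=WAL")          # switch the journal to write-ahead logging
# ... process one complete market here ...
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")  # fold the WAL back into the main database file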
 
I would do it like this:
  • build item list
  • start transaction
  • delete all items from station
  • insert all items with executemany
  • commit transaction

And maybe don't open and close the connection all the time. The WAL mode for the journal is also a good idea.
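
A minimal sketch of that recipe, assuming the StationItem table mentioned elsewhere in the thread (the price columns are illustrative, not the exact TD schema) and the sqlite3 module's default transaction handling, where "with conn:" commits on success and rolls back on error:
Code:
def replace_market(conn, station_id, items):
    # items: list of (item_id, demand_price, supply_price) tuples for one market.
    with conn:  # one transaction: the delete and the inserts commit together or not at all
        conn.execute("DELETE FROM StationItem WHERE station_id = ?", (station_id,))
        conn.executemany(
            "INSERT INTO StationItem (station_id, item_id, demand_price, supply_price)"
            " VALUES (?, ?, ?, ?)",
            [(station_id, item_id, demand, supply)
             for item_id, demand, supply in items])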
 
I would do it like this:
  • build item list
  • start transaction
  • delete all items from station
  • insert all items with executemany
  • commit transaction

And maybe don't open and close the connection all the time. The WAL mode for the journal is also a good idea.


I believe that eyeonus has already found that deleting either some or all of the items is significantly slower than the "insert or update on exception" method that he currently uses. I could be wrong about that.
 
Well - my fix is a fix, but apparently it makes the listener run like an extremely ploppy turd on either eyeonus's PC or mine. The server is running Linux (though ultimately it's a VM under Hyper-V) and may have a later, greater version of the SQLite library, but god knows. My server is cool with it; our desktops hate it.

Does anyone here know enough about SQLite to advise on transactions? Our current plan is to bundle each complete market into one transaction, so we commit one market at a time, instead of one commodity at a time.

Ironically, this means that if you don't bother with the listener and just use the plugin, you will get really good data atm.

SQLite runs in AutoCommit mode by default and starting a transaction just turns this off for the duration of the transaction. The net effect is that running a transaction for a small number of inserts or updates will not cause a noticeable slowdown in performance; however, stacking a lot of pending inserts or updates will require a significant amount of time to commit the transaction, as AutoCommit then works through the entire stack of changes.

Having an open transaction for 3.4 million changes when importing the listings file, for example, takes 168 s on my development PC just to carry out the commit of the transaction. However, having a transaction for every insert or update separately significantly slows down the processing by many times more than 168 s and is not really the "ideal" way to use transactions.

It seems to me that creating a transaction for each market is a much better way to go. Think of the "atomic" part of ACID with regards to a transaction. Having a single transaction for the entire listings file is nonsense as it means either the entire import works or it fails entirely. Atomic for a single market is better, either the entire market is committed or just that market fails.

If you already knew this then ignore the post :)
 
I did some testing over the last few days on the updates to tradedangerous.db from listings.csv since I still think that using the exceptions as decision logic is flawed. I used .NET since that is what I know best and set up two tests, the first being the insert-and-update-if-an-exception-is-thrown method, and the second doing it a very different way. I imported the station and item IDs as a single value

Code:
select printf('%010d', station_id) || printf('%010d', item_id) as HashKey from StationItem

and read the values into a hash table. Then I read in the listings.csv file, creating a hash key from the station and item IDs in the data; if the key existed in the hash table I added the data to a list for updates, and if it did not, the data was added to a second list for inserting and the hash key was added to the hash table. After the entire file had been split in this manner, I set the insert command string on the command object, called the prepare method, and then ran through the insert list, binding the data for each item in the list to the parameters and executing the command. Likewise the update list was iterated, the difference being that this time the prepared statement was an update.

I ran the two tests a number of times, around 20 or so, and averaged the results. The second, non-exception method took around 43% of the time that the insert-exception-update method took (747 ms vs 1738 ms).

I also did a quick test on the inserts by adding them in the form INSERT INTO StationItem VALUES (....),(....),(....) for the 12,527 inserts and then one execute. The executemany form was faster, but the time difference was not really significant (1.4 s vs 3.6 s).

The listings file contained about 3.4 million rows of data, and the timings did not include the common elements (connecting to the database, reading in the data), just the method of inserting or updating the data from the listings file.

The increase in speed was mainly due, as eyeonus has already noted, to not using the horribly slow (in CPU terms) data access. The use of a single prepared SQL command and dealing with all the inserts and updates in separate batches also cut down the execution time, the downside being that there is a hash table in memory instead.

Keeping the hash table at this point would also aid the listener since the same method for deciding to insert or update could be used instead of the exception method.

Still, the method works, does not use exceptions as decision logic, and is significantly faster. I don't know whether the savings would translate to Python, since Python does not have an equivalent hash table, although a dictionary object may work instead.
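
For what it's worth, a rough Python translation of that idea might look like the following (a set of (station_id, item_id) pairs plays the role of the .NET hash table; the price column is illustrative, not the exact TD schema):
Code:
def split_and_apply(conn, rows):
    # rows: iterable of (station_id, item_id, price) tuples parsed from listings.csv.
    existing = set(conn.execute("SELECT station_id, item_id FROM StationItem"))
    inserts, updates = [], []
    for station_id, item_id, price in rows:
        if (station_id, item_id) in existing:
            updates.append((price, station_id, item_id))
        else:
            inserts.append((station_id, item_id, price))
            existing.add((station_id, item_id))
    with conn:  # a single transaction for the whole batch
        conn.executemany(
            "INSERT INTO StationItem (station_id, item_id, price) VALUES (?, ?, ?)",
            inserts)
        conn.executemany(
            "UPDATE StationItem SET price = ? WHERE station_id = ? AND item_id = ?",
            updates)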
 
SQLite runs in AutoCommit mode by default and starting a transaction just turns this off for the duration of the transaction.

Yes, normally. But contrary to that, the standard[1] Python sqlite3 module starts a transaction for you every time you execute any kind of DML statement, and commits when you execute any non-DML SQL or manually call commit.

This automagic transaction management behaviour appears to be at the root of what causes us to lose stations. Maybe it is also automagically rolling the transaction back, but not telling us.

In any event, executing each SQL statement on its own in autocommit mode, whilst slow, fixes the problem - so clearly we need to manage our own transactions and not rely on the pysqlite module to do it for us.
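
That implicit behaviour is easy to see with a throwaway in-memory table (illustration only):
Code:
import sqlite3

conn = sqlite3.connect(":memory:")           # default isolation level
conn.execute("CREATE TABLE demo (x INTEGER)")
conn.execute("INSERT INTO demo VALUES (1)")  # the module issues an implicit BEGIN here
print(conn.in_transaction)                   # True - the insert has not been committed yet
conn.commit()
print(conn.in_transaction)                   # False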

The net effect is that running a transaction for a small number of inserts or updates will not cause a noticeable slowdown in performance; however, stacking a lot of pending inserts or updates will require a significant amount of time to commit the transaction, as AutoCommit then works through the entire stack of changes.

Which is what we see. Running all the INSERT/REPLACE statements in autocommit is horridly slow (though for some reason my server copes and just doesn't care).

Having an open transaction for 3.4 million changes when importing the listings file, for example, takes 168 s on my development PC just to carry out the commit of the transaction. However, having a transaction for every insert or update separately significantly slows down the processing by many times more than 168 s and is not really the "ideal" way to use transactions.

Precisely, and in autocommit each SQL statement is considered to be a transaction.

It seems to me that creating a transaction for each market is a much better way to go. Think of the "atomic" part of ACID with regards to a transaction. Having a single transaction for the entire listings file is nonsense as it means either the entire import works or it fails entirely. Atomic for a single market is better, either the entire market is committed or just that market fails.

As I posted above, that is our thinking also, but it's good to see that reinforced.

If you already knew this then ignore the post :)

I think we did, but again as we are on a learning curve for this, it's useful to hear back that your thinking is in line with ours.

[1] There are other third-party SQLite libraries for Python that do things differently, maybe better for us, but we always strongly prefer to use what ships as standard to avoid complication.
 
I did some testing over the last few days on the updates to tradedangerous.db from listings.csv since I still think that using the exceptions as decision logic is flawed.

Unless I misunderstood something, the premise of your test is flawed. Listings.csv is read by the plugin, never by the listener.

The listener deals with one market update at a time; even over the course of an entire day, listings.csv has orders of magnitude more data than comes from the EDDN firehose.
 
I would do it like this:
  • build item list
  • start transaction
  • delete all items from station
  • insert all items with executemany
  • commit transaction
And maybe don't open and close the connection all the time.

This feels right to me. I know Mark has concerns about the deletes, but I think it's worth looking at. It may well be that rolling our own transaction, rather than relying on pysqlite to do it for us, gives the efficiency we need for that process to complete in a timely manner.

It also completely removes the need for the INSERT/UPDATE exception logic that Mark is suspicious of and so has to be worth a try.

The WAL mode for the journal is also a good idea.

I have the server running in WAL mode already. I am also running synchronous off and temp store in memory. In the unlikely event of power failure and database corruption it is trivial to rebuild the db, so tuning for speed over reliability.
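
Concretely, that tuning amounts to a handful of PRAGMAs on the server's connection, roughly like this (trading durability for speed, since the database can always be rebuilt):
Code:
import sqlite3

conn = sqlite3.connect("tradedangerous.db")
conn.execute("PRAGMA journal_mode=WAL")     # write-ahead logging
conn.execute("PRAGMA synchronous=OFF")      # don't wait for fsync on every write
conn.execute("PRAGMA temp_store=MEMORY")    # keep temporary tables and indices in RAM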
 
This feels right to me. I know Mark has concerns about the deletes, but I think it's worth looking at. It may well be that rolling our own transaction, rather than relying on pysqlite to do it for us, gives the efficiency we need for that process to complete in a timely manner.

It also completely removes the need for the INSERT/UPDATE exception logic that Mark is suspicious of and so has to be worth a try.

I made some simple tests and the difference is not that much:
DELETE/INSERT -> 115.11146521568298 seconds
INSERT/UPDATE -> 113.50208044052124 seconds

This was with 1000 iterations over one station with 80 items.
 