This feels right to me. I know Mark has concerns about the deletes, but I think it's worth looking at. It may well be that rolling our own transaction, rather than relying on pysqlite to do it for us, gives the efficiency we need for that process to complete in a timely manner.
It also completely removes the need for the INSERT/UPDATE exception logic that Mark is suspicious of and so has to be worth a try.
I have the server running in WAL mode already, with synchronous off and temp_store in memory. In the unlikely event that a power failure corrupts the database, it is trivial to rebuild, so I'm tuning for speed over reliability.
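For reference, a minimal sketch of that setup through Python's sqlite3 module (the database path is a placeholder, and the real server config may differ in the details):

```python
import sqlite3

conn = sqlite3.connect("TradeDangerous.db")  # placeholder path

# Tuned for speed over durability: the db is trivial to rebuild if it corrupts.
conn.execute("PRAGMA journal_mode = WAL")    # write-ahead logging
conn.execute("PRAGMA synchronous = OFF")     # don't wait on fsync
conn.execute("PRAGMA temp_store = MEMORY")   # keep temp tables/indices in RAM
```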
For the listener this is probably the better way to go, since holding a 3.4 million entry hash table in memory takes up a fair bit of space. Rolling your own transaction will, I think, be a much better idea; it is not a difficult procedure, and it gives you the fine control that the library you are currently using does not.
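Something along these lines, as a rough sketch only: `isolation_level=None` stops the sqlite3 module from issuing its implicit BEGIN/COMMIT, so you control the transaction boundaries yourself. The table and column names below are placeholders, not the actual schema.

```python
import sqlite3

# No implicit transactions: we issue BEGIN/COMMIT ourselves.
conn = sqlite3.connect("TradeDangerous.db", isolation_level=None)
cur = conn.cursor()

def apply_batch(rows):
    """Write one batch of (station_id, item_id, price) rows in a single explicit transaction."""
    cur.execute("BEGIN IMMEDIATE")   # take the write lock up front
    try:
        # INSERT OR REPLACE is one way to avoid separate INSERT-then-UPDATE
        # exception handling; a plain INSERT is enough if the table was wiped first.
        cur.executemany(
            "INSERT OR REPLACE INTO StationItem (station_id, item_id, price) VALUES (?, ?, ?)",
            rows,
        )
        cur.execute("COMMIT")
    except Exception:
        cur.execute("ROLLBACK")
        raise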
I'll write another test that does the delete-all, vacuum, and insert and see how that performs.
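Roughly what I have in mind, as a sketch only; the CSV column names and table layout here are stand-ins, not the real listings.csv / Trade Dangerous schema:

```python
import csv
import sqlite3
import time

def rows_from_listings(path):
    # Stand-in reader: yields (station_id, item_id, price) tuples from the CSV.
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield (row["station_id"], row["item_id"], row["price"])

conn = sqlite3.connect("TradeDangerous.db", isolation_level=None)
cur = conn.cursor()

start = time.time()

cur.execute("BEGIN IMMEDIATE")
cur.execute("DELETE FROM StationItem")   # wipe the table
cur.execute("COMMIT")

cur.execute("VACUUM")                    # reclaim space; must run outside a transaction

cur.execute("BEGIN IMMEDIATE")           # one transaction for the whole bulk insert
cur.executemany(
    "INSERT INTO StationItem (station_id, item_id, price) VALUES (?, ?, ?)",
    rows_from_listings("listings.csv"),
)
cur.execute("COMMIT")

print(f"delete / vacuum / insert took {time.time() - start:.1f}s")
```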
Unless I misunderstood something, the premise of your test is flawed. Listings.csv is read by the plugin, never by the listener.
The listener deals with one market update at a time; even over the course of an entire day, listings.csv has orders of magnitude more data than comes through the EDDN firehose.
Nope, I was talking about the import of the listings.csv and listings-live.csv files into the Trade Dangerous database. The comment about the listener was an aside. I should have been clearer about that.