Not directly an Elite question; I'm planning to use it in an Elite utility though, and I'm kinda hoping someone will know.
I have a query which is kinda nested/involved but still only needs ~150 ms total to return ~100 rows when run through the sqlite3 command line client.
If I put the exact same SQL into a C program and use the low-level sqlite3_prepare_v2 and sqlite3_step functions, it needs ~100 ms PER ROW, ~10,000 ms total.
That's ~100 ms per call to sqlite3_step; that call is what blocks for each row iterated when running this query.
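Stripped down, the API side is just the usual prepare/step loop, something like this (a simplified sketch; error handling trimmed and only some of the columns read):

C:
#include <stdio.h>
#include <sqlite3.h>

/* Sketch of the slow path: prepare the statement once, then fetch the grouped
   rows one sqlite3_step() call at a time. Each step here takes ~100 ms. */
static int run_query(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        return rc;
    }
    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
        /* one grouped result row: id, k, name, weight, norm */
        long long id = sqlite3_column_int64(stmt, 0);
        const char *name = (const char *)sqlite3_column_text(stmt, 2);
        double weight = sqlite3_column_double(stmt, 3);
        double norm = sqlite3_column_double(stmt, 4);
        printf("%lld %s %f %f\n", id, name ? name : "", weight, norm);
    }
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}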
There's nothing wrong with the query logic; it does exactly what I want: grouped results with the normalisation factors available.
The query itself looks like this...
SQL:
select dat.id, dat.k, dat.name, sum(c.ww) weight, sum(c.ww*n.c_norm) norm
from (select c_id, w_ids, 1.0/json_array_length(w_ids) ww from clist where w_ids!='[]') c
join (select c_id, 1.0/count() c_norm from clist group by c_id) n on n.c_id=c.c_id
left join json_each(c.w_ids) w
join dat on w.value=dat.id
group by dat.id
Between the aggregate functions, subqueries, joins, JSON functions and groupings, it's probably the grouping that's crippling it somehow.
I'm open to optimisations, but that's still completely independent of understanding why the API can't beat the CLI with THAT query.
For some numbers: it's ~100,000 stored rows getting grouped down to ~100 result rows.
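One thing that should at least show whether the two sides even pick the same plan is comparing ".eqp on" output in the shell against the plan the linked library chooses; on the API side that's roughly this (sketch; print_plan is just an illustrative name, and eqp_sql is the query above with EXPLAIN QUERY PLAN prepended):

C:
#include <stdio.h>
#include <sqlite3.h>

/* Sketch: dump the query plan the linked SQLite build actually picks,
   for comparison with ".eqp on" output in the sqlite3 shell.
   eqp_sql is assumed to be "EXPLAIN QUERY PLAN " + the query above. */
static void print_plan(sqlite3 *db, const char *eqp_sql)
{
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, eqp_sql, -1, &stmt, NULL) != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        return;
    }
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s\n", (const char *)sqlite3_column_text(stmt, 3)); /* plan detail text */
    sqlite3_finalize(stmt);
}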
I've tried different SQLite versions, different C compiler flags (I'm using -O3), and different SQLite compile options.
I've also tried different libraries/ports for other runtimes; they all just wrap/link the original SQLite library and inherit the same problem.
Speed differences from any of that are a few per cent at most, nowhere near the order-of-magnitude speedup the sqlite3 client shows.
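For checking which build and compile options the program actually ends up linked against (versus what the shell reports for select sqlite_version(); and PRAGMA compile_options;), something like this is enough (sketch; print_build_info is just an illustrative name):

C:
#include <stdio.h>
#include <sqlite3.h>

/* Sketch: report the SQLite build the program is really using, so it can be
   compared with "select sqlite_version();" and "PRAGMA compile_options;"
   run inside the sqlite3 shell. */
static void print_build_info(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    printf("library version: %s\n", sqlite3_libversion());
    if (sqlite3_prepare_v2(db, "PRAGMA compile_options;", -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("option: %s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }
}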
The dumb part is that I can make my program run the sqlite3 client as a subprocess, with the database file and the query as command line arguments, and then parse the human-formatted output to get the results.
It's literally 50-100x faster to go via that subprocess than to use the actual SQLite C API from a C program.
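The workaround is about as dumb as it sounds, roughly this (sketch; run_via_cli is just an illustrative name, and real code would have to escape the SQL properly and probably switch the shell to a machine-friendly output mode first):

C:
#include <stdio.h>

/* Sketch of the workaround: run the sqlite3 shell as a child process
   (popen is POSIX) and parse its output line by line. The shell's default
   "list" mode gives one pipe-separated line per result row. */
static int run_via_cli(const char *db_path, const char *sql)
{
    char cmd[4096];
    snprintf(cmd, sizeof cmd, "sqlite3 %s \"%s\"", db_path, sql);
    FILE *out = popen(cmd, "r");
    if (!out)
        return -1;
    char line[1024];
    while (fgets(line, sizeof line, out)) {
        /* split each line on '|' to recover the columns */
        fputs(line, stdout);
    }
    return pclose(out);
}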
I really don't want that to be the solution though, and I can't figure out what the operative difference is here.
