This library is wild: https://github.com/cvilsmeier/sqinn
SQLite over stdin, to a subprocess, and it's fast!
It's wild to me that stdin/stdout is apparently significantly faster than using the API in so many cases.
That's the kind of result that makes me wonder if there is something odd with the benchmarking.
That's an interesting thought. I wonder.
I wonder if the following things make the C-driven version slower...
- prepare the send buffers (SQLite side)
- prepare the receive buffers (Go side)
- do the call
- get the received data into Go buffers of some kind
- free up the send buffers (happens automatically)
- free up the receive buffers (semi-automatically in Go).
When using stdin/stdout, the system looks after the send/receive buffers. It's simply reading/writing them. No allocation is needed. The stream can be as big or as small as wanted/needed. The OS looks after the integrity of the streams, and these are probably fairly well-tested subsystems on most operating systems.
stdin/stdout becomes a "library" for "fast data transfer".
Pretty neat.
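For illustration, here is a minimal Go sketch of the subprocess-over-stdin/stdout pattern being discussed. The line-based request/response "protocol" is made up for the example and is not sqinn's actual wire format; the point is only that the OS pipe buffers do the buffering work, with no cgo and no manual buffer management.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Spawn a child process; "cat" simply echoes stdin back on stdout,
	// standing in for a real worker such as the sqinn binary.
	cmd := exec.Command("cat")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Write a "request"; the kernel pipe buffer holds it until the child reads it.
	fmt.Fprintln(stdin, "SELECT 1")

	// Read the "response" line back.
	resp, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print("child answered: ", resp)

	stdin.Close()
	cmd.Wait()
}
```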
FWIW, the Tailscale fork of Crawshaw's library has a good number of allocation removals and other optimizations, but cgo is still expensive.
And presumably that implies there's OS context switching going on underneath.
Still, I can see a few downsides. Though sqinn-go is pure Go, the forked process is pure C, so you'll need to either download a prebuilt binary (Linux and Windows only at the moment) or build it yourself. This rather defeats Go's killer feature of single-binary distribution.
Still, I agree it's wild that it's so fast.
Note that the author of the benchmark is also the author of this library.
I used to use sqlite3 over stdio to read VoIP SQLite data. It's difficult or impossible to get a compatible SQLite version, and cgo is also hard to use. I want to read the SQLite data on the server, and stdio is the only choice.
That's an interesting approach, but doesn't it mean that if you want multiple connections at the same time, you'd need multiple subprocesses?
Perhaps I misunderstand though.
You are correct: multiple connections mean multiple sqinn subprocesses.
Personally I use SQLite in production environments and I don't regret it at all. I don't use Go (I develop in Python, mainly Django) and it has been the best decision ever: no management overhead, easy to back up, no need for complicated environments, etc.
I feel like SQLite is undervalued. I do agree that in particular cases it might not be the best fit, but more often than not SQLite is more than enough database. Using Postgres or MySQL just for the sake of being "production grade" is never a good idea; SQLite is also production grade. Looking at the statistics (see the sqinn benchmarks), I would say that 90% of the internet could use SQLite without any issues, and with only benefits.
The main reason I use Postgres instead of SQLite is that I have multiple processes accessing the database, often one web service for the API/website and a worker running in the background doing heavy tasks (e.g. image processing). Both need access to the database, and SQLite will run into locking issues.
How do you overcome this with SQLite and Django?
AFAIK the fix for that is to have multiple read-only connections and one write-only connection.
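In Go terms (not Django), a sketch of that split could look like the following. The driver import, DSN, and pool sizes are illustrative assumptions, using the modernc.org/sqlite driver mentioned elsewhere in the thread.

```go
package main

import (
	"database/sql"
	"log"
	"runtime"

	_ "modernc.org/sqlite" // pure-Go driver; registers as "sqlite"
)

// openSplit opens two handles over the same file: a writer capped at one
// connection (so writes are serialized by the application) and a reader pool
// that can fan out, especially once WAL mode is enabled.
func openSplit(path string) (readDB, writeDB *sql.DB, err error) {
	writeDB, err = sql.Open("sqlite", path)
	if err != nil {
		return nil, nil, err
	}
	writeDB.SetMaxOpenConns(1)

	readDB, err = sql.Open("sqlite", path)
	if err != nil {
		return nil, nil, err
	}
	readDB.SetMaxOpenConns(runtime.NumCPU())
	return readDB, writeDB, nil
}

func main() {
	readDB, writeDB, err := openSplit("app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer readDB.Close()
	defer writeDB.Close()
	log.Println("read/write handles ready")
}
```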
Yes, by enabling the write-ahead log feature: https://sqlite.org/wal.html
It's on by default in many SQLite drivers because it really is the best default, but it isn't on by default in upstream SQLite even though it's been available for ages now.
Sure, but if you're dealing with WAL logs, why not just go with Postgres? Then you also get a port you can connect to from remote machines if you need it.
> "dealing with WAL"
What's there to deal with? You turn it on with a pragma and forget about it.
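For completeness, the pragma in question, shown here from Go (a minimal sketch; the "sqlite" driver name assumes modernc.org/sqlite):

```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// journal_mode=WAL is persistent: it is recorded in the database file,
	// so it only needs to be set once per database.
	var mode string
	if err := db.QueryRow("PRAGMA journal_mode=WAL;").Scan(&mode); err != nil {
		log.Fatal(err)
	}
	log.Println("journal_mode is now", mode)
}
```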
Sure, but once you have WAL logs, you suddenly have a more heavyweight setup. To back it up, you'll want to back up those WAL logs to achieve proper point-in-time recovery, and so on. My point is, you're now bolting extra stuff onto it to do things that Postgres can do (and Postgres can be pretty lightweight). Not disrespecting SQLite, it's still one of my favorite DBs.
What? Why are you backing up the WAL?
Now the WAL content is rolled into your new backup file. Stick a timestamp in the backup file name and run this as a cron job every N minutes and you have all the recovery you need. Another one-liner to sync to S3 and you're all set.

Edit: And just to clarify, that command can be run on a live DB as it's being used by your app server. SQLite handles external concurrent readers just fine.
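The exact one-liner being referenced isn't reproduced here; as a hedged illustration, one way to get the same effect (a complete backup of a live WAL-mode database, with committed WAL content rolled in) is SQLite's VACUUM INTO, shown below from Go. The file names and driver import are made up for the example.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Timestamped destination file, e.g. backup-20240101-120000.db
	dest := fmt.Sprintf("backup-%s.db", time.Now().Format("20060102-150405"))

	// VACUUM INTO writes a complete, compacted copy of the database
	// (including committed WAL content) into a new file, and it is safe
	// to run while the database is in use.
	if _, err := db.Exec(fmt.Sprintf("VACUUM INTO '%s'", dest)); err != nil {
		log.Fatal(err)
	}
	log.Println("backup written to", dest)
}
```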
I've been using the modernc driver for a few years in https://github.com/bbkane/enventory . It's worked perfectly with no drama. Combined with https://sqlc.dev/, I've been very happy writing (small) database applications in Go.
This is interesting and very timely for me. Just this week I was building a small Go system that uses SQLite. I needed to cross-compile it for FreeBSD on a Mac and ran into issues with CGO. The easiest fix seemed to be to switch from a CGO based library to a pure Go one.
I don't know about FreeBSD, but at least for Linux I started using the Zig toolchain and it's wonderful. https://zig.news/kristoff/building-sqlite-with-cgo-for-every...
I hit the same issue, building on a Mac to deploy to Linux.
I added this build step before `scp`ing the binary to the server:
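The commenter's exact command isn't shown; a hypothetical version of such a build step, based on the zig-cc approach from the article linked above (the target triple and output name are assumptions), might look like:

```sh
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 \
  CC="zig cc -target x86_64-linux-musl" \
  go build -o myapp .
```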
Looking at the article, I should give the modernc SQLite driver a try.

Nit: "For benchmarks I used the following libraries: <snip>". This is begging to be a table.
You could do the edit for him and create a pull request. I did that for a really small mistake in the README.md of sqinn.
I see what you mean; there are some categories there (cgo-based or not) that lend themselves to quick understanding via a table.
I had the same thought. A table indeed!
You're right. A table. I've included one.
For a project a while back, I needed to turn many-gigabyte Postgres CSV table dumps into SQLite databases. I turned to Go as it's a great language for easy parallelism, combined with enough memory-layout control to get relatively good performance.
I quickly ruled out using database/sql drivers, as the indirection through interface types added a bunch of overhead and stymied my attempts at a reasonable memory layout. For my use case, I found the crawshaw driver performed the best, but I ended up forking it, as well as the Golang standard library CSV parser, as I found defensive copying & allocation was the largest bottleneck. I ended up cycling several very large arenas among a CSV parser thread that filled the arena with column bytes and several threads writing to different temporary SQLite databases. Then at the end I ATTACHed them together and copied them into one big file (idk exactly why this is faster, but my profiles showed most CPU time spent in SQLite doing query-binding things, so MOAR CORES).
One notable optimization was exposing a way to bind borrowed bytes to query parameters without inducing a copy in either the Golang caller code or the SQLite library code. The crawshaw driver upstream only exposes sqlite3_bind_blob with SQLITE_TRANSIENT mode, which tells SQLite to copy the input to a private allocation before returning from the sqlite3_bind* call. I added a version that passes SQLITE_STATIC, which means "trust me, I won't touch these bytes until the query is done, and I'll free them afterwards". This is safe in Rust, whose "borrow" and "lifetime" concepts model this perfectly, but I guess in Golang it's dicey enough not to expose in your public package.
Here's the relevant commit in my fork: https://github.com/crawshaw/sqlite/commit/82ad4f03528e8fdc6a...
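For readers who don't want to dig through the commit, here is a rough sketch (not the fork's actual API; the package and function names are made up) of what a SQLITE_STATIC bind looks like at the cgo level, and why it puts the burden on the caller:

```go
package bindstatic

/*
#cgo LDFLAGS: -lsqlite3
#include <sqlite3.h>

// SQLITE_STATIC expands to a function-pointer cast, so it can't be referenced
// directly from Go; wrap the call in a tiny C helper instead.
static int bind_blob_static(sqlite3_stmt *stmt, int idx, const void *p, int n) {
	return sqlite3_bind_blob(stmt, idx, p, n, SQLITE_STATIC);
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// BindBlobStatic binds buf to parameter idx without copying. The caller must
// keep buf alive and unmodified until the statement is reset or finalized;
// SQLite retains the pointer after this call returns, which skirts the cgo
// pointer-passing rules. That is exactly the "dicey" part mentioned above.
func BindBlobStatic(stmt *C.sqlite3_stmt, idx int, buf []byte) error {
	var p unsafe.Pointer
	if len(buf) > 0 {
		p = unsafe.Pointer(&buf[0])
	}
	if rc := C.bind_blob_static(stmt, C.int(idx), p, C.int(len(buf))); rc != C.SQLITE_OK {
		return fmt.Errorf("sqlite3_bind_blob(SQLITE_STATIC) failed: rc=%d", int(rc))
	}
	return nil
}
```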
I'm curious how OP's https://github.com/cvilsmeier/sqinn would fare, I'm somewhat sus about copying 200GB to stdin but the benchmark results are pretty good so ¯\_(ツ)_/¯
There may be an opportunity to switch from TRANSIENT to STATIC in sqinn but I didn't read deeply enough to follow what the current memory approach is. https://github.com/cvilsmeier/sqinn/blob/a88b3df6c89f0531e39...
According to my private tests, using STATIC instead of TRANSIENT does not yield a significant performance boost (maybe 1 to 5 percent, if at all).
I mean, I love to see a 0.5% performance boost in my code at work. I would be all over a 5% boost; I think anything above 1% is significant!
Wouldn't DuckDB be a good fit for this?
I have done a lot of data migrations between SQLite and Postgres and such using DuckDB. It works great but does not seem to perform well: I'd simply leave an instance churning through the data, whereas a specialized small CLI tool would probably work a lot faster.
Excellent evaluation. From reading the code, it appears that the unit for the numbers columns is usually milliseconds (ms).
It also looks like sqinn is the clear leader for most, but not all, of the benchmarks.
Even though it's "not scientific", it's still very useful as a baseline - thanks for making the effort and publishing your results!
Also took a look at monibot.io, looks cool.
Thank you.
I apologize for the side question, but what are people using for a Golang SQLite cipher/encryption combination?