Performance

SNKV vs SQLite on KV Workloads
Both engines use the same B-tree storage core, WAL journal, and page cache. The gap is the SQL layer — parsing, planning, and the VDBE interpreter that SNKV eliminates entirely.

1M records · Linux x86-64 · 3 runs averaged · WAL · SYNC_NORMAL
1.64× faster sequential writes (232K vs 142K ops/s)
1.85× faster sequential scan (2.89M vs 1.56M ops/s)
~2× faster updates & deletes (31K vs 16K ops/s)
1.77× faster random reads (160K vs 90K ops/s)

Operation-by-Operation

SQLite uses a WITHOUT ROWID table with a BLOB primary key — the fairest possible comparison, since the key/value pairs live directly in the table B-tree just as they do in SNKV. Both engines share identical page cache, WAL, and sync settings, so any remaining difference is purely SQL-layer cost.
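A minimal sketch of the SQLite side of this comparison, using Python's built-in sqlite3 module. The table and column names are illustrative, not taken from the benchmark source; the benchmark itself runs against an on-disk file rather than an in-memory database.

```python
import sqlite3

# In-memory database for illustration; the benchmark uses an on-disk file
# with WAL journaling and NORMAL synchronous mode.
db = sqlite3.connect(":memory:")
db.execute("PRAGMA synchronous=NORMAL")

# A WITHOUT ROWID table with a BLOB primary key stores key/value pairs
# directly in the table's B-tree, matching a KV store's single-tree layout.
db.execute("""
    CREATE TABLE kv (
        k BLOB PRIMARY KEY,
        v BLOB NOT NULL
    ) WITHOUT ROWID
""")

db.execute("INSERT INTO kv VALUES (?, ?)", (b"user:1", b"alice"))
row = db.execute("SELECT v FROM kv WHERE k = ?", (b"user:1",)).fetchone()
print(row[0])  # b'alice'
```

Without WITHOUT ROWID, SQLite would store rows in a hidden rowid B-tree and maintain a separate index B-tree for the BLOB key, doubling the storage work per operation.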

Operation            SNKV          SQLite (WITHOUT ROWID)   Speedup
Sequential Writes    232K ops/s    142K ops/s               1.64×
Random Reads         160K ops/s    90K ops/s                1.77×
Sequential Scan      2.89M ops/s   1.56M ops/s              1.85×
Exists Checks        173K ops/s    93K ops/s                1.85×
Mixed Workload*      62K ops/s     34K ops/s                1.79×
Random Updates       31K ops/s     16K ops/s                1.9×
Random Deletes       31K ops/s     16K ops/s                ~2×
Bulk Insert†         248K ops/s    211K ops/s               1.17×

* 80% read / 20% write   † 1K rows per transaction



Why SNKV Is Faster

SNKV and SQLite use the same B-tree storage engine at the bottom. Every performance difference comes from what each layer adds on top of it.

No SQL parsing
Each SQLite operation tokenises, parses, and validates a SQL string before execution. SNKV calls the B-tree API directly — zero parsing overhead per operation.
No query planner
SQLite chooses an execution plan for every statement. For KV access the plan is always "seek the primary key" — SNKV does exactly that without the planning step.
No VDBE interpreter
SQLite compiles queries to bytecode run by a virtual machine. SNKV calls compiled C functions. No bytecode dispatch loop on the critical path.
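The bytecode layer is easy to see for yourself: SQLite's EXPLAIN keyword dumps the VDBE program generated for a statement. A quick sketch using Python's built-in sqlite3 module (table name illustrative; the exact opcode sequence varies by SQLite version):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (k BLOB PRIMARY KEY, v BLOB) WITHOUT ROWID")

# EXPLAIN returns the VDBE opcodes SQLite compiles a statement into; every
# query, however simple, runs through this bytecode dispatch loop.
# Each result row is (addr, opcode, p1, p2, p3, p4, p5, comment).
ops = [row[1] for row in db.execute("EXPLAIN SELECT v FROM kv WHERE k = x'00'")]
print(ops)  # opcode names such as 'Init' and 'Halt'; the full list varies
```

Even a single-row primary-key lookup compiles to a dozen-odd opcodes. SNKV's equivalent is one compiled C call, which is the overhead this row of the benchmark measures.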
Cached read cursor
SNKV keeps a persistent read cursor open per column family. SQLite opens and closes a new cursor for each statement. Cursor open/close touches the pager — SNKV skips it on every read.
Updates & deletes: ~2× gain
These show the largest gap because each SQLite call parses, plans, and executes a full UPDATE or DELETE statement. SNKV is a direct B-tree seek + in-place write.
Bulk insert: 1.17× gain
The smallest gap: both engines commit one large B-tree transaction per batch, so the shared storage cost dominates and the per-statement SQL overhead is amortised. Prepared statements narrow the gap further; the underlying storage cost is nearly identical.


Run It Yourself

Both benchmarks are open source. Clone and build to reproduce results on your hardware.

C benchmark — SNKV

# From the SNKV repo root
make test
./tests/test_benchmark

C benchmark — SQLite comparison

# https://github.com/hash-anu/sqllite-benchmark-kv
git clone https://github.com/hash-anu/sqllite-benchmark-kv
cd sqllite-benchmark-kv
make
./sqlite_benchmark

Python benchmark — SNKV vs diskcache

# From the SNKV repo root
pip install diskcache snkv
python python/examples/benchmark_diskcache.py

Additional comparisons against LMDB and RocksDB:
LMDB — github.com/hash-anu/lmdb-benchmark
RocksDB — github.com/hash-anu/rocksdb-benchmark