Purpose

This document explores how multiple clients can write to SQLite databases concurrently, evaluating native SQLite capabilities and third-party replication tools including Litestream, LiteFS, and alternatives.

Key Findings

Native SQLite Concurrent Writes

SQLite’s fundamental architecture uses a single-writer, multiple-reader model:

  • WAL Mode: Write-Ahead Logging mode allows readers and writers to operate concurrently; readers don’t block writers and writers don’t block readers
  • Write Serialization: Only one writer can hold the write lock at a time; other write attempts will fail with SQLITE_BUSY errors
  • Practical Workaround: Use WAL mode with a generous busy timeout so writers take turns waiting for the write lock (most write transactions complete in milliseconds); see the sketch below
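
For illustration, here is a minimal Go sketch of that setup, assuming the mattn/go-sqlite3 driver and a placeholder database path; the connection-string options shown are specific to that driver (other drivers spell them differently, or you can issue the PRAGMAs directly).

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // driver choice is an assumption; any SQLite driver works
)

func main() {
	// "_journal_mode" and "_busy_timeout" are mattn/go-sqlite3 connection
	// options: WAL lets readers and writers run concurrently, and the busy
	// timeout makes a writer wait up to 5s for the write lock instead of
	// failing immediately with SQLITE_BUSY.
	db, err := sql.Open("sqlite3", "file:app.db?_journal_mode=WAL&_busy_timeout=5000")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
		log.Fatal(err)
	}

	// Concurrent goroutines can now issue short write transactions; each one
	// simply waits its turn for the write lock rather than erroring out.
	if _, err := db.Exec(`INSERT INTO events (body) VALUES (?)`, "hello"); err != nil {
		log.Fatal(err)
	}
}
```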

BEGIN CONCURRENT (Experimental)

SQLite has an experimental BEGIN CONCURRENT feature (not yet in mainline):

  • Multiple Writers: Allows multiple write transactions to process simultaneously in WAL or WAL2 mode
  • Deferred Locking: Database locking is deferred until COMMIT, allowing concurrent transaction processing
  • Serialized Commits: COMMIT commands are still serialized to maintain consistency
  • Status: Experimental branch only, not available in official releases (a usage sketch follows below)
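
If a build from that branch is available, usage would look roughly like the Go sketch below. The retry loop, table name, and connection handling are illustrative assumptions, not an official API; the key behaviors are that BEGIN CONCURRENT defers locking and that conflicting transactions are reported at COMMIT.

```go
package sketch

import (
	"context"
	"database/sql"
	"errors"
)

// insertConcurrent is a sketch only: it requires SQLite compiled from the
// begin-concurrent branch, which stock drivers do not ship. Table and column
// names are illustrative.
func insertConcurrent(ctx context.Context, db *sql.DB, body string) error {
	// BEGIN CONCURRENT and COMMIT must run on the same connection.
	conn, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer conn.Close()

	for attempt := 0; attempt < 5; attempt++ {
		if _, err := conn.ExecContext(ctx, "BEGIN CONCURRENT"); err != nil {
			return err
		}
		if _, err := conn.ExecContext(ctx,
			"INSERT INTO events (body) VALUES (?)", body); err != nil {
			_, _ = conn.ExecContext(ctx, "ROLLBACK")
			return err
		}
		// Locking is deferred until COMMIT; a page-level conflict with another
		// concurrent writer is reported here as a busy/snapshot error, so the
		// whole transaction is rolled back and retried.
		if _, err := conn.ExecContext(ctx, "COMMIT"); err == nil {
			return nil
		}
		_, _ = conn.ExecContext(ctx, "ROLLBACK")
	}
	return errors.New("giving up after repeated commit conflicts")
}
```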

Turso’s MVCC Implementation

Turso (a commercial SQLite platform) shipped production-ready concurrent writes in 2025:

  • Performance: Up to 4x write throughput compared to standard SQLite
  • No SQLITE_BUSY: Eliminates the common blocking error
  • MVCC-based: Uses Multi-Version Concurrency Control for true concurrent writes

Litestream Multi-Writer Support

Answer: No, Litestream does NOT support multiple writers.

What Litestream Does

  • Disaster Recovery: Continuously streams SQLite database changes to S3-compatible storage
  • One Writer Only: Designed for single-writer, multiple-reader architectures
  • Read Replicas: Supports live read-only replicas via HTTP streaming
  • WAL Mode Required: Only works with SQLite WAL journaling mode

Litestream Architecture Limitations

  • Multiple applications replicating to the same bucket/path will cause restore failures
  • Requires periodic write locks during checkpointing (may conflict with application writes)
  • Best suited for backup/recovery, not distributed multi-writer scenarios
  • Future multi-writer support is planned but not available yet (as of 2025)

When to Use Litestream

  • Single application instance writing to SQLite
  • Need point-in-time recovery and disaster recovery
  • Want continuous backup to cloud storage
  • Creating read-only replicas for scaling reads

Alternative Solutions for Multiple Writers

LiteFS (Fly.io)

Architecture: Distributed file system with transparent SQLite replication

  • Primary/Replica Model: One primary node, automatic replication to other nodes
  • Write Forwarding: Non-primary nodes can accept writes and forward them to the primary (see the sketch after this list)
  • Transparent: Minimal application changes required
  • Status: Production-ready but pre-1.0 (APIs may change)
  • Limitation: Still fundamentally single-writer at the primary level
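
As a rough illustration of how an application can tell where writes should go, the Go sketch below checks for LiteFS’s “.primary” file inside the FUSE mount (present on replicas, absent on the primary). The mount path is an assumption, and LiteFS also ships an optional HTTP proxy that can handle forwarding automatically.

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// primaryHost reports whether this node is the LiteFS primary and, if not,
// which host currently is, by reading the ".primary" file LiteFS exposes in
// its mount directory on replica nodes.
func primaryHost(mountDir string) (isPrimary bool, host string, err error) {
	b, err := os.ReadFile(mountDir + "/.primary")
	if errors.Is(err, fs.ErrNotExist) {
		return true, "", nil // no .primary file: this node is the primary
	}
	if err != nil {
		return false, "", err
	}
	return false, strings.TrimSpace(string(b)), nil
}

func main() {
	isPrimary, host, err := primaryHost("/litefs") // "/litefs" is a placeholder mount point
	if err != nil {
		panic(err)
	}
	if isPrimary {
		fmt.Println("this node is the primary; write to the SQLite database directly")
	} else {
		// In an HTTP app you would redirect or proxy write requests to `host`.
		fmt.Println("forward writes to primary:", host)
	}
}
```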

Best For:

  • Multi-region deployments with transparent failover
  • Applications that can tolerate write forwarding latency
  • Teams wanting SQLite with minimal operational complexity

rqlite

Architecture: Distributed database with Raft consensus

  • True Clustering: Full distributed consensus for writes
  • Automatic Failover: Built-in high availability
  • Network Protocol: Exposes an HTTP API rather than direct SQLite file access (see the sketch after this list)
  • Trade-off: Requires application changes (not drop-in SQLite replacement)
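
For a sense of what the application change looks like, the Go sketch below posts a parameterized statement to rqlite’s /db/execute endpoint; 4001 is rqlite’s default HTTP port, and the host and table are assumptions.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// rqlite's data API accepts a JSON array of statements; a parameterized
	// statement is itself an array of [statement, params...].
	stmts := [][]any{
		{"INSERT INTO events (body) VALUES (?)", "hello from rqlite"},
	}
	body, _ := json.Marshal(stmts)

	resp, err := http.Post(
		"http://localhost:4001/db/execute",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // JSON result, including any per-statement errors
}
```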

Best For:

  • Write-heavy applications requiring true distributed writes
  • Applications needing automatic failover
  • Teams comfortable with clustering complexity

dqlite

Architecture: Distributed SQLite using Raft consensus

  • C Library: Can be embedded in Go/C applications
  • Canonical’s Choice: Powers LXD clustering
  • Raft-based: Similar to rqlite but at library level
  • Integration: Requires code changes to use the distributed API (see the sketch after this list)
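
A rough sketch of that integration using go-dqlite’s high-level app package follows; the addresses, data directory, and database name are assumptions, and exact option names can differ between go-dqlite versions.

```go
package main

import (
	"context"
	"log"

	"github.com/canonical/go-dqlite/app"
)

func main() {
	// Each node runs the same code; the cluster elects a leader via Raft.
	node, err := app.New("/var/lib/myapp",
		app.WithAddress("10.0.0.1:9001"),           // this node's dqlite address
		app.WithCluster([]string{"10.0.0.2:9001"}), // existing nodes to join
	)
	if err != nil {
		log.Fatal(err)
	}
	defer node.Close()

	ctx := context.Background()
	if err := node.Ready(ctx); err != nil {
		log.Fatal(err)
	}

	// Open returns a standard *sql.DB backed by the replicated database.
	db, err := node.Open(ctx, "mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
		log.Fatal(err)
	}
}
```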

Best For:

  • Embedded systems requiring HA
  • Control plane databases in distributed systems
  • Applications already using Go or C

Architecture Decision Guide

Use Standard SQLite + WAL Mode When:

  • Single application instance
  • Write volume is moderate (writers can take turns)
  • Simplicity is paramount
  • Sub-millisecond latency required

Use Litestream When:

  • Single writer with disaster recovery needs
  • Continuous backup to S3/cloud storage required
  • Read replicas needed for scaling reads
  • Point-in-time recovery is important

Use LiteFS When:

  • Multi-region deployment required
  • Want transparent SQLite distribution
  • Can tolerate write forwarding latency
  • Minimal application changes preferred

Use rqlite/dqlite When:

  • True distributed writes required
  • Write-heavy, globally distributed workload
  • Automatic failover is critical
  • Can modify application to use network protocol

Consider PostgreSQL When:

  • Write concurrency is primary requirement
  • Need mature replication (multi-primary setups are possible via extensions)
  • Application complexity justifies operational overhead
  • ACID guarantees across distributed writes required

Recommendations

For most use cases requiring multiple writers:

  1. Re-evaluate the requirement: Can you partition writes? Use job queues? Funnel writes through a single service? (A single-writer queue sketch follows this list.)
  2. Start with WAL mode: Try standard SQLite with WAL and proper timeout handling
  3. Consider LiteFS: If you need distribution, LiteFS provides the smoothest path
  4. Watch Turso: Their MVCC implementation may become the production standard
  5. When in doubt: PostgreSQL has solved concurrent writes for decades
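
As an example of point 1, the Go sketch below funnels all writes through one goroutine fed by a channel, so SQLite only ever sees a single writer; the queue size, table, and driver choice are assumptions.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // driver choice is an assumption
)

// writeReq is one unit of work for the single writer goroutine.
type writeReq struct {
	query string
	args  []any
	done  chan error
}

// startWriter funnels all writes through one goroutine, so the application
// never has two writers contending for SQLite's write lock.
func startWriter(db *sql.DB) chan<- writeReq {
	ch := make(chan writeReq, 128)
	go func() {
		for req := range ch {
			_, err := db.Exec(req.query, req.args...)
			req.done <- err
		}
	}()
	return ch
}

func main() {
	db, err := sql.Open("sqlite3", "app.db") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)`); err != nil {
		log.Fatal(err)
	}

	writes := startWriter(db)

	// Any goroutine can submit a write; only the writer goroutine touches the DB.
	done := make(chan error, 1)
	writes <- writeReq{
		query: "INSERT INTO events (body) VALUES (?)",
		args:  []any{"queued write"},
		done:  done,
	}
	if err := <-done; err != nil {
		log.Fatal(err)
	}
}
```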

Sources

  1. Beyond the Single-Writer Limitation with Turso’s Concurrent Writes
  2. SQLite: Begin Concurrent
  3. Write-Ahead Logging
  4. SQLite concurrent writes and “database is locked” errors
  5. Tips & Caveats - Litestream
  6. How to setup for multi node? - Litestream Discussion
  7. Test concurrent write support - Litestream Issue
  8. LiteFS - Distributed SQLite
  9. LiteFS vs Litestream vs rqlite vs dqlite on VPS in 2025
  10. I Migrated from a Postgres Cluster to Distributed SQLite with LiteFS