# Database replication
QuestDB Enterprise offers primary-replica replication with eventual consistency.
This document covers the replication architecture, key configuration, the roadmap, and current limitations.
For a self-hosted setup guide and configuration details, see the Replication page within Operations.
QuestDB Cloud customers can drive replication and point-in-time recovery via the Cloud UI.
## Architecture

Replication in QuestDB operates by designating a single "primary" database that uploads Write Ahead Log (WAL) files to a remote object store. These files can be downloaded and applied by any number of "replica" instances, either continuously or at a later time.
Primary instances offer the same features as stand-alone instances, while replicas are read-only and designed to distribute data and load across multiple locations and instances.
Two common availability strategies are:

- Hot availability: a read-only replica operates alongside the primary node, enabling quick switch-over in case of primary failure. Multiple replicas can follow a single primary. This approach is faster but more expensive.
- Cold availability: a new primary node can be reconstructed from the latest snapshot and WAL files in the object store, without a constantly running replica. This approach is slower but often cheaper.
Replicating instances communicate via the object store rather than direct network connections, resulting in a decoupled, flexible architecture with high availability.
Key benefits include:
- Replica instances do not impact the performance of the primary instance, regardless of their number
- Replication data held in the object store enables point-in-time recovery, restoring an instance to a specific timestamp; this works in combination with nightly full-database backup snapshots
The architecture consists of:
- A primary database instance
- An object store, such as AWS S3 or Azure Blob Storage
- Any number of replica instances
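
As a minimal sketch, this architecture maps onto two `server.conf` variants that point at the same object store; the key names follow the QuestDB Enterprise configuration format, but confirm exact values against the Replication setup guide:

```ini
# server.conf on the primary instance:
# uploads WAL files to the shared object store
replication.role=primary
replication.object.store=s3::bucket=somename;root=a/path;region=us-east-1

# server.conf on each replica instance:
# downloads and applies WAL files from the same store (read-only)
replication.role=replica
replication.object.store=s3::bucket=somename;root=a/path;region=us-east-1
```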

## Supported Object Stores

The Write Ahead Log (WAL) object storage layer used in QuestDB replication is designed to be pluggable and extensible.
We provide native support for AWS S3 and Azure Blob Storage.
HDFS and distributed file systems such as NFS will follow closely. We also plan to support other object stores, such as Google Cloud Storage.

Something missing? Want to see it sooner? Contact us!
#
Object Store ConfigurationA replication stream is a section of an object store that holds a window of data
for replication. It isolates different replicating database configurations and
allows them to live side-by-side without interference. This allows them to share
the same object store service, defined in the replication.object.store
key.
An example of a replication stream object store configuration is:
replication.object.store=s3::bucket=somename;root=a/path;region=us-east-1
.
The string requires a prefix of `s3` or `azblob`, followed by a semicolon-delimited set of key-value pairs.
See the Replication setup guide for direct examples.
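
As an illustration, a hypothetical Azure Blob Storage string would follow the same prefix and key-value pattern; the key names below (`endpoint`, `container`, `account_name`, `account_key`) are assumptions for the sketch, so verify them against the setup guide:

```ini
# Hypothetical azblob configuration: key names are assumptions,
# not confirmed syntax; consult the Replication setup guide.
replication.object.store=azblob::endpoint=https://myaccount.blob.core.windows.net;container=somename;root=a/path;account_name=myaccount;account_key=<key>
```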

In addition, the same object store can be re-used for multiple replication clusters. To do so, provide a unique `DB_INSTANCE_NAME` value for each cluster.
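
For example, two clusters might share one bucket by namespacing their data, here sketched by giving each instance its own root path; how `DB_INSTANCE_NAME` is wired up (environment variable versus configuration key) is deployment-specific, so treat the details below as assumptions:

```ini
# Cluster A (DB_INSTANCE_NAME=cluster-a, assumed): own root within the bucket
replication.object.store=s3::bucket=somename;root=cluster-a;region=us-east-1

# Cluster B (DB_INSTANCE_NAME=cluster-b, assumed): same bucket, no interference
replication.object.store=s3::bucket=somename;root=cluster-b;region=us-east-1
```
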
## Limitations

The replication and point-in-time recovery features rely on the presence of a table's Write Ahead Log (WAL) and, for now, apply only to time-series tables: WAL tables have a designated timestamp column and are partitioned by it.
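
For example, a table created with a designated timestamp, partitioning, and WAL enabled is eligible for replication; the table and column names here are purely illustrative:

```sql
-- Illustrative WAL table: the designated timestamp (ts) and partitioning
-- make it a time-series table that replication can track.
CREATE TABLE trades (
    ts TIMESTAMP,
    symbol SYMBOL,
    price DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY WAL;
```
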
This covers all tables except those used for referential data (i.e. the lookup side of SQL JOIN operations), which lack a designated timestamp. Such tables tend to contain semi-static reference data that must be kept up to date manually and is often inserted only during the initial database setup phase.
This limitation will be lifted in the future.