Configuration
This page describes methods for configuring QuestDB server settings.
Configuration can be set either:
- In the server.conf configuration file available in the root directory
- Using environment variables
When a key is absent from both the config file and the environment variables, the default value is used.
For Windows users
When entering path values, use either \\ or / instead of the native path separator \.
- C:\\path\\to\\file\\path (valid)
- C:/path/to/file (valid)
- C:\path\to\file (invalid)
The single backslash is interpreted as the start of an escape sequence within Java properties.
Environment variables
All settings in the configuration file can be set or overridden using
environment variables. If a key is set in both the server.conf
file and via an
environment variable, the environment variable will take precedence and the
value in the server configuration file will be ignored.
To make these configuration settings available to QuestDB via environment variables, they must be in the following format:
QDB_<KEY_OF_THE_PROPERTY>
Where <KEY_OF_THE_PROPERTY> is equal to the configuration key name. To properly format a server.conf key as an environment variable it must have:
- the QDB_ prefix
- uppercase characters
- all . period characters replaced with _ underscores
For example, the server configuration key for shared workers must be passed as described below:
server.conf key | env var |
---|---|
shared.worker.count | QDB_SHARED_WORKER_COUNT |
QuestDB applies these configuration settings on startup; a running instance must be restarted for configuration changes to take effect.
Examples
The following configuration property customizes the number of worker threads shared across the application.
In server.conf:
shared.worker.count=5
As an environment variable:
export QDB_SHARED_WORKER_COUNT=5
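The same environment variable can also be passed to a Docker container with the -e flag. A minimal sketch, assuming the standard questdb/questdb image and default ports:
docker run -p 9000:9000 -p 8812:8812 \
  -e QDB_SHARED_WORKER_COUNT=5 \
  questdb/questdb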
Keys and default values
This section lists the configuration keys available to QuestDB by topic or subsystem. Parameters for specifying buffer and memory page sizes are provided in the format n<unit>, where <unit> can be one of the following:
- m for MB
- k for kB
For example:
http.net.connection.sndbuf=2m
Shared worker
Shared worker threads service SQL execution subsystems and (in the default configuration) every other subsystem.
Property | Default | Description |
---|---|---|
shared.worker.count | | Number of worker threads shared across the application. Increasing this number will increase parallelism in the application at the expense of CPU resources. |
shared.worker.affinity | | Comma-delimited list of CPU ids, one per thread specified in shared.worker.count. |
shared.worker.haltOnError | false | Flag that indicates if the worker thread must stop when an unexpected error occurs. |
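As an illustration, a minimal server.conf sketch that sizes the shared pool and pins each thread to a core; the CPU ids below are arbitrary examples and should match your hardware:
shared.worker.count=4
shared.worker.affinity=0,1,2,3
shared.worker.haltOnError=false
Note that shared.worker.affinity must list exactly one CPU id per thread configured in shared.worker.count.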
HTTP server
This section describes configuration settings for the Web Console and the REST
API available by default on port 9000
. For details on the use of this
component, refer to the web console documentation page.
Property | Default | Description |
---|---|---|
http.enabled | true | Enable or disable HTTP server. |
http.bind.to | 0.0.0.0:9000 | IP address and port of HTTP server. A value of |
http.user | N/A | Username for HTTP Basic Authentication in QuestDB Open Source. QuestDB Enterprise Edition supports more advanced authentication mechanisms: RBAC |
http.password | N/A | Password for HTTP Basic Authentication in QuestDB Open Source. QuestDB Enterprise Edition supports more advanced authentication mechanisms: RBAC |
http.net.connection.limit | 64 | The maximum number of simultaneous TCP connections to the HTTP server. This value is intended to control server memory consumption. |
http.net.connection.timeout | 300000 | TCP connection idle timeout in milliseconds. Connection is closed by HTTP server when this timeout lapses. |
http.net.connection.sndbuf | 2M | Maximum send buffer size on each TCP socket. If this value is |
http.net.connection.rcvbuf | 2M | Maximum receive buffer size on each TCP socket. If this value is |
http.net.connection.hint | false | Windows specific flag to overcome OS limitations on TCP backlog size |
http.net.connection.queue.timeout | 5000 | Amount of time in milliseconds a connection can wait in the listen backlog queue before it is refused. Connections will be aggressively removed from the backlog until the active connection limit is breached. |
http.net.bind.to | 0.0.0.0:9000 | IP address and port of HTTP server. |
http.connection.pool.initial.capacity | 4 | Initial size of pool of reusable objects that hold connection state. The pool should be configured to maximum realistic load so that it does not resize at runtime. |
http.connection.string.pool.capacity | 128 | Initial size of the string pool shared by the HTTP header and multipart content parsers. |
http.multipart.header.buffer.size | 512 | Buffer size in bytes used by the HTTP multipart content parser. |
http.multipart.idle.spin.count | 10000 | How long the code accumulates incoming data chunks for column and delimiter analysis. |
http.receive.buffer.size | 1M | Size of receive buffer. |
http.request.header.buffer.size | 64K | Size of internal buffer allocated for HTTP request headers. The value is rounded up to the nearest power of 2. When an HTTP request contains headers that exceed the buffer size, the server disconnects the client and logs an HTTP error. |
http.response.header.buffer.size | 32K | Size of the internal response buffer. The value will be rounded up to the nearest power of 2. The buffer size should be large enough to accommodate max size of server response headers. |
http.worker.count | 0 | Number of threads in private worker pool. When |
http.worker.affinity | | Comma separated list of CPU core indexes. The number of items in this list must be equal to the worker count. |
http.worker.haltOnError | false | Changing the default value is strongly discouraged. Flag that indicates if the worker thread must stop when an unexpected error occurs. |
http.send.buffer.size | 2M | Size of the internal send buffer. A larger buffer results in fewer I/O interruptions at the expense of memory usage per connection. Beyond a certain size, increasing the buffer no longer improves performance; 2MB is generally optimal. |
http.static.index.file.name | index.html | Name of index file for the Web Console. |
http.frozen.clock | false | Sets the clock to always return zero. This configuration parameter is used for internal testing. |
http.allow.deflate.before.send | false | Flag that indicates if Gzip compression of outgoing data is allowed. |
http.keep-alive.timeout | 5 | Used together with |
http.keep-alive.max | 10000 | See |
http.static.public.directory | public | The name of directory for public web site. |
http.text.date.adapter.pool.capacity | 16 | Size of date adapter pool. This should be set to the anticipated maximum number of |
http.text.json.cache.limit | 16384 | JSON parser cache limit. Cache is used to compose JSON elements that have been broken up by TCP protocol. This value limits the maximum length of individual tag or tag value. |
http.text.json.cache.size | 8192 | Initial size of JSON parser cache. The value must not exceed |
http.text.max.required.delimiter.stddev | 0.1222d | The maximum standard deviation value for the algorithm that calculates text file delimiter. Usually when text parser cannot recognise the delimiter it will log the calculated and maximum standard deviation for the delimiter candidate. |
http.text.max.required.line.length.stddev | 0.8 | Maximum standard deviation value for the algorithm that classifies input as text or binary. Input with values above the configured stddev is considered binary. |
http.text.metadata.string.pool.capacity | 128 | The initial size of pool for objects that wrap individual elements of metadata JSON, such as column names, date pattern strings and locale values. |
http.text.roll.buffer.limit | 4M | The limit of text roll buffer. See |
http.text.roll.buffer.size | 1024 | Roll buffer is a structure in the text parser that holds a copy of a line that has been broken up by TCP. The size should be set to the maximum length of text line in text input. |
http.text.analysis.max.lines | 1000 | Number of lines to read on CSV import for heuristics which determine column names & types. Lower line numbers may detect CSV schemas quicker, but possibly with less accuracy. 1000 lines is the maximum for this value. |
http.text.lexer.string.pool.capacity | 64 | The initial capacity of the string pool, which wraps |
http.text.timestamp.adapter.pool.capacity | 64 | Size of timestamp adapter pool. This should be set to the anticipated maximum number of |
http.text.utf8.sink.size | 4096 | Initial size of UTF-8 adapter sink. The value should correspond to the maximum individual field value length in text input. |
http.json.query.connection.check.frequency | 1000000 | Changing the default value is strongly discouraged. The value to throttle check if client socket has been disconnected. |
http.json.query.float.scale | 4 | The scale value of string representation of |
http.json.query.double.scale | 12 | The scale value of string representation of |
http.query.cache.enabled | true | Enable or disable the query cache. Cache capacity is |
http.query.cache.block.count | 4 | Number of blocks for the query cache. |
http.query.cache.row.count | 16 | Number of rows for the query cache. |
http.security.readonly | false | Forces HTTP read only mode when |
http.security.max.response.rows | 2^63-1 | Limit the number of response rows over HTTP. |
http.security.interrupt.on.closed.connection | true | Switch to enable termination of SQL processing if the HTTP connection is closed. The mechanism affects performance so the connection is only checked after |
http.pessimistic.health.check.enabled | false | When enabled, the health check returns HTTP 500 for any unhandled errors since the server started. |
circuit.breaker.throttle | 2000000 | Number of internal iterations such as loops over data before checking if the HTTP connection is still open |
circuit.breaker.buffer.size | 32 | Size of buffer to read from HTTP connection. If this buffer returns zero and the HTTP client is no longer sending data, SQL processing will be terminated. |
http.server.keep.alive | true | If set to |
http.version | HTTP/1.1 | Protocol version, other supported value is |
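For instance, a minimal server.conf sketch that enables HTTP Basic Authentication and read-only mode for the REST API; the username and password shown are placeholders:
http.bind.to=0.0.0.0:9000
http.user=myuser
http.password=mypassword
http.security.readonly=true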
Cairo engine
This section describes configuration settings for the Cairo SQL engine in QuestDB.
Property | Default | Description |
---|---|---|
query.timeout.sec | 60 | A global timeout (in seconds) for long-running queries. Timeout for each query can override the default by setting HTTP header |
cairo.max.uncommitted.rows | 500000 | Maximum number of uncommitted rows per table. When the number of pending rows reaches this value on a table, a commit is issued. |
cairo.o3.max.lag | 10 minutes | The maximum size of in-memory buffer in milliseconds. The buffer is allocated dynamically through analyzing the shape of the incoming data, and |
cairo.o3.min.lag | 1 second | The minimum size of in-memory buffer in milliseconds. The buffer is allocated dynamically through analyzing the shape of the incoming data, and |
cairo.sql.backup.root | null | Output root directory for backups. |
cairo.sql.backup.dir.datetime.format | null | Date format for backup directory. |
cairo.sql.backup.dir.tmp.name | tmp | Name of tmp directory used during backup. |
cairo.sql.backup.mkdir.mode | 509 | Permission used when creating backup directories. |
cairo.snapshot.instance.id | empty string | Instance id to be included into disk snapshots. |
cairo.snapshot.recovery.enabled | true | When |
cairo.root | db | Directory for storing db tables and metadata. This directory is inside the server root directory provided at startup. |
cairo.commit.mode | nosync | How changes to table are flushed to disk upon commit. Choices: |
cairo.rnd.memory.max.pages | 128 | Sets the max number of pages for memory used by |
cairo.rnd.memory.page.size | 8K | Sets the memory page size used by |
cairo.create.as.select.retry.count | 5 | Number of times table creation or insertion will be attempted. |
cairo.default.map.type | fast | Type of map used. Options: |
cairo.default.symbol.cache.flag | true | When |
cairo.default.symbol.capacity | 256 | Specifies approximate capacity for |
cairo.file.operation.retry.count | 30 | Number of attempts to open files. |
cairo.idle.check.interval | 300000 | Frequency of writer maintenance job in milliseconds. |
cairo.inactive.reader.ttl | 120000 | TTL (Time-To-Live) to close inactive readers in milliseconds. |
cairo.wal.inactive.writer.ttl | 120000 | TTL (Time-To-Live) to close inactive WAL writers in milliseconds. |
cairo.inactive.writer.ttl | 600000 | TTL (Time-To-Live) to close inactive writers in milliseconds. |
cairo.index.value.block.size | 256 | Approximation of number of rows for a single index key, must be power of 2. |
cairo.max.swap.file.count | 30 | Number of attempts to open swap files. |
cairo.mkdir.mode | 509 | File permission mode for new directories. |
cairo.parallel.index.threshold | 100000 | Minimum number of rows before allowing use of parallel indexation. |
cairo.reader.pool.max.segments | 10 | Number of segments in the table reader pool. Each segment holds up to 32 readers. |
cairo.wal.writer.pool.max.segments | 10 | Number of segments in the WAL writer pool. Each segment holds up to 32 writers. |
cairo.spin.lock.timeout | 1000 | Timeout when attempting to get BitmapIndexReaders in millisecond. |
cairo.character.store.capacity | 1024 | Size of the CharacterStore. |
cairo.character.store.sequence.pool.capacity | 64 | Size of the CharacterSequence pool. |
cairo.column.pool.capacity | 4096 | Size of the Column pool in the SqlCompiler. |
cairo.compact.map.load.factor | 0.7 | Load factor for CompactMaps. |
cairo.expression.pool.capacity | 8192 | Size of the ExpressionNode pool in SqlCompiler. |
cairo.fast.map.load.factor | 0.5 | Load factor for all FastMaps. |
cairo.sql.join.context.pool.capacity | 64 | Size of the JoinContext pool in SqlCompiler. |
cairo.lexer.pool.capacity | 2048 | Size of FloatingSequence pool in GenericLexer. |
cairo.sql.map.key.capacity | 2M | Key capacity in FastMap and CompactMap. |
cairo.sql.map.max.resizes | 2^31 | Number of map resizes in FastMap and CompactMap before a resource limit exception is thrown, each resize doubles the previous size. |
cairo.sql.map.page.size | 4m | Memory page size for FastMap and CompactMap. |
cairo.sql.map.max.pages | 2^31 | Memory max pages for CompactMap. |
cairo.model.pool.capacity | 1024 | Size of the QueryModel pool in the SqlCompiler. |
cairo.sql.sort.key.page.size | 4M | Memory page size for storing keys in LongTreeChain. |
cairo.sql.sort.key.max.pages | 2^31 | Max number of pages for storing keys in LongTreeChain before a resource limit exception is thrown. |
cairo.sql.sort.light.value.page.size | 1048576 | Memory page size for storing values in LongTreeChain. |
cairo.sql.sort.light.value.max.pages | 2^31 | Max pages for storing values in LongTreeChain. |
cairo.sql.hash.join.value.page.size | 16777216 | Memory page size of the slave chain in full hash joins. |
cairo.sql.hash.join.value.max.pages | 2^31 | Max pages of the slave chain in full hash joins. |
cairo.sql.latest.by.row.count | 1000 | Number of rows for LATEST BY. |
cairo.sql.hash.join.light.value.page.size | 1048576 | Memory page size of the slave chain in light hash joins. |
cairo.sql.hash.join.light.value.max.pages | 2^31 | Max pages of the slave chain in light hash joins. |
cairo.sql.sort.value.page.size | 16777216 | Memory page size of file storing values in SortedRecordCursorFactory. |
cairo.sql.sort.value.max.pages | 2^31 | Max pages of file storing values in SortedRecordCursorFactory. |
cairo.work.steal.timeout.nanos | 10000 | Latch await timeout in nanos for stealing indexing work from other threads. |
cairo.parallel.indexing.enabled | true | Allows parallel indexation. Works in conjunction with cairo.parallel.index.threshold. |
cairo.sql.join.metadata.page.size | 16384 | Memory page size for JoinMetadata file. |
cairo.sql.join.metadata.max.resizes | 2^31 | Number of map resizes in JoinMetadata before a resource limit exception is thrown, each resize doubles the previous size. |
cairo.sql.analytic.column.pool.capacity | 64 | Size of AnalyticColumn pool in SqlParser. |
cairo.sql.create.table.model.pool.capacity | 16 | Size of CreateTableModel pool in SqlParser. |
cairo.sql.create.table.model.batch.size | 1000000 | Batch size for non-atomic CREATE AS SELECT statements. |
cairo.sql.column.cast.model.pool.capacity | 16 | Size of CreateTableModel pool in SqlParser. |
cairo.sql.rename.table.model.pool.capacity | 16 | Size of RenameTableModel pool in SqlParser. |
cairo.sql.with.clause.model.pool.capacity | 128 | Size of WithClauseModel pool in SqlParser. |
cairo.sql.insert.model.pool.capacity | 64 | Size of InsertModel pool in SqlParser. |
cairo.sql.insert.model.batch.size | 1000000 | Batch size for non-atomic INSERT INTO SELECT statements. |
cairo.sql.copy.model.pool.capacity | 32 | Size of CopyModel pool in SqlParser. |
cairo.sql.copy.buffer.size | 2M | Size of buffer used when copying tables. |
cairo.sql.double.cast.scale | 12 | Maximum number of decimal places that types cast as doubles have. |
cairo.sql.float.cast.scale | 4 | Maximum number of decimal places that types cast as floats have. |
cairo.sql.copy.formats.file | /text_loader.json | Name of file with user's set of date and timestamp formats. |
cairo.sql.jit.mode | on | JIT compilation for SQL queries. May be disabled by setting this value to |
cairo.sql.jit.debug.enabled | false | Sets debug flag for JIT compilation. When enabled, assembly will be printed into |
cairo.sql.jit.bind.vars.memory.page.size | 4K | Sets the memory page size for storing bind variable values for JIT compiled filter. |
cairo.sql.jit.bind.vars.memory.max.pages | 8 | Sets the max memory pages for storing bind variable values for JIT compiled filter. |
cairo.sql.jit.rows.threshold | 1M | Sets minimum number of rows to shrink filtered rows memory after query execution. |
cairo.sql.jit.page.address.cache.threshold | 1M | Sets minimum cache size to shrink page address cache after query execution. |
cairo.sql.jit.ir.memory.page.size | 8K | Sets the memory page size for storing IR for JIT compilation. |
cairo.sql.jit.ir.max.pages | 8 | Sets max memory pages for storing IR for JIT compilation. |
cairo.sql.page.frame.min.rows | 1000 | Sets the minimum number of rows in page frames used in SQL queries. |
cairo.sql.page.frame.max.rows | 1000000 | Sets the maximum number of rows in page frames used in SQL queries. |
cairo.sql.sampleby.page.size | 0 | SampleBy index query page size. Max values returned in single scan. 0 is default, and it means to use symbol block capacity. |
cairo.sql.sampleby.default.alignment.calendar | 0 | SampleBy default alignment behaviour. true corresponds to ALIGN TO CALENDAR, false corresponds to ALIGN TO FIRST OBSERVATION. |
cairo.date.locale | en | The locale to handle date types. |
cairo.timestamp.locale | en | The locale to handle timestamp types. |
cairo.o3.column.memory.size | 256k | Memory page size per column for O3 operations. Note that O3 uses 2x the configured value per column (a default of 2x256kB). |
cairo.writer.data.append.page.size | 16M | mmap sliding page size that table writer uses to append data for each column. |
cairo.writer.data.index.key.append.page.size | 512K | mmap page size for appending index key data; key data is number of distinct symbol values times 4 bytes. |
cairo.writer.data.index.value.append.page.size | 16M | mmap page size for appending value data. |
cairo.writer.misc.append.page.size | 4K | mmap page size for mapping small files, default value is OS page size (4k Linux, 64K windows, 16k OSX M1). Overriding this rounds to the nearest (greater) multiple of the OS page size. |
cairo.writer.command.queue.capacity | 32 | Maximum writer ALTER TABLE and replication command capacity. Shared between all the tables. |
cairo.writer.tick.rows.count | 1024 | Number of rows written between checks of the writer command queue while busy writing, i.e. the queue is checked after every X rows written. |
cairo.writer.alter.busy.wait.timeout | 500 | Maximum wait timeout in milliseconds for |
cairo.sql.column.purge.queue.capacity | 128 | Purge column version job queue capacity. Increase the size if column versions are not automatically cleaned up after UPDATE SQL statements execute. Reduce to decrease the initial memory footprint. |
cairo.sql.column.purge.task.pool.capacity | 256 | Column version task object pool capacity. Increase to reduce GC, reduce to decrease memory footprint. |
cairo.sql.column.purge.retry.delay | 10000 | Initial delay (μs) before re-trying purge of stale column files. |
cairo.sql.column.purge.retry.delay.multiplier | 10.0 | Multiplier used to increase the retry delay with each iteration. |
cairo.sql.column.purge.retry.delay.limit | 60000000 | Delay limit (μs); upon reaching it, the retry delay remains constant. |
cairo.sql.column.purge.retry.limit.days | 31 | Number of days purge system will continue to re-try deleting stale column files before giving up. |
cairo.volumes | - | A comma separated list of alias -> root-path pairs defining allowed volumes to be used in CREATE TABLE IN VOLUME statements. |
cairo.system.table.prefix | sys. | Prefix of the tables used for QuestDB internal data storage. These tables are hidden from QuestDB web console. |
cairo.wal.enabled.default | true | Setting defining whether WAL table is the default when using |
cairo.o3.partition.split.min.size | 50MB | The estimated partition size on disk. This setting is one of the conditions to trigger auto-partitioning. |
cairo.o3.last.partition.max.splits | 20 | The number of partition pieces allowed before the last partition piece is merged back to the physical partition. |
cairo.o3.partition.purge.list.initial.capacity | 1 | Number of partitions expected on average. Initial value for the purge allocation job, extended automatically at runtime. |
cairo.sql.parallel.groupby.enabled | true | Enables parallel GROUP BY execution; requires at least 4 shared worker threads. |
cairo.sql.parallel.groupby.merge.shard.queue.capacity | <auto> | Merge queue capacity for parallel GROUP BY; used for parallel tasks that merge shard hash tables. |
cairo.sql.parallel.groupby.sharding.threshold | 100000 | Threshold for parallel GROUP BY to shard the hash table holding the aggregates. |
cairo.sql.groupby.allocator.default.chunk.size | 128k | Default size for memory buffers in GROUP BY function native memory allocator. |
cairo.sql.groupby.allocator.max.chunk.size | 4gb | Maximum allowed native memory allocation for GROUP BY functions. |
cairo.sql.unordered.map.max.entry.size | 24 | Threshold in bytes for switching from single memory buffer hash table (unordered) to a hash table with separate heap for entries (ordered). |
cairo.sql.window.max.recursion | 128 | Prevents stack overflow errors when evaluating complex nested SQLs. The value is an approximate number of nested SELECT clauses. |
cairo.sql.query.registry.pool.size | <auto> | Pre-sizes the internal data structure that stores active query executions. The value is chosen automatically based on the number of threads in the shared worker pool. |
cairo.sql.analytic.initial.range.buffer.size | 32 | Window function buffer size in record counts. Pre-sizes buffer for every windows function execution to contain window records. |
cairo.system.writer.data.append.page.size | 256k | mmap sliding page size that TableWriter uses to append data for each column specifically for System tables. |
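For example, a server.conf sketch tuning commit behaviour for out-of-order ingestion; the values below are illustrative starting points rather than recommendations (cairo.o3.max.lag is expressed in milliseconds, so 600000 corresponds to the 10 minute default):
cairo.max.uncommitted.rows=500000
cairo.o3.max.lag=600000
cairo.commit.mode=nosync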
WAL table configurations
The following settings configure the parallel threads that apply WAL data to the table storage:
Property | Default | Description |
---|---|---|
wal.apply.worker.count | equal to the CPU core count | Number of dedicated worker threads assigned to handle WAL table data. |
wal.apply.worker.affinity | equal to the CPU core count | Comma separated list of CPU core indexes. |
wal.apply.worker.haltOnError | false | Flag that indicates if the worker thread must stop when an unexpected error occurs. |
cairo.wal.purge.interval | 30000 | Period in ms of how often WAL-applied files are cleaned up from the disk |
cairo.wal.segment.rollover.row.count | 200000 | Row count of how many rows are written to the same WAL segment before starting a new segment. Triggers in conjunction with |
cairo.wal.commit.squash.row.limit | 500000 | Maximum row count that can be squashed together from multiple transactions before applying to the table. A very low value can delay data visibility. |
cairo.wal.squash.uncommitted.rows.multiplier | 20.0 | Multiplier applied to cairo.max.uncommitted.rows to calculate the limit of rows that can be kept invisible when writing to a WAL table under heavy load, when multiple transactions are to be applied. It is used to reduce the number of Out-of-Order (O3) commits, when O3 commits are unavoidable, by squashing multiple commits together. Setting it very low can increase O3 commit frequency and decrease throughput. Setting it too high may cause excessive memory usage and increase latency. |
cairo.wal.max.lag.txn.count | 20 | Maximum number of transactions that can be kept invisible when writing to WAL table. Once the number is reached, full commit occurs. If not set, defaults to the rounded value of cairo.wal.squash.uncommitted.rows.multiplier. |
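A minimal server.conf sketch dedicating two worker threads to applying WAL data; the thread count and core ids are illustrative:
wal.apply.worker.count=2
wal.apply.worker.affinity=0,1
wal.apply.worker.haltOnError=false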
CSV import
This section describes configuration settings for using COPY to import large CSV files.
Settings for COPY:
Property | Default | Description |
---|---|---|
cairo.sql.copy.root | import | Input root directory for CSV imports via |
cairo.sql.copy.work.root | null | Temporary import file directory. Defaults to |
cairo.iouring.enabled | true | Enable or disable io_uring implementation. Applicable to newer Linux kernels only. Can be used to switch io_uring interface usage off if there's a kernel bug affecting it. |
cairo.sql.copy.buffer.size | 2 MiB | Size of read buffers used in import. |
cairo.sql.copy.log.retention.days | 3 | Number of days to keep import messages in |
cairo.sql.copy.max.index.chunk.size | 100M | Maximum size of index chunk file used to limit total memory requirements of import. Indexing phase should use roughly |
cairo.sql.copy.queue.capacity | 32 | Size of copy task queue. Should be increased if there are more than 32 import workers. |
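As an example, a server.conf sketch pointing COPY imports at a custom input root; the path is hypothetical and must exist on the host:
cairo.sql.copy.root=/var/lib/questdb/questdb_import
cairo.sql.copy.log.retention.days=3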
CSV import configuration for Docker
For QuestDB instances using Docker:
- cairo.sql.copy.root must be defined using one of the following settings:
  - The environment variable QDB_CAIRO_SQL_COPY_ROOT.
  - The cairo.sql.copy.root key in server.conf.
- The path for the source CSV file is mounted.
- The source CSV file path and the path defined by QDB_CAIRO_SQL_COPY_ROOT are identical.
- It is optional to define QDB_CAIRO_SQL_COPY_WORK_ROOT.
The following is an example command to start a QuestDB instance on Docker, in order to import a CSV file:
docker run -p 9000:9000 \
-v "/tmp/questdb:/var/lib/questdb" \
-v "/tmp/questdb/my_input_root:/var/lib/questdb/questdb_import" \
-e QDB_CAIRO_SQL_COPY_ROOT=/var/lib/questdb/questdb_import \
questdb/questdb
Where:
- -v "/tmp/questdb/my_input_root:/var/lib/questdb/questdb_import": mounts the source CSV file location /tmp/questdb/my_input_root on the local machine to /var/lib/questdb/questdb_import in the container.
- -e QDB_CAIRO_SQL_COPY_ROOT=/var/lib/questdb/questdb_import: defines the copy root directory as /var/lib/questdb/questdb_import.
It is important that the two paths are identical (/var/lib/questdb/questdb_import in the example).
Parallel SQL execution
This section describes settings that can affect the level of parallelism during SQL execution, and therefore can also have an impact on performance.
Property | Default | Description |
---|---|---|
cairo.sql.parallel.filter.enabled | true | Enable or disable parallel SQL filter execution. JIT compilation takes place only when this setting is enabled. |
cairo.sql.parallel.filter.pretouch.enabled | true | Enable column pre-touch as part of the parallel SQL filter execution, to improve query performance for large tables. |
cairo.page.frame.shard.count | 4 | Number of shards for both dispatch and reduce queues. Shards reduce queue contention between SQL statements that are executed concurrently. |
cairo.page.frame.reduce.queue.capacity | 64 | Reduce queue is used for data processing and should be large enough to supply tasks for worker threads (shared worker pool). |
cairo.page.frame.rowid.list.capacity | 256 | Row ID list initial capacity for each slot of the reduce queue. Larger values reduce memory allocation rate, but increase minimal RSS size. |
cairo.page.frame.column.list.capacity | 16 | Column list capacity for each slot of the reduce queue. Used by JIT-compiled filter functions. Larger values reduce memory allocation rate, but increase minimal RSS size. |
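For instance, a server.conf sketch that keeps parallel filter execution enabled but switches off column pre-touch; shown only to illustrate the keys above:
cairo.sql.parallel.filter.enabled=true
cairo.sql.parallel.filter.pretouch.enabled=false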
Postgres wire protocol
This section describes configuration settings for client connections using the PostgreSQL wire protocol.
Property | Default | Description |
---|---|---|
pg.enabled | true | Configuration for enabling or disabling the Postgres interface. |
pg.net.bind.to | 0.0.0.0:8812 | IP address and port of Postgres wire protocol server. 0 means that the server will bind to all network interfaces. You can specify IP address of any individual network interface on your system. |
pg.net.connection.limit | 64 | The maximum number permitted for simultaneous Postgres connections to the server. This value is intended to control server memory consumption. |
pg.net.connection.timeout | 300000 | Connection idle timeout in milliseconds. Connections are closed by the server when this timeout lapses. |
pg.net.connection.rcvbuf | -1 | Maximum receive buffer size on each TCP socket. If the value is -1, the socket receive buffer remains unchanged from the OS default. |
pg.net.connection.sndbuf | -1 | Maximum send buffer size on each TCP socket. If the value is -1, the socket send buffer remains unchanged from the OS default. |
pg.net.connection.hint | false | Windows specific flag to overcome OS limitations on TCP backlog size |
pg.net.connection.queue.timeout | 300000 | Amount of time in milliseconds a connection can wait in the listen backlog queue before it is refused. Connections will be aggressively removed from the backlog until the active connection limit is breached. |
pg.security.readonly | false | Forces PostgreSQL Wire Protocol read only mode when |
pg.character.store.capacity | 4096 | Size of the CharacterStore. |
pg.character.store.pool.capacity | 64 | Size of the CharacterStore pool capacity. |
pg.connection.pool.capacity | 64 | The maximum amount of pooled connections this interface may have. |
pg.password | quest | Postgres database password. |
pg.user | admin | Postgres database username. |
pg.readonly.user.enabled | false | Enable or disable Postgres database read-only user account. When enabled, this additional user can be used to open read-only connections to the database. |
pg.readonly.password | quest | Postgres database read-only user password. |
pg.readonly.user | user | Postgres database read-only user username. |
pg.select.cache.enabled | true | Enable or disable the SELECT query cache. Cache capacity is |
pg.select.cache.block.count | 16 | Number of blocks to cache SELECT query execution plan against text to speed up execution. |
pg.select.cache.row.count | 16 | Number of rows to cache for SELECT query execution plan against text to speed up execution. |
pg.insert.cache.enabled | true | Enable or disable the INSERT query cache. Cache capacity is |
pg.insert.cache.block.count | 8 | Number of blocks to cache INSERT query execution plan against text to speed up execution. |
pg.insert.cache.row.count | 8 | Number of rows to cache for INSERT query execution plan against text to speed up execution. |
pg.update.cache.enabled | true | Enable or disable the UPDATE query cache. Cache capacity is |
pg.update.cache.block.count | 8 | Number of blocks to cache UPDATE query execution plan against text to speed up execution. |
pg.update.cache.row.count | 8 | Number of rows to cache for UPDATE query execution plan against text to speed up execution. |
pg.max.blob.size.on.query | 512k | For binary values, clients will receive an error when requesting blob sizes above this value. |
pg.recv.buffer.size | 1M | Size of the buffer for receiving data. |
pg.send.buffer.size | 1M | Size of the buffer for sending data. |
pg.date.locale | en | The locale to handle date types. |
pg.timestamp.locale | en | The locale to handle timestamp types. |
pg.worker.count | 0 | Number of dedicated worker threads assigned to handle PostgreSQL Wire Protocol queries. When |
pg.worker.affinity | | Comma-separated list of thread numbers which should be pinned for Postgres ingestion. |
pg.halt.on.error | false | Whether ingestion should stop upon internal error. |
pg.daemon.pool | true | Defines whether to run all PostgreSQL Wire Protocol worker threads in daemon mode ( |
pg.binary.param.count.capacity | 2 | Size of the initial capacity for the pool used for binary bind variables. |
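Putting the defaults together, any PostgreSQL client can connect; the sketch below uses psql with the default port and username from the table (the database name qdb is an assumption, not a setting listed here):
psql -h localhost -p 8812 -U admin -d qdb
When prompted, the default password is quest.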
InfluxDB Line Protocol (ILP)
This section describes ingestion settings for incoming messages using InfluxDB Line Protocol.
Property | Default | Description |
---|---|---|
line.default.partition.by | DAY | Table partition strategy to be used with tables that are created automatically by InfluxDB Line Protocol. Possible values are: HOUR , DAY , WEEK , MONTH , and YEAR . |
line.auto.create.new.columns | true | When enabled, automatically creates new columns when they appear in the ingested data. When disabled, messages with new columns will be rejected. |
line.auto.create.new.tables | true | When enabled, automatically creates new tables when they appear in the ingested data. When disabled, messages for non-existent tables will be rejected. |
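For example, a server.conf sketch that keeps automatic table creation but rejects messages containing unexpected new columns:
line.auto.create.new.tables=true
line.auto.create.new.columns=false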
HTTP specific settings
ILP over HTTP is the preferred way of ingesting data.
Property | Default | Description |
---|---|---|
line.http.enabled | true | Enable ILP over HTTP. Default port is 9000. Enabled by default within open source versions, defaults to false and must be enabled for Enterprise. |
line.http.ping.version | v2.2.2 | Version information for the ping response of ILP over HTTP. |
HTTP properties | Various | See HTTP settings for general HTTP configuration. ILP over HTTP inherits from HTTP settings. |
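A quick way to test ingestion over HTTP is to POST a line of ILP with curl; the /write path below is an assumption based on QuestDB's InfluxDB-compatible HTTP endpoint and is not one of the settings listed above:
curl -X POST "http://localhost:9000/write" --data-binary "sensors,room=kitchen temperature=21.5"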
TCP specific settings
Property | Default | Description |
---|---|---|
line.tcp.enabled | true | Enable or disable line protocol over TCP. |
line.tcp.net.bind.to | 0.0.0.0:9009 | IP address of the network interface to bind listener to and port. By default, TCP receiver listens on all network interfaces. |
line.tcp.net.connection.limit | 256 | The maximum number permitted for simultaneous connections to the server. This value is intended to control server memory consumption. |
line.tcp.net.connection.timeout | 300000 | Connection idle timeout in milliseconds. Connections are closed by the server when this timeout lapses. |
line.tcp.net.connection.hint | false | Windows specific flag to overcome OS limitations on TCP backlog size |
line.tcp.net.connection.rcvbuf | -1 | Maximum buffer receive size on each TCP socket. If value is -1, the socket receive buffer remains unchanged from OS default. |
line.tcp.net.connection.queue.timeout | 5000 | Amount of time in milliseconds a connection can wait in the listen backlog queue before it is refused. Connections will be aggressively removed from the backlog until the active connection limit is breached. |
line.tcp.auth.db.path | | Path which points to the authentication db file. |
line.tcp.connection.pool.capacity | 64 | The maximum amount of pooled connections this interface may have. |
line.tcp.timestamp | n | Input timestamp resolution. Possible values are |
line.tcp.msg.buffer.size | 32768 | Size of the buffer read from queue. Maximum size of write request, regardless of the number of measurements. |
line.tcp.maintenance.job.interval | 1000 | Maximum amount of time (in milliseconds) between maintenance jobs committing any uncommitted data on inactive tables. |
line.tcp.min.idle.ms.before.writer.release | 500 | Minimum amount of idle time (in milliseconds) before a table writer is released. |
line.tcp.commit.interval.fraction | 0.5 | Commit lag fraction. Used to calculate commit interval for the table according to the following formula: |
line.tcp.commit.interval.default | 1000 | Default commit interval in milliseconds. |
line.tcp.max.measurement.size | 32768 | Maximum size of any measurement. |
line.tcp.writer.queue.size | 128 | Size of the queue between network I/O and writer jobs. Each queue entry represents a measurement. |
line.tcp.writer.worker.count | | Number of dedicated I/O worker threads assigned to write data to tables. When |
line.tcp.writer.worker.affinity | | Comma-separated list of thread numbers which should be pinned for line protocol ingestion over TCP. CPU core indexes are 0-based. |
line.tcp.writer.worker.sleep.threshold | 1000 | Amount of subsequent loop iterations with no work done before the worker goes to sleep. |
line.tcp.writer.worker.yield.threshold | 10 | Amount of subsequent loop iterations with no work done before the worker thread yields. |
line.tcp.writer.queue.capacity | 128 | Size of the queue between the IO jobs and the writer jobs, each queue entry represents a measurement. |
line.tcp.writer.halt.on.error | false | Flag that indicates if the worker thread must stop when an unexpected error occurs. |
line.tcp.io.worker.count | | Number of dedicated I/O worker threads assigned to parse TCP input. When |
line.tcp.io.worker.affinity | | Comma-separated list of thread numbers which should be pinned for line protocol ingestion over TCP. CPU core indexes are 0-based. |
line.tcp.io.worker.sleep.threshold | 1000 | Amount of subsequent loop iterations with no work done before the worker goes to sleep. |
line.tcp.io.worker.yield.threshold | 10 | Amount of subsequent loop iterations with no work done before the worker thread yields. |
line.tcp.disconnect.on.error | true | Disconnect TCP socket that sends malformed messages. |
line.tcp.acl.enabled | true | Enable or disable Access Control List (ACL) authentication for InfluxDB Line Protocol over TCP. Enterprise only. |
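A server.conf sketch for the TCP receiver, binding it to the default address and dedicating a single parser thread; the values are illustrative:
line.tcp.enabled=true
line.tcp.net.bind.to=0.0.0.0:9009
line.tcp.io.worker.count=1
line.tcp.commit.interval.default=1000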
UDP specific settings
The UDP receiver is deprecated since QuestDB version 6.5.2. We recommend ILP over HTTP instead, or less frequently ILP over TCP.
Property | Default | Description |
---|---|---|
line.udp.join | 232.1.2.3 | Multicast address the receiver joins. This value is ignored when the receiver is in "unicast" mode. |
line.udp.bind.to | 0.0.0.0:9009 | IP address of the network interface to bind listener to and port. By default UDP receiver listens on all network interfaces. |
line.udp.commit.rate | 1000000 | For packet bursts the number of continuously received messages after which receiver will force commit. Receiver will commit irrespective of this parameter when there are no messages. |
line.udp.msg.buffer.size | 2048 | Buffer used to receive single message. This value should be roughly equal to your MTU size. |
line.udp.msg.count | 10000 | Only for Linux. On Linux, QuestDB will use the |
line.udp.receive.buffer.size | 8388608 | UDP socket buffer size. Larger size of the buffer will help reduce message loss during bursts. |
line.udp.enabled | false | Enable or disable UDP receiver. |
line.udp.own.thread | false | When |
line.udp.own.thread.affinity | -1 | -1 does not set thread affinity: the OS will schedule the thread, which may run on random cores and jump between them. A value of 0 or higher pins the thread to the given core. This property is only valid when the UDP receiver uses its own thread. |
line.udp.unicast | false | When |
line.udp.timestamp | n | Input timestamp resolution. Possible values are |
line.udp.commit.mode | nosync | Commit durability. Available values are |
Database replication
Replication is Enterprise only.
Replication enables high availability clusters.
For setup instructions, see the replication operations guide.
For an overview of the concept, see the replication concept page.
For a tuning guide, see the replication tuning guide.
Property | Default | Description |
---|---|---|
replication.role | none | Defaults to |
replication.object.store | | A configuration string that allows connecting to an object store. The format is scheme::key1=value1;key2=value2;…. The various keys and values are detailed in a later section. Ignored if replication is disabled. There is no default due to the variability of this setting. |
cairo.wal.segment.rollover.size | 2097152 | The size of the WAL segment before it is rolled over. Default is |
cairo.writer.command.queue.capacity | 32 | Maximum writer ALTER TABLE and replication command capacity. Shared between all the tables. |
replication.primary.throttle.window.duration | 10000 | The millisecond duration of the sliding window used to process replication batches. Default is |
replication.requests.max.concurrent | 0 | A limit to the number of concurrent object store requests. The default is |
replication.requests.retry.attempts | 3 | Maximum number of times to retry a failed object store request before logging an error and reattempting later after a delay. Default is |
replication.requests.retry.interval | 200 | How long to wait before retrying a failed operation. Default is |
replication.primary.compression.threads | calculated | Max number of threads used to perform file compression operations before uploading to the object store. The default value is calculated as half the number of CPU cores. |
replication.primary.compression.level | 1 | Zstd compression level. Defaults to |
replication.replica.poll.interval | 1000 | Millisecond polling rate of a replica instance to check for the availability of new changes. |
native.async.io.threads | cpuCount | The number of async (network) io threads used for replication (and in the future cold storage). The default should be appropriate for most use cases. |
native.max.blocking.threads | cpuCount * 4 | Maximum number of threads for parallel blocking disk IO read/write operations for replication (and other uses). These threads are ephemeral: they are spawned as needed and shut down after a short duration if no longer in use. These are not CPU-bound threads, hence the relatively large number. The default should be appropriate for most use cases. |
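A sketch only — replication is an Enterprise feature, and the object store string below uses placeholder keys following the documented scheme::key1=value1;key2=value2 format:
replication.role=primary
replication.object.store=scheme::key1=value1;key2=value2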
Identity and Access Management (IAM)
Identity and Access Management is available within QuestDB Enterprise.
Identity and Access Management (IAM) ensures that data can be accessed only by authorized users. The below configuration properties relate to various authentication and authorization features.
For a full explanation of IAM, see the Identity and Access Management (IAM) documentation.
Property | Default | Description |
---|---|---|
acl.enabled | true | Enables/disables Identity and Access Management. |
acl.admin.user.enabled | true | Enables/disables the built-in admin user. |
acl.admin.user | admin | Name of the built-in admin user. |
acl.admin.password | quest | The password of the built-in admin user. |
acl.basic.auth.realm.enabled | false | When enabled the browser's basic auth popup window is used instead of the Web Console's login screen. Only present for backwards compatibility. |
acl.entity.name.max.length | 255 | Maximum length of user, group and service account names. |
acl.password.hash.iteration.count | 100000 | QuestDB Enterprise never stores passwords in plain text; it stores password hashes only. This is the number of hash iterations used in password hashing. Higher means safer; this should almost never be changed. |
acl.rest.token.refresh.threshold | 10 | When a REST token is created in REFRESH mode, its TTL is extended on every successful authentication, unless the last successful authentication was within this threshold. This setting removes unnecessary overhead of continuously refreshing REST tokens if they are used often. The value is expressed in seconds. |
tls.enabled | false | Enables/disables TLS encryption globally for all QuestDB interfaces (HTTP endpoints, ILP over TCP). |
tls.cert.path | | Path to certificate used for TLS encryption globally. The certificate should be DER-encoded and saved in PEM format. |
tls.private.key.path | | Path to private key used for TLS encryption globally. |
http.tls.enabled | false | Enables/disables TLS encryption for the HTTP server only. |
http.tls.cert.path | | Path to certificate used for TLS encryption for the HTTP server only. The certificate should be DER-encoded and saved in PEM format. |
http.tls.private.key.path | | Path to private key used for TLS encryption for the HTTP server only. |
http.min.tls.enabled | false | Enables/disables TLS encryption for the minimal HTTP server only. |
http.min.tls.cert.path | | Path to certificate used for TLS encryption for the minimal HTTP server only. The certificate should be DER-encoded and saved in PEM format. |
http.min.tls.private.key.path | | Path to private key used for TLS encryption for the minimal HTTP server only. |
line.tcp.tls.enabled | false | Enables/disables TLS encryption for ILP over TCP only. |
line.tcp.tls.cert.path | | Path to certificate used for TLS encryption for ILP over TCP only. The certificate should be DER-encoded and saved in PEM format. |
line.tcp.tls.private.key.path | | Path to private key used for TLS encryption for ILP over TCP only. |
line.tcp.acl.enabled | true | Enables/disables authentication for the ILP over TCP endpoint only. |
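For instance, a server.conf sketch enabling TLS globally; the certificate and key paths are hypothetical:
tls.enabled=true
tls.cert.path=/path/to/server-cert.pem
tls.private.key.path=/path/to/server-key.pem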
OpenID Connect (OIDC)
OpenID Connect is Enterprise and Cloud only.
OpenID Connect (OIDC) support is part of QuestDB's Identity and Access Management. The database can be integrated with any OAuth2/OIDC Identity Provider (IdP).
For detailed information about OIDC, see the OpenID Connect (OIDC) integration guide.
Property | Default | Description |
---|---|---|
acl.oidc.enabled | false | Enables/disables OIDC authentication. When enabled, a few other configuration options must also be set. |
acl.oidc.pkce.enabled | true | Enables/disables PKCE for the Authorization Code Flow. This should always be enabled in a production environment; the Web Console is not fully secure without it. |
acl.oidc.ropc.flow.enabled | false | Enables/disables Resource Owner Password Credentials flow. When enabled, this flow also has to be configured in the OIDC Provider. |
acl.oidc.configuration.url | | URL where the OpenID Provider's configuration information can be loaded in JSON format, should always end with |
acl.oidc.host | | OIDC provider hostname. Required when OIDC is enabled, unless the OIDC configuration URL is set. |
acl.oidc.port | 443 | OIDC provider port number. |
acl.oidc.tls.enabled | true | Whether the OIDC provider requires a secure connection or not. It is highly unlikely in a production environment, but if the OpenID Provider endpoints do not require a secure connection, this option can be set to |
acl.oidc.tls.validation.enabled | true | Enables/disables TLS certificate validation. If you are working with self-signed certificates that you would like QuestDB to trust, disable this option. Validation is strongly recommended in production environments. QuestDB will check that the certificate is valid, and that it is issued for the server to which it connects. |
acl.oidc.tls.keystore.path | | Path to a keystore file that contains trusted Certificate Authorities. Will be used when validating the certificate of the OIDC provider. Not required if your OIDC provider's certificate is signed by a public CA. |
acl.oidc.tls.keystore.password | | Keystore password, required if there is a keystore file and it is password protected. |
acl.oidc.http.timeout | 30000 | OIDC provider HTTP request timeout in milliseconds. |
acl.oidc.client.id | | Client name assigned to QuestDB in the OIDC server, required when OIDC is enabled. |
acl.oidc.audience | | OAuth2 audience as set on the tokens issued by the OIDC Provider, defaults to the client id. |
acl.oidc.redirect.uri | | The redirect URI tells the OIDC server where to redirect the user after successful authentication. If not set, the Web Console defaults it to the location where it was loaded from ( |
acl.oidc.scope | openid | The OIDC server should ask consent for the list of scopes provided in this property. The scope |
acl.oidc.public.keys.endpoint | /pf/JWKS | JSON Web Key Set (JWKS) Endpoint, the default value should work for the Ping Identity Platform. This endpoint provides the list of public keys that can be used to decode and validate ID tokens issued by the OIDC Provider. |
acl.oidc.authorization.endpoint | /as/authorization.oauth2 | OIDC Authorization Endpoint, the default value should work for the Ping Identity Platform. |
acl.oidc.token.endpoint | /as/token.oauth2 | OIDC Token Endpoint, the default value should work for the Ping Identity Platform. |
acl.oidc.userinfo.endpoint | /idp/userinfo.openid | OIDC User Info Endpoint, the default value should work for the Ping Identity Platform. Used to retrieve additional user information which contains the user's group memberships. |
acl.oidc.groups.encoded.in.token | false | Should be set to true if the OIDC Provider is configured to encode the group memberships of the user into the id token. When set to true, QuestDB will look for the groups in the token instead of calling the User Info endpoint. |
acl.oidc.sub.claim | sub | The name of the claim in the user information, which contains the name of the user. Could be a username, the user's full name or email. It will be displayed in the Web Console, and logged for audit purposes. |
acl.oidc.groups.claim | groups | The name of the custom claim in the user information, which contains the group memberships of the user. |
acl.oidc.cache.ttl | 30000 | User info cache entry TTL (time to live) in milliseconds; the default value is 30 seconds. For improved performance QuestDB caches user info responses for each valid access token; this setting drives how often the access token should be validated and the user info updated. |
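A minimal server.conf sketch for an OIDC setup; the hostname and client id are placeholders, and the endpoint defaults from the table above are left unchanged:
acl.oidc.enabled=true
acl.oidc.host=idp.example.com
acl.oidc.port=443
acl.oidc.client.id=questdb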
Config Validation
The database startup phase checks for configuration issues, such as invalid or deprecated settings. Issues may be classified as advisories or errors. Advisory issues are logged without causing the database to stop its startup sequence: These are usually setting deprecation warnings. Configuration errors can optionally cause the database to fail its startup.
Property | Default | Description |
---|---|---|
config.validation.strict | false | When enabled, startup fails if there are configuration errors. |
We recommend enabling strict validation.
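To opt in, set the flag in server.conf, or use the equivalent QDB_CONFIG_VALIDATION_STRICT environment variable following the naming rule described earlier:
config.validation.strict=true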
Telemetry
QuestDB sends anonymous telemetry data with information about usage which helps us improve the product over time. We do not collect any personally-identifying information, and we do not share any of this data with third parties.
Property | Default | Description |
---|---|---|
telemetry.enabled | true | Enable or disable anonymous usage metrics collection. |
telemetry.hide.tables | false | Hides telemetry tables from select * from tables() output. As a result, telemetry tables will not be visible in the Web Console table view. |
telemetry.queue.capacity | 512 | Capacity of the internal telemetry queue, which is the gateway of all telemetry events. This queue capacity does not require tweaking. |
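To opt out of telemetry, disable collection in server.conf or via the corresponding environment variable:
telemetry.enabled=false
export QDB_TELEMETRY_ENABLED=false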
Logging & Metrics
The following settings are available in server.conf:
Property | Default | Description |
---|---|---|
log.level.verbose | false | Converts short-hand log level indicators (E, C, I) into long-hand (ERROR, CRITICAL, INFO) |
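For example, to expand the short-hand level indicators in the logs:
log.level.verbose=true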
Further settings are available in log.conf
. For more information, and details of our Prometheus metrics, please visit the Logging & Metrics documentation.