This page shows how to insert data into QuestDB using different programming languages and tools.
InfluxDB Line Protocol is the primary ingestion method in QuestDB and is recommended for high-performance applications.
For transactional data inserts, use the PostgreSQL wire protocol.
For operational (ad-hoc) data ingestion, the Web Console makes it easy to upload CSV files and insert via SQL statements. You can also perform these same actions via the HTTP REST API. For large CSV import (database migrations), use the SQL `COPY` statement.
In summary, these are the different options:
- InfluxDB Line Protocol
  - High performance.
  - Optional automatic timestamps.
  - Optional integrated authentication.
  - Client libraries in various programming languages.
- PostgreSQL wire protocol
  - `INSERT` statements, including parameterized queries.
  - `psql` on the command line.
  - Interoperability with third-party tools and libraries.
- Web Console
  - CSV upload.
  - `COPY` for large CSV import.
- HTTP REST API
  - CSV upload.
  - `curl` on the command line.
The InfluxDB Line Protocol (ILP) is a text protocol over TCP on port 9009.
It is a one-way protocol to insert data, focusing on simplicity and performance.
Here is a summary table showing how it compares with other ways to insert data that we support:
| Protocol                 | Record insertion reporting       | Data insertion performance          |
| ------------------------ | -------------------------------- | ----------------------------------- |
| InfluxDB Line Protocol   | Server logs; disconnect on error | Best                                |
| CSV upload via HTTP REST | Configurable                     | Very good                           |
| SQL `INSERT` statements  | Transaction-level                | Suitable for one-off data migration |
This interface is the preferred ingestion method as it provides the following benefits:
- High-throughput ingestion
- Robust ingestion from multiple sources into tables with dedicated systems for reducing congestion
- Configurable commit-lag for out-of-order data via server configuration settings
Provided you perform sufficient client-side validation, the lack of per-row error responses and confirmations isn't necessarily a concern: QuestDB logs any issues in its server logs and disconnects on error, and all valid lines received up to that point are still processed and inserted as rows.
On the InfluxDB line protocol page, you may find additional details on the message format, ports and authentication.
The Telegraf guide helps you configure a Telegraf agent to collect and send metrics to QuestDB via ILP.
The ILP client libraries provide more user-friendly ILP clients for a growing number of languages.
These examples send a few rows of input, using client libraries where available and raw TCP socket connections otherwise.
Install the Python client library:

```shell
python3 -m pip install questdb
```

For Java with Maven:

```xml
<dependency>
  <groupId>org.questdb</groupId>
  <artifactId>questdb</artifactId>
  <version>6.5</version>
</dependency>
```

or with Gradle:

```groovy
compile group: 'org.questdb', name: 'questdb', version: '6.5'
```
For NodeJS, see the NodeJS client library repo.
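Where no client library is available, rows can be sent over a raw TCP socket. Here is a minimal Python sketch; the table and column names are illustrative, and a production client would also handle reconnection, batching, and authentication:

```python
import socket

def send_ilp(host: str, port: int, lines: list) -> None:
    """Send raw ILP lines over a TCP socket.

    Each element of `lines` must already be a well-formed ILP
    message ending in a newline character.
    """
    payload = "".join(lines).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Usage (assumes QuestDB listening for ILP on localhost:9009):
# send_ilp("localhost", 9009,
#          ["trades,pair=USDGBP price=0.7856 1638202821000000000\n"])
```

Note that the trailing timestamp is in nanoseconds; omit it to let the server assign its own time.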
Providing a timestamp is optional. If one isn't provided, the server will automatically assign the server's system time as the row's timestamp value.
Timestamps are interpreted as the number of nanoseconds since 1st Jan 1970 UTC, unless otherwise configured; see the `line.tcp.timestamp` configuration option.
Strings may be recorded as either the `STRING` type or the `SYMBOL` type.
Inspecting a sample ILP message, we can see how a space (`' '`) separator splits the `SYMBOL` columns on the left from the rest of the columns. In this example, column `col2` is a string written to the database as a `SYMBOL`, whilst `col3` is written out as a `STRING`.
`SYMBOL`s are strings which are automatically interned by the database on a per-column basis. You should use this type if you expect the string to be re-used over and over, as is common with identifiers. For one-off strings, use `STRING` columns, which aren't interned.
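The layout described above can be made concrete with a small helper that serializes one row into ILP text. This is a simplified sketch, not a client implementation: it handles only string and float values (ILP integers need an `i` suffix) and performs no escaping of special characters, which a real client must do:

```python
def to_ilp(table, symbols, columns, ts_nanos=None):
    """Serialize one row as an ILP line:
    table,SYMBOL-columns <space> other-columns [<space> timestamp]
    """
    # SYMBOL columns come first, attached to the table name with commas.
    sym_part = ",".join(f"{k}={v}" for k, v in symbols.items())
    # Other columns follow the space separator; strings are double-quoted.
    col_part = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in columns.items()
    )
    line = f"{table},{sym_part} {col_part}"
    if ts_nanos is not None:
        # Nanoseconds since epoch; omit to have the server assign its time.
        line += f" {ts_nanos}"
    return line + "\n"

# to_ilp("readings", {"city": "London"}, {"temp": 23.2}, 1638202821000000000)
# → 'readings,city=London temp=23.2 1638202821000000000\n'
```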
QuestDB types are a superset of those supported by ILP. This means that when sending data you should be aware of the performed conversions.
Different library implementations will perform different degrees of content validation upfront before sending messages. To avoid encountering issues, follow these guidelines:
- All strings must be UTF-8 encoded.
- Columns should only appear once per row.
- Symbol columns must be written before other columns.
- Table and column names must not contain invalid characters, such as `\0` (the NUL terminator) or ZERO WIDTH NO-BREAK SPACE.
- Write the designated timestamp column via the dedicated API, or at the end of the message if you are using raw sockets. If you have multiple timestamp columns, write the additional ones as column values.
- Don't change a column's type between rows.
- Supply timestamps in order: each needs to be at least equal to the previous one in the same table, unless you are using the out-of-order ingestion feature.
QuestDB will always log any ILP errors in its server logs.
Here is an example error from the server logs, caused when a line attempted to insert a `STRING` into a column of a different type.
To insert a NULL value, skip the column (or symbol) for that row; the omitted column is inserted as NULL.
If you don't see your inserted data, this is usually down to one of two things:
- You prepared the messages but forgot to call `.flush()` (or similar) in your client library, so no data was sent.
- The internal timers and buffers within QuestDB have not committed the data yet. For development (and development only), you may want to tweak configuration settings to commit data more frequently, e.g. in `server.conf`:

```
cairo.max.uncommitted.rows=1
```
Refer to ILP's commit strategy documentation for more on these configuration settings.
ILP can additionally provide authentication. This is an optional feature which is documented here.
Use our own client libraries and/or the protocol documentation: clients intended to work with InfluxDB will not work with QuestDB.
QuestDB also supports the same wire protocol as PostgreSQL, allowing you to connect and query the database with various third-party pre-existing client libraries and tools.
You can connect to TCP port 8812 and use both `INSERT` and `SELECT` SQL queries.
PostgreSQL wire protocol is better suited for applications inserting via SQL programmatically as it provides parameterized queries, which avoid SQL injection issues.
InfluxDB Line Protocol is the recommended primary ingestion method in QuestDB. SQL `INSERT` statements over the PostgreSQL wire protocol offer feedback and error reporting, but have lower overall performance.
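A parameterized `INSERT` over the PostgreSQL wire protocol looks like any DB-API usage in Python. The table and column names below are illustrative, and `psycopg2` is just one of several clients known to work with QuestDB's port 8812:

```python
# Placeholders (%s) keep values out of the SQL text, avoiding injection.
INSERT_SQL = "INSERT INTO trades (ts, name, value) VALUES (%s, %s, %s)"

def insert_row(conn, ts, name, value):
    """Insert one row using a parameterized query.

    `conn` is any DB-API 2.0 connection, e.g. one obtained from
    psycopg2.connect(host="localhost", port=8812, user="admin",
    password="quest", database="qdb").
    """
    with conn.cursor() as cur:
        cur.execute(INSERT_SQL, (ts, name, value))
    conn.commit()
```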
Here are a few examples demonstrating SQL `INSERT` statements.
Create the table:
Note that you can also run `psql` from Docker without installing the client.
This example uses naive `Date.now() * 1000` inserts for timestamp types in microsecond resolution. For accurate microsecond timestamps, a higher-resolution clock call can be used.
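The same precision pitfall exists in Python. As an aside (this helper is an assumption for illustration, not part of the original example), microsecond timestamps are best derived from the integer nanosecond clock rather than a float:

```python
import time

def now_micros():
    """Current wall-clock time as microseconds since the Unix epoch.

    time.time() returns a float, which loses sub-microsecond precision;
    time.time_ns() keeps full integer precision, so dividing by 1_000
    yields exact microseconds.
    """
    return time.time_ns() // 1_000
```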
The following example shows how to use parameterized queries and prepared statements using the rust-postgres client.
QuestDB ships with an embedded Web Console, running by default on port 9000. SQL statements can be written in the code editor and executed by clicking the Run button. Note that the Web Console runs a single statement at a time.
For inserting bulk data or migrating data from other databases, see large CSV import.
QuestDB exposes a REST API for compatibility with a wide range of libraries and tools. The REST API is accessible on port 9000 and has the following entrypoints:
| Entrypoint | HTTP method | Description                             | API docs  |
| ---------- | ----------- | --------------------------------------- | --------- |
| `/imp`     | POST        | Import CSV data                         | Reference |
| `/exec`    | GET         | Run SQL query returning JSON result set | Reference |
For details such as content type, query parameters and more, refer to the REST API docs.
/imp: Uploading Tabular Data
Let's assume you want to upload the following data via the `/imp` entrypoint. You can do so via the command line using `curl`, or programmatically via HTTP APIs in your scripts and applications.
By default, the response is designed to be human-readable. Use the `fmt=json` query argument to obtain a response in JSON. You can also specify the schema explicitly. See the second example in Python for these features.
This example imports a CSV file with automatic schema detection.
This example overwrites an existing table and specifies a timestamp format and a designated timestamp column. For more information on the optional parameters to specify timestamp formats, partitioning and renaming tables, see the REST API documentation.
This first example shows uploading the data.csv file with automatic schema detection.
The second example creates a CSV buffer from Python objects and uploads them with a custom schema. Note UTF-8 encoding.
The `fmt=json` parameter allows us to obtain a parsable response, rather than a tabular response designed for human consumption.
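As a sketch of what such an upload looks like at the HTTP level, the helper below hand-rolls the `/imp` URL and `multipart/form-data` body using only the standard library. The base URL and table name are illustrative; in practice the `requests` library does all of this in one call with `files={...}`:

```python
import urllib.parse
import uuid

def build_imp_request(base_url, csv_bytes, table_name):
    """Build the URL, body, and headers for a POST to QuestDB's /imp.

    fmt=json requests a machine-readable response; name= sets the
    destination table.
    """
    query = urllib.parse.urlencode({"fmt": "json", "name": table_name})
    url = f"{base_url}/imp?{query}"
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="data"; filename="data.csv"\r\n'
        "Content-Type: text/csv\r\n\r\n"
    ).encode() + csv_bytes + f"\r\n--{boundary}--\r\n".encode()
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return url, body, headers

# Pass the result to:
# urllib.request.Request(url, data=body, headers=headers, method="POST")
```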
The `/exec` entrypoint takes a SQL query and returns results as JSON. We can use this for quick SQL inserts too, but note that there's no support for parameterized queries, which are necessary to avoid SQL injection issues.
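A `/exec` request is just a GET with the SQL statement URL-encoded into the `query` parameter. A minimal sketch (the base URL is an assumption; only run trusted SQL, since there is no parameterization):

```python
import urllib.parse

def exec_url(base_url, sql):
    """Build a /exec request URL with the SQL statement URL-encoded.

    Never interpolate untrusted input into `sql`: /exec has no
    parameterized queries.
    """
    return f"{base_url}/exec?" + urllib.parse.urlencode({"query": sql})

# exec_url("http://localhost:9000",
#          "INSERT INTO trades VALUES(now(), 'abc', 123.5)")
```

The resulting URL can be fetched with `urllib.request.urlopen` or any HTTP client.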
The node-fetch package can be installed using `npm i node-fetch`.