Insert data

This page shows how to insert data into QuestDB using different programming languages and tools.

Overview#

QuestDB supports the following data ingestion methods:

  • InfluxDB Line Protocol: the recommended primary ingestion method in QuestDB for high-performance applications.

  • PostgreSQL wire protocol: interoperability with the PostgreSQL ecosystem.

    • SQL INSERT and COPY statements, including parameterized queries.
    • psql on the command line
    • Support for most PostgreSQL keywords and functions
  • HTTP REST API: compatibility with a wide range of libraries and tools.

    • SQL INSERT for ad-hoc SQL queries
    • curl command and CSV file upload on the command line
    • Accessing QuestDB via the Web Console

Recommended insert method#

The table below outlines the general recommendation for data ingestion based on the shape of the data and different scenarios:

One-off data import#

                       CSV Upload      SQL COPY   InfluxDB        PostgreSQL
                       (Web Console)              Line Protocol
Sorted                 ✓               ✓
Lightly out of order   ✓
Heavily out of order                   ✓

Periodic batch ingest#

                       CSV Upload      SQL COPY   InfluxDB        PostgreSQL
                       (Web Console)              Line Protocol
Sorted                 ✓                          ✓               ✓
Lightly out of order   ✓                          ✓               ✓
Heavily out of order   ✓                          ✓               ✓

Real-time ingest#

                       CSV Upload      SQL COPY   InfluxDB        PostgreSQL
                       (Web Console)              Line Protocol
Sorted                                            ✓
Lightly out of order                              ✓
Heavily out of order                              ✓

Lightly out of order data refers to data with the following traits:

  • The expected lag is usually within a few minutes.
  • The data is mostly sorted. Timestamps generally increase over time, with occasional exceptions that fall within the lag.

Heavily out of order data refers to data with the following traits:

  • The data is mostly unsorted.
  • The data belongs to different parts of different partitions in an arbitrary manner.

InfluxDB Line Protocol (ILP)#

The InfluxDB Line Protocol is a text protocol over TCP on port 9009.

It is a one-way protocol to insert data, focusing on simplicity and performance.

This interface is the preferred ingestion method as it provides the following benefits:

  • High-throughput ingestion
  • Robust ingestion from multiple sources into tables with dedicated systems for reducing congestion
  • Supports on-the-fly, concurrent schema changes

On the InfluxDB line protocol page, you may find additional details on the message format, ports and authentication.
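Concretely, each ILP message is a single text line: the table name and symbol columns, a space, the remaining columns, and an optional designated timestamp in nanoseconds. The sketch below illustrates that shape; the to_ilp helper is for illustration only and omits escaping and integer suffixes, so real applications should use the client libraries described below.

```python
def to_ilp(table, symbols, columns, ts_nanos=None):
    """Serialize one row into an InfluxDB Line Protocol message (sketch only)."""
    # Table name and symbol columns form the comma-separated head of the line.
    head = ','.join([table] + [f'{k}={v}' for k, v in symbols.items()])

    def fmt(value):
        # String column values are double-quoted; numbers are sent bare.
        return f'"{value}"' if isinstance(value, str) else str(value)

    body = ','.join(f'{k}={fmt(v)}' for k, v in columns.items())
    line = f'{head} {body}'
    if ts_nanos is not None:
        # The designated timestamp goes last, in nanoseconds since the epoch.
        line = f'{line} {ts_nanos}'
    return line + '\n'

print(to_ilp('trades', {'pair': 'USDGBP'}, {'traded_price': 0.83},
             1659771323189062000))
# trades,pair=USDGBP traded_price=0.83 1659771323189062000
```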

Client libraries#

The Client Libraries provide user-friendly InfluxDB Line Protocol clients for a growing number of languages.

Authentication#

By default, the InfluxDB Line Protocol server in QuestDB open source is unauthenticated. To configure authentication on the server, follow our server configuration guide. To configure authentication on the client, follow the relevant documentation section in the Client Libraries overview.

QuestDB Cloud servers are configured for authentication already. Snippets for all the supported languages can be found at https://cloud.questdb.com under the instance "Connect" tab.

Examples#

These examples send a few rows of input. They use client libraries where available, and raw TCP socket connections otherwise.

Python client library docs and repo.

See more examples, including ingesting data from Pandas dataframes.

python3 -m pip install questdb

from questdb.ingress import Sender, IngressError, TimestampNanos
import sys
import datetime

def example(host: str = 'localhost', port: int = 9009):
    try:
        with Sender(host, port) as sender:
            # Record with provided designated timestamp (using the 'at' param)
            # Notice the designated timestamp is expected in Nanoseconds,
            # but timestamps in other columns are expected in Microseconds.
            # The API provides convenient functions.
            sender.row(
                'trades',
                symbols={
                    'pair': 'USDGBP',
                    'type': 'buy'},
                columns={
                    'traded_price': 0.83,
                    'limit_price': 0.84,
                    'qty': 100,
                    'traded_ts': datetime.datetime(
                        2022, 8, 6, 7, 35, 23, 189062,
                        tzinfo=datetime.timezone.utc)},
                at=TimestampNanos.now())

            # If no 'at' param is passed, the server will use its own timestamp.
            sender.row(
                'trades',
                symbols={'pair': 'EURJPY'},
                columns={
                    'traded_price': 135.97,
                    'qty': 400,
                    'limit_price': None})  # NULL columns can be passed as None,
                                           # or simply be left out.

            # We recommend flushing periodically, for example every few seconds.
            # If you don't flush explicitly, the client will flush automatically
            # once the buffer reaches 63KiB and just before the connection
            # is closed.
            sender.flush()
    except IngressError as e:
        sys.stderr.write(f'Got error: {e}\n')

if __name__ == '__main__':
    example()

Telegraf#

The Telegraf guide helps you configure a Telegraf agent to collect and send metrics to QuestDB via InfluxDB Line Protocol.

PostgreSQL wire protocol#

QuestDB also supports the same wire protocol as PostgreSQL, allowing you to connect and query the database with various third-party pre-existing client libraries and tools.

You can connect to TCP port 8812 and use both INSERT and SELECT SQL queries.

PostgreSQL wire protocol is better suited for applications inserting via SQL programmatically as it provides parameterized queries, which avoid SQL injection issues.

tip

InfluxDB Line Protocol is the recommended primary ingestion method in QuestDB. SQL INSERT statements over the PostgreSQL wire protocol offer feedback and error reporting, but have worse overall performance.

Here are a few examples demonstrating SQL INSERT queries:

note

If you are using QuestDB Cloud, your database requires TLS to connect. You can find host, port, and password configuration at https://cloud.questdb.com, on your instance's "Connect" tab. To enable SSL from psql in the commands below, please follow this pattern:

psql -h {hostname} -p {port} -U admin "dbname=qdb sslmode=require" -c '{SQL_STATEMENT}'

Create the table:

psql -h localhost -p 8812 -U admin -d qdb \
-c "CREATE TABLE IF NOT EXISTS t1 (name STRING, value INT);"

Insert row:

psql -h localhost -p 8812 -U admin -d qdb -c "INSERT INTO t1 VALUES('a', 42)"

Query back:

psql -h localhost -p 8812 -U admin -d qdb -c "SELECT * FROM t1"

Note that you can also run psql from Docker without installing the client locally:

docker run -it --rm --network=host -e PGPASSWORD=quest \
postgres psql ....
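As a sketch of the parameterized approach mentioned above, the snippet below uses the third-party psycopg2 driver against the t1 table from the psql examples; it assumes a local QuestDB with the default admin/quest credentials.

```python
# Sketch: parameterized INSERT over the PostgreSQL wire protocol.
INSERT_SQL = 'INSERT INTO t1 VALUES (%s, %s);'

def insert_row(name, value):
    # Third-party driver; install with: python3 -m pip install psycopg2-binary
    import psycopg2
    conn = psycopg2.connect(
        host='localhost', port=8812,
        user='admin', password='quest', dbname='qdb')
    try:
        with conn.cursor() as cur:
            # The values travel separately from the SQL text, so the driver
            # handles quoting and SQL injection is not a concern.
            cur.execute(INSERT_SQL, (name, value))
        conn.commit()
    finally:
        conn.close()

# insert_row('a', 42)
```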

Web Console#

QuestDB ships with an embedded Web Console running by default on port 9000.

Inserting data via SQL#

SQL statements can be written in the code editor and executed by clicking the Run button. Note that the Web Console runs a single statement at a time.

There are two SQL keywords to insert data:

  • INSERT:

    Creating a table and inserting some data
    CREATE TABLE takeaway_order (ts TIMESTAMP, id SYMBOL, status SYMBOL)
    TIMESTAMP(ts);
    INSERT INTO takeaway_order VALUES (now(), 'order1', 'placed');
    INSERT INTO takeaway_order VALUES (now(), 'order2', 'placed');
  • COPY: For inserting bulk data or migrating data from other databases. See large CSV import.

Uploading CSV file#

It is also possible to upload CSV files using the Import tab in the Web Console:

[Screenshot of the import UI in the Web Console]

HTTP REST API#

QuestDB exposes a REST API for compatibility with a wide range of libraries and tools. The REST API is accessible on port 9000 and has the following insert-capable entrypoints:

Entrypoint       HTTP Method   Description                                API Docs
/imp             POST          Import CSV data                            Reference
/exec?query=..   GET           Run SQL query returning JSON result set    Reference

For details such as content type, query parameters and more, refer to the REST API docs.

/imp: Uploading Tabular Data#

tip

InfluxDB Line Protocol is the recommended primary ingestion method in QuestDB. CSV uploading offers insertion feedback and error reporting, but has worse overall performance.

See /imp's atomicity query parameter to customize behavior on error.

Let's assume you want to upload the following data via the /imp entrypoint:

data.csv
col1,col2,col3
a,10.5,True
b,100,False
c,,True

You can do so via the command line using cURL or programmatically via HTTP APIs in your scripts and applications.

By default, the response is designed to be human-readable. Use the fmt=json query argument to obtain a response in JSON. You can also specify the schema explicitly. See the second example in Python for these features.

This example imports a CSV file with automatic schema detection.

Basic import with table name
curl -F data=@data.csv http://localhost:9000/imp?name=table_name

This example overwrites an existing table and specifies a timestamp format and a designated timestamp column. For more information on the optional parameters to specify timestamp formats, partitioning and renaming tables, see the REST API documentation.

Providing a user-defined schema
curl \
-F schema='[{"name":"ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd - HH:mm:ss"}]' \
-F data=@weather.csv 'http://localhost:9000/imp?overwrite=true&timestamp=ts'
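The same upload can also be scripted without curl. Below is a minimal standard-library sketch that builds the multipart request by hand; the build_imp_request helper and the localhost host are illustrative assumptions, not part of the QuestDB API.

```python
import urllib.parse
import urllib.request
import uuid

def build_imp_request(csv_bytes, table, fmt='json'):
    """Build a multipart POST for the /imp entrypoint (illustrative sketch)."""
    query = urllib.parse.urlencode({'name': table, 'fmt': fmt})
    boundary = uuid.uuid4().hex
    # One multipart part named "data", carrying the CSV payload.
    body = (
        f'--{boundary}\r\n'
        'Content-Disposition: form-data; name="data"; filename="data.csv"\r\n'
        'Content-Type: text/csv\r\n\r\n'
    ).encode() + csv_bytes + f'\r\n--{boundary}--\r\n'.encode()
    return urllib.request.Request(
        f'http://localhost:9000/imp?{query}',
        data=body,
        headers={'Content-Type': f'multipart/form-data; boundary={boundary}'})

# req = build_imp_request(b'col1,col2,col3\na,10.5,True\n', 'table_name')
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```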

/exec: SQL INSERT Query#

The /exec entrypoint takes a SQL query and returns results as JSON.

We can use this for quick SQL inserts too, but note that it does not support parameterized queries, which are necessary to avoid SQL injection issues.

tip

Prefer the PostgreSQL interface if you are generating SQL programmatically.

Prefer InfluxDB Line Protocol if you need high-performance inserts.

# Create Table
curl -G \
--data-urlencode "query=CREATE TABLE IF NOT EXISTS trades(name STRING, value INT)" \
http://localhost:9000/exec
# Insert a row
curl -G \
--data-urlencode "query=INSERT INTO trades VALUES('abc', 123456)" \
http://localhost:9000/exec
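The same pair of calls can be issued from a script. Here is a minimal standard-library sketch; the exec_url helper is an illustrative assumption, and it assumes QuestDB listening on localhost:9000.

```python
import json
import urllib.parse
import urllib.request

def exec_url(sql, host='http://localhost:9000'):
    """URL-encode a SQL statement for the /exec entrypoint (sketch)."""
    return f'{host}/exec?' + urllib.parse.urlencode({'query': sql})

# Run the same CREATE TABLE / INSERT pair as the curl commands above:
# for sql in ("CREATE TABLE IF NOT EXISTS trades(name STRING, value INT)",
#             "INSERT INTO trades VALUES('abc', 123456)"):
#     with urllib.request.urlopen(exec_url(sql)) as resp:
#         print(json.loads(resp.read()))
```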

โญ Something missing? Page not helpful? Please suggest an edit on GitHub.