REST API

The QuestDB REST API is based on standard HTTP features and is understood by off-the-shelf HTTP clients. It provides a simple way to interact with QuestDB and is compatible with most programming languages. API functions are fully keyed on the URL and they use query parameters as their arguments.

The Web Console is the official web client built on the REST API. Find out more in the Web Console section.

Available methods

  • /imp for importing data from .CSV files
  • /exec to execute a SQL statement
  • /exp to export data

Examples

QuestDB exposes a REST API for compatibility with a wide range of libraries and tools. The REST API is accessible on port 9000 and has the following insert-capable entrypoints:

| Entrypoint | HTTP Method | Description | API Docs |
| --- | --- | --- | --- |
| /imp | POST | Import CSV data | Reference |
| /exec?query=.. | GET | Run SQL Query returning JSON result set | Reference |

For details such as content type, query parameters and more, refer to the REST API docs.

/imp: Uploading Tabular Data

Let's assume you want to upload the following data via the /imp entrypoint:

col1,col2,col3
a,10.5,True
b,100,False
c,,True

You can do so via the command line using cURL or programmatically via HTTP APIs in your scripts and applications.

By default, the response is designed to be human-readable. Use the fmt=json query argument to obtain a response in JSON. You can also specify the schema explicitly. See the Python example below for these features.

This example imports a CSV file with automatic schema detection.

Basic import with table name
curl -F data=@data.csv 'http://localhost:9000/imp?name=table_name'

This example overwrites an existing table and specifies a timestamp format and a designated timestamp column. For more information on the optional parameters to specify timestamp formats, partitioning and renaming tables, see the REST API documentation.

Providing a user-defined schema
curl \
-F schema='[{"name":"ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd - HH:mm:ss"}]' \
-F data=@weather.csv 'http://localhost:9000/imp?overwrite=true&timestamp=ts'
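
For programmatic uploads, the same request can be issued from a script. The following is a minimal Python sketch, assuming the requests library and a QuestDB instance on localhost:9000; it sets fmt=json for a machine-readable response and supplies an explicit schema (the table name example_table is illustrative):

Uploading CSV from Python
import requests

csv_data = "col1,col2,col3\na,10.5,True\nb,100,False\nc,,True\n"

# Form parts are sent in order: schema first, then the data itself
resp = requests.post(
    "http://localhost:9000/imp",
    params={"name": "example_table", "fmt": "json"},
    files={
        "schema": (None, '[{"name": "col2", "type": "DOUBLE"}]'),
        "data": ("data.csv", csv_data),
    },
)
print(resp.json())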

/exec: SQL INSERT Query

The /exec entrypoint takes a SQL query and returns results as JSON.

We can use this for quick SQL inserts too, but note that parameterized queries are not supported, so you must guard against SQL injection yourself. Prefer InfluxDB Line Protocol if you need high-performance inserts.

# Create Table
curl -G \
--data-urlencode "query=CREATE TABLE IF NOT EXISTS trades(name STRING, value INT)" \
http://localhost:9000/exec

# Insert a row
curl -G \
--data-urlencode "query=INSERT INTO trades VALUES('abc', 123456)" \
http://localhost:9000/exec
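
The same two statements can be issued from Python. This is a minimal sketch, assuming the requests library; note that values are interpolated directly into the SQL text, so any user-supplied input must be sanitized by hand:

import requests

def run_query(sql: str) -> dict:
    # requests URL-encodes the query parameter for us
    resp = requests.get("http://localhost:9000/exec", params={"query": sql})
    resp.raise_for_status()
    return resp.json()

run_query("CREATE TABLE IF NOT EXISTS trades(name STRING, value INT)")
print(run_query("INSERT INTO trades VALUES('abc', 123456)"))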

/imp - Import data

/imp streams tabular text data directly into a table. It supports CSV, TAB and pipe (|) delimited inputs with optional headers. There are no restrictions on data size. Data types and structures are detected automatically, without additional configuration. In some cases, additional configuration can be provided to improve the automatic detection, as described in the user-defined schema section.

note

The structure detection algorithm analyses a chunk at the beginning of the file and relies on the relative uniformity of the data. When the first chunk is not representative of the rest of the data, automatic imports can yield errors.

If the data follows a uniform pattern, the number of lines analyzed for schema detection can be reduced via the http.text.analysis.max.lines key to improve upload performance. Usage of this setting is described in the HTTP server configuration documentation.
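
For example, a server.conf entry along these lines caps schema analysis at the first 1,000 lines (the value here is purely illustrative):

http.text.analysis.max.lines=1000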

URL parameters

/imp expects an HTTP POST request using the multipart/form-data Content-Type, with the following optional URL parameters, which must be URL encoded:

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| atomicity | No | skipCol | abort, skipRow or skipCol. Behaviour when an error is detected in the data. abort: the entire file is skipped. skipRow: the row is skipped. skipCol: the column is skipped. |
| delimiter | No | | URL encoded delimiter character. When not set, import will try to detect the delimiter automatically. Since automatic delimiter detection requires at least two lines (rows) to be present in the file, this parameter may be used to allow single-line file imports. |
| fmt | No | tabular | Can be set to json to get the response formatted as such. |
| forceHeader | No | false | true or false. When false, QuestDB will try to infer if the first line of the file is the header line. When set to true, QuestDB will expect that line to be the header line. |
| name | No | Name of the file | Name of the table to create, see below. |
| overwrite | No | false | true or false. When set to true, any existing data or structure will be overwritten. |
| partitionBy | No | NONE | See partitions. |
| o3MaxLag | No | | Sets an upper limit on the in-memory out-of-order buffer for the created table. Can also be set globally via the cairo.o3.max.lag configuration property. |
| maxUncommittedRows | No | | Maximum number of uncommitted rows for the created table. When the number of pending rows reaches this parameter on a table, a commit will be issued. Can also be set globally via the cairo.max.uncommitted.rows configuration property. |
| skipLev | No | false | true or false. Skip "Line Extra Values". When set to true, the parser will ignore extra values rather than ignoring the entire line. An extra value is something in addition to what is defined by the header. |
| timestamp | No | | Name of the column that will be used as a designated timestamp. |
| create | No | true | true or false. When set to false, QuestDB will not automatically create a table 'name' if one does not exist, and will return an error instead. |
Example usage
curl -F data=@weather.csv \
'http://localhost:9000/imp?overwrite=true&name=new_table&timestamp=ts&partitionBy=MONTH'

Further example queries, with context on the source CSV file contents and the generated tables, are provided in the examples section below.

Names

Table and column names are subject to restrictions; the following characters are automatically removed:

[whitespace]
.
?
,
:
\
/
\\
\0
)
(
_
+
-
*
~
%

When the header row is missing, column names are generated automatically.

Consistency guarantees

/imp benefits from the properties of the QuestDB storage model, although Atomicity and Durability can be relaxed to meet convenience and performance demands.

Atomicity

QuestDB is fully insured against connection problems. If the server detects a closed socket, the entire request is rolled back instantly and transparently for any existing readers. The only time data can be partially imported is when atomicity is in relaxed mode and data cannot be converted to the column type. In this scenario, any "defective" row of data is discarded and /imp continues to stream request data into the table.

Consistency

This property is guaranteed by the consistency of append transactions against the QuestDB storage engine.

Isolation

Data is committed to the QuestDB storage engine at the end of the request. Uncommitted transactions are not visible to readers.

Durability

/imp streams data from the network socket buffer directly into memory-mapped files. At this point the data is handed over to the OS and is resilient against QuestDB internal errors and unlikely, but hypothetically possible, crashes. This is the default method of appending data and is chosen for its performance characteristics.

Examples

Automatic schema detection

The following example uploads a file weather.csv which has the following contents:

ts,visMiles,tempF,dewpF
2010-01-01T00:00:00.000000Z,8.8,34,30
2010-01-01T00:51:00.000000Z,9.100000000000,34,30
2010-01-01T01:36:00.000000Z,8.0,34,30
...

An import can be performed with automatic schema detection with the following request:

curl -F data=@weather.csv 'http://localhost:9000/imp'

An HTTP status code of 200 will be returned and the response will be:

+-------------------------------------------------------------------------------+
| Location: | weather.csv | Pattern | Locale | Errors |
| Partition by | NONE | | | |
| Timestamp | NONE | | | |
+-------------------------------------------------------------------------------+
| Rows handled | 49976 | | | |
| Rows imported | 49976 | | | |
+-------------------------------------------------------------------------------+
| 0 | ts | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE | 0 |
| 2 | tempF | INT | 0 |
| 3 | dewpF | INT | 0 |
+-------------------------------------------------------------------------------+

User-defined schema

To specify the schema of a table, a schema object can be provided:

curl \
-F schema='[{"name":"dewpF", "type": "STRING"}]' \
-F data=@weather.csv 'http://localhost:9000/imp'
Response
+------------------------------------------------------------------------------+
| Location: | weather.csv | Pattern | Locale | Errors |
| Partition by | NONE | | | |
| Timestamp | NONE | | | |
+------------------------------------------------------------------------------+
| Rows handled | 49976 | | | |
| Rows imported | 49976 | | | |
+------------------------------------------------------------------------------+
| 0 | ts | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE | 0 |
| 2 | tempF | INT | 0 |
| 3 | dewpF | STRING | 0 |
+------------------------------------------------------------------------------+

Non-standard timestamp formats

Given a file weather.csv with the following contents, where the timestamp has a non-standard format:

ts,visMiles,tempF,dewpF
2010-01-01 - 00:00:00,8.8,34,30
2010-01-01 - 00:51:00,9.100000000000,34,30
2010-01-01 - 01:36:00,8.0,34,30
...

The file can be imported as usual with the following request:

Importing CSV with non-standard timestamp
curl -F data=@weather.csv 'http://localhost:9000/imp'

An HTTP status code of 200 will be returned and the import will be successful, but the timestamp column is detected as a VARCHAR type:

Response with timestamp as VARCHAR type
+-------------------------------------------------------------------------------+
| Location: | weather.csv | Pattern | Locale | Errors |
| Partition by | NONE | | | |
| Timestamp | NONE | | | |
+-------------------------------------------------------------------------------+
| Rows handled | 49976 | | | |
| Rows imported | 49976 | | | |
+-------------------------------------------------------------------------------+
| 0 | ts | VARCHAR | 0 |
| 1 | visMiles | DOUBLE | 0 |
| 2 | tempF | INT | 0 |
| 3 | dewpF | INT | 0 |
+-------------------------------------------------------------------------------+

To amend the timestamp column type, the following curl example supplies a schema JSON object specifying that the ts column is of TIMESTAMP type with the pattern yyyy-MM-dd - HH:mm:ss.

Additionally, URL parameters are provided:

  • overwrite=true to overwrite the existing table
  • timestamp=ts to specify that the ts column is the designated timestamp column for this table
  • partitionBy=MONTH to set a partitioning strategy on the table by MONTH
Providing a user-defined schema
curl \
-F schema='[{"name":"ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd - HH:mm:ss"}]' \
-F data=@weather.csv \
'http://localhost:9000/imp?overwrite=true&timestamp=ts&partitionBy=MONTH'

The HTTP status code will be set to 200 and the response will show 0 errors parsing the timestamp column:

+------------------------------------------------------------------------------+
| Location: | weather.csv | Pattern | Locale | Errors |
| Partition by | MONTH | | | |
| Timestamp | ts | | | |
+------------------------------------------------------------------------------+
| Rows handled | 49976 | | | |
| Rows imported | 49976 | | | |
+------------------------------------------------------------------------------+
| 0 | ts | TIMESTAMP | 0 |
| 1 | visMiles | DOUBLE | 0 |
| 2 | tempF | INT | 0 |
| 3 | dewpF | INT | 0 |
+------------------------------------------------------------------------------+

JSON response

If you intend to upload CSV programmatically, it's easier to parse the response as JSON. Set the fmt=json query argument on the request.

Here's an example of a successful response:

{
  "status": "OK",
  "location": "example_table",
  "rowsRejected": 0,
  "rowsImported": 3,
  "header": false,
  "columns": [
    { "name": "col1", "type": "SYMBOL", "size": 4, "errors": 0 },
    { "name": "col2", "type": "DOUBLE", "size": 8, "errors": 0 },
    { "name": "col3", "type": "BOOLEAN", "size": 1, "errors": 0 }
  ]
}

Here is an example with request-level errors:

{
  "status": "not enough lines [table=example_table]"
}

Here is an example with column-level errors due to unsuccessful casts:

{
  "status": "OK",
  "location": "example_table2",
  "rowsRejected": 0,
  "rowsImported": 3,
  "header": false,
  "columns": [
    { "name": "col1", "type": "DOUBLE", "size": 8, "errors": 3 },
    { "name": "col2", "type": "SYMBOL", "size": 4, "errors": 0 },
    { "name": "col3", "type": "BOOLEAN", "size": 1, "errors": 0 }
  ]
}
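
When scripting imports, all three response shapes can be handled in one place. Here is a minimal Python sketch, assuming the requests library and a local data.csv file; the table name example_table is illustrative:

import requests

with open("data.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params={"name": "example_table", "fmt": "json"},
        files={"data": f},
    )
result = resp.json()

# Request-level failure: status carries the error message
if result.get("status") != "OK":
    raise RuntimeError(f"import failed: {result['status']}")

# Column-level failures: per-column cast error counters
for col in result["columns"]:
    if col["errors"] > 0:
        print(f"column {col['name']}: {col['errors']} cast errors")

print("imported:", result["rowsImported"], "rejected:", result["rowsRejected"])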

/exec - Execute queries

/exec compiles and executes the SQL query supplied as a parameter and returns a JSON response.

note

The query execution terminates automatically when the socket connection is closed.

Overview

Parameters

/exec expects an HTTP GET request with the following query parameters:

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| count | No | false | true or false. Counts the number of rows and returns this value. |
| limit | No | | Allows limiting the number of rows to return. limit=10 will return the first 10 rows (equivalent to limit=1,10); limit=10,20 will return row numbers 10 through 20 inclusive. |
| nm | No | false | true or false. Skips the metadata section of the response when set to true. |
| query | Yes | | URL encoded query text. It can be multi-line. |
| timings | No | false | true or false. When set to true, QuestDB will also include a timings property in the response with details about the execution times. |
| explain | No | false | true or false. When set to true, QuestDB will also include an explain property in the response with details about the execution plan. |
| quoteLargeNum | No | false | true or false. When set to true, QuestDB will surround LONG type numbers with double quotation marks so that they are parsed as strings. |

The parameters must be URL encoded.

Headers

Supported HTTP headers:

| Header | Required | Description |
| --- | --- | --- |
| Statement-Timeout | No | Query timeout in milliseconds; overrides the default timeout from server.conf. |
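
For instance, a minimal Python sketch, assuming the requests library, that raises the timeout for a single slow query to two minutes:

import requests

resp = requests.get(
    "http://localhost:9000/exec",
    params={"query": "SELECT count() FROM weather;"},
    headers={"Statement-Timeout": "120000"},  # milliseconds
)
print(resp.json())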

Examples

SELECT query example:

curl -G \
--data-urlencode "query=SELECT timestamp, tempF FROM weather LIMIT 2;" \
--data-urlencode "count=true" \
http://localhost:9000/exec

An HTTP status code of 200 is returned with the following response body:

{
  "query": "SELECT timestamp, tempF FROM weather LIMIT 2;",
  "columns": [
    {
      "name": "timestamp",
      "type": "TIMESTAMP"
    },
    {
      "name": "tempF",
      "type": "INT"
    }
  ],
  "timestamp": 0,
  "dataset": [
    ["2010-01-01T00:00:00.000000Z", 34],
    ["2010-01-01T00:51:00.000000Z", 34]
  ],
  "count": 2
}

A SELECT query returns a response in the following format:

{
  "query": string,
  "columns": Array<{ "name": string, "type": string }>,
  "dataset": Array<Array<Value for Column1, Value for Column2>>,
  "timestamp": number,
  "count": Optional<number>,
  "timings": Optional<{ compiler: number, count: number, execute: number }>,
  "explain": Optional<{ jitCompiled: boolean }>
}

You can find the exact list of column types on the dedicated page.

The timestamp field indicates which of the columns in the result set is the designated timestamp, or -1 if there isn't one.
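
The columns metadata and dataset rows can be zipped together to get dictionaries keyed by column name. A minimal Python sketch, assuming the requests library and the weather table from the examples above:

import requests

resp = requests.get(
    "http://localhost:9000/exec",
    params={"query": "SELECT timestamp, tempF FROM weather LIMIT 2;", "count": "true"},
)
body = resp.json()

# Build one dict per row from the column metadata and the dataset
names = [col["name"] for col in body["columns"]]
rows = [dict(zip(names, row)) for row in body["dataset"]]
print(rows)           # [{'timestamp': '2010-01-01T00:00:00.000000Z', 'tempF': 34}, ...]
print(body["count"])  # present because count=true was set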

UPDATE query example:

This request executes an UPDATE on the weather table, setting a 2-minute query timeout:

curl -G \
-H "Statement-Timeout: 120000" \
--data-urlencode "query=UPDATE weather SET tempF = tempF + 0.12 WHERE tempF > 60" \
http://localhost:9000/exec

An HTTP status code of 200 is returned with the following response body:

{
  "ddl": "OK",
  "updated": 34
}

CREATE TABLE query example:

This request creates a basic table with a designated timestamp.

curl -G \
-H "Statement-Timeout: 120000" \
--data-urlencode "query=CREATE TABLE foo ( a INT, ts TIMESTAMP) timestamp(ts)" \
http://localhost:9000/exec

An HTTP status code of 200 is returned with the following response body:

{
  "ddl": "OK"
}

/exp - Export data

This endpoint allows you to pass URL-encoded queries, but the response is returned in tabular form, to be saved and reused, as opposed to JSON.

Overview

/exp expects an HTTP GET request with the following parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| query | Yes | URL encoded query text. It can be multi-line. |
| limit | No | Paging parameter. For example, limit=10,20 will return row numbers 10 through 20 inclusive, and limit=20 will return the first 20 rows, which is equivalent to limit=0,20. limit=-20 will return the last 20 rows. |
| nm | No | true or false. Skips the metadata section of the response when set to true. |

The parameters must be URL encoded.

Examples

Considering the query:

curl -G \
--data-urlencode "query=SELECT AccidentIndex, Date, Time FROM 'Accidents0514.csv'" \
--data-urlencode "limit=5" \
http://localhost:9000/exp

An HTTP status code of 200 is returned with the following response body:

"AccidentIndex","Date","Time"
200501BS00001,"2005-01-04T00:00:00.000Z",17:42
200501BS00002,"2005-01-05T00:00:00.000Z",17:36
200501BS00003,"2005-01-06T00:00:00.000Z",00:15
200501BS00004,"2005-01-07T00:00:00.000Z",10:35
200501BS00005,"2005-01-10T00:00:00.000Z",21:13
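
Because /exp returns plain CSV, the response can be streamed straight to disk. A minimal Python sketch, assuming the requests library and the same Accidents0514.csv table:

import requests

with requests.get(
    "http://localhost:9000/exp",
    params={
        "query": "SELECT AccidentIndex, Date, Time FROM 'Accidents0514.csv'",
        "limit": "5",
    },
    stream=True,
) as resp:
    resp.raise_for_status()
    # Write the CSV body to a local file in chunks
    with open("export.csv", "wb") as out:
        for chunk in resp.iter_content(chunk_size=8192):
            out.write(chunk)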

Error responses

Malformed queries

A call to /exec or /exp that contains a malformed query will return a response body with the following format:

{
  "query": string,
  "error": string,
  "position": number
}

The position field is the character number from the beginning of the string where the error was found.

Considering the query:

curl -G \
--data-urlencode "query=SELECT * FROM table;" \
http://localhost:9000/exp

An HTTP status code of 400 is returned with the following response body:

{
  "query": "SELECT * FROM table;",
  "error": "function, literal or constant is expected",
  "position": 8
}
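
In scripts, the status code distinguishes a result set from a compilation error, and the position field can be used to point at the offending token. A minimal Python sketch, assuming the requests library:

import requests

resp = requests.get(
    "http://localhost:9000/exec",
    params={"query": "SELECT * FROM table;"},
)
if resp.status_code == 400:
    err = resp.json()
    # position is the character offset of the error within the query text
    print(f"query failed at position {err['position']}: {err['error']}")
else:
    print(resp.json()["dataset"])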

Authentication (RBAC)

note

Role-based Access Control (RBAC) is available in QuestDB Enterprise. See the section below for authentication in QuestDB open source.

The REST API supports two authentication types:

  • HTTP basic authentication
  • Token-based authentication

The first authentication type is mainly used by web browsers, but you can also apply user credentials programmatically in an Authorization: Basic header. The following curl command executes a SELECT 1; query with the Authorization: Basic header:

curl -G --data-urlencode "query=SELECT 1;" \
-u "my_user:my_password" \
http://localhost:9000/exec

The second authentication type requires a REST API token to be specified in an Authorization: Bearer header:

curl -G --data-urlencode "query=SELECT 1;" \
-H "Authorization: Bearer qt1cNK6s2t79f76GmTBN9k7XTWm5wwOtF7C0UBxiHGPn44" \
http://localhost:9000/exec

Refer to the user management page to learn how to generate a REST API token.

Authentication in QuestDB open source

QuestDB open source supports HTTP basic authentication. To enable it, set the http.user and http.password configuration options in server.conf.

The following example shows how to enable HTTP basic authentication in QuestDB open source:

http.user=my_user
http.password=my_password

Then this curl command executes a SELECT 1; query:

curl -G --data-urlencode "query=SELECT 1;" \
-u "my_user:my_password" \
http://localhost:9000/exec
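
The same credentials can be supplied from Python. A minimal sketch, assuming the requests library, which sends the Authorization: Basic header via the auth tuple, just like curl's -u flag:

import requests

resp = requests.get(
    "http://localhost:9000/exec",
    params={"query": "SELECT 1;"},
    auth=("my_user", "my_password"),  # credentials from server.conf above
)
print(resp.json())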