# FAQ

This document covers common hardware and software configuration issues encountered when running QuestDB, along with solutions to them.
## Why is ILP data not immediately available?

InfluxDB line protocol (ILP) does not commit data on single lines or when the
sender disconnects. Instead, it uses a number of rules to break incoming data
into commit batches. This means that data may not be visible to `SELECT`
queries immediately after being received. Refer to the InfluxDB line protocol
guide to understand these rules.
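One of the knobs behind those commit rules (on QuestDB 6.x and later; the exact parameters vary between versions) is the per-table `maxUncommittedRows` setting. As a minimal sketch, assuming a hypothetical `trades` table and a local instance with the HTTP endpoint on the default port 9000:

```bash
# Sketch only: "trades" is a hypothetical table name, and parameter
# availability depends on your QuestDB version.
curl -G http://localhost:9000/exec \
  --data-urlencode "query=ALTER TABLE trades SET PARAM maxUncommittedRows = 10000"

# Inspect the current per-table setting (assumes tables() exposes a
# maxUncommittedRows column on your version):
curl -G http://localhost:9000/exec \
  --data-urlencode "query=SELECT name, maxUncommittedRows FROM tables()"
```

Roughly speaking, a smaller value makes ILP data visible to `SELECT` queries sooner, at the cost of more frequent commits.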
## How do I update or delete a row?

See our guide on modifying data.
## Why do I get `table busy` error messages when inserting data over PostgreSQL wire protocol?

You may get `table busy [reason=insert]` or similar errors when running
`INSERT` statements concurrently on the same table. This means that the table
is locked by inserts issued from another SQL connection, or by another client
protocol for data import, such as ILP over TCP or CSV over HTTP. To reduce
the chances of getting this error, try using auto-commit to keep the
transaction as short as possible. We are also considering adding automatic
insert retries on the database side, but for now it is safe to handle this
error on the client side and retry the insert.
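As a minimal client-side retry sketch, assuming a hypothetical `trades` table and a local instance with the REST `/exec` endpoint on port 9000 (the same pattern applies in any PostgreSQL client library):

```bash
# Sketch only: hypothetical table and values; adapt the query, host and
# retry policy to your setup.
query="INSERT INTO trades VALUES (now(), 'BTC-USD', 42000.0)"
for attempt in 1 2 3 4 5; do
  response=$(curl -s -G http://localhost:9000/exec --data-urlencode "query=${query}")
  # On failure, /exec returns a JSON body containing an error message.
  if echo "${response}" | grep -q "table busy"; then
    echo "attempt ${attempt}: table busy, retrying..." >&2
    sleep 1   # simple fixed backoff; exponential backoff also works
  else
    break
  fi
done
```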
## Why do I see `could not open read-write` messages when creating a table or inserting rows?

Log messages like this usually mean that the machine has an insufficient
limit on the maximum number of open files. Try checking the `ulimit` value on
your machine. Refer to the capacity planning page for more details.
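For example, on Linux you can check the limits and raise them as follows; the target value of 1048576 and the `questdb` user name are illustrative:

```bash
# Check the current soft and hard limits on open files:
ulimit -n     # soft limit
ulimit -Hn    # hard limit

# Raise the soft limit for the current shell session (up to the hard limit):
ulimit -n 1048576

# To persist the change, add entries for the user running QuestDB
# (assumed here to be "questdb") to /etc/security/limits.conf:
#   questdb  soft  nofile  1048576
#   questdb  hard  nofile  1048576
```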
## Why do I see `errno=12` mmap messages in the server logs?

Log messages like this usually mean that the machine has an insufficient
limit on the number of memory map areas a process may have. Try checking and
increasing the `vm.max_map_count` value on your machine. Refer to the
capacity planning page for more details.
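For example, on Linux; the value 1048576 is illustrative, so size it to your table count and workload:

```bash
# Check the current limit on memory map areas per process:
sysctl vm.max_map_count

# Raise it until the next reboot:
sudo sysctl -w vm.max_map_count=1048576

# Persist the change across reboots:
echo "vm.max_map_count=1048576" | sudo tee -a /etc/sysctl.conf
```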
## How do I avoid duplicate rows with identical fields?

We have an open feature request to optionally de-duplicate rows inserted with
identical fields. Until then, you need to modify the data after it is
inserted, using a `GROUP BY` query to identify the duplicates.
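As a sketch of such a query, assuming a hypothetical `trades` table with `ts`, `symbol` and `price` columns, run through the REST `/exec` endpoint of a local instance:

```bash
# Sketch only: hypothetical table and columns. QuestDB groups implicitly by
# the non-aggregated columns, so this lists field combinations that occur
# more than once.
curl -G http://localhost:9000/exec --data-urlencode \
  "query=SELECT * FROM (SELECT ts, symbol, price, count() AS dupes FROM trades) WHERE dupes > 1"
```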