FAQ

The following document contains common hardware and software configuration issues encountered when running QuestDB, as well as their solutions. If you cannot find the answer to your question, please join our Slack Community and post it there.

Where do I find the log and how do I filter the log messages?

Log files are stored in the log folder under the root directory. Each log message is tagged with a level to assist filtering; check the log page for the available log levels.
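
For example, on Linux you can filter for error-level messages with grep. This is a minimal sketch assuming the default root directory; the exact log file names on your instance may differ:

# keep only error-level (E) lines from the server logs
grep ' E ' ~/.questdb/log/*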

How do I delete a row?

See our guide on modifying data.
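
In short, QuestDB does not support a per-row DELETE statement; the usual approach from that guide is to copy the rows you want to keep into a new table and then swap the tables. A minimal sketch, with hypothetical table and column names:

-- hypothetical names; keep everything except the rows to delete
CREATE TABLE my_table_copy AS (
  SELECT * FROM my_table WHERE sensor_id != 42
) TIMESTAMP(ts) PARTITION BY DAY;

DROP TABLE my_table;
RENAME TABLE my_table_copy TO my_table;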

How do I convert a STRING column to a SYMBOL or vice versa?

The SQL UPDATE keyword can be used to copy a column's content into a new column of the desired data type, effectively converting the column. The same approach can also be used to increase the capacity of an undersized SYMBOL column.

Note: perform a database backup prior to converting data.

The steps are as follows:

  1. Backup your database.
  2. Add a new column to the table and define the desired data type.
  3. Stop data ingestion and increase the SQL query timeout, query.timeout.sec, as UPDATE may take a while to complete. Depending on the size of the column, we recommend increasing the value significantly: the default is 60 seconds, and it may be reasonable to raise it to one hour, e.g. query.timeout.sec=3600 in conf/server.conf. Restart the instance after changing the configuration to activate the change.
  4. Use UPDATE to copy the existing column content to the new column. Now, the column has the correct content with the new data type.
  5. Delete the old column.
  6. Rename the new column accordingly. For example, to change old_col from STRING to SYMBOL for table my_table:
ALTER TABLE my_table ADD COLUMN new_col SYMBOL;
UPDATE my_table SET new_col = old_col;
ALTER TABLE my_table DROP COLUMN old_col;
ALTER TABLE my_table RENAME COLUMN new_col TO old_col;
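
The same steps can raise the capacity of an undersized SYMBOL column: specify the desired capacity when adding the new column (the value below is illustrative):

-- illustrative capacity; size it to the expected number of distinct values
ALTER TABLE my_table ADD COLUMN new_col SYMBOL CAPACITY 1000000;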

Why do I get table busy error messages when inserting data over the PostgreSQL wire protocol?

You may get table busy [reason=insert] or similar errors when running INSERT statements concurrently on the same table. This means that the table is locked by inserts issued from another SQL connection or other client protocols for data import, like InfluxDB Line Protocol over TCP or CSV over HTTP.

To avoid this error, we recommend using WAL tables to allow concurrent ingestion across all interfaces.
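
For example, the WAL keyword on CREATE TABLE enables this (the schema below is illustrative):

-- illustrative schema; the WAL keyword makes this a WAL table
CREATE TABLE trades (
  ts TIMESTAMP,
  symbol SYMBOL,
  price DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY WAL;

Recent QuestDB versions can also convert an existing table with ALTER TABLE my_table SET TYPE WAL; check the documentation for your version.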

Why do I see could not open read-write messages when creating a table or inserting rows?

Log messages may look like the following:

2022-02-01T13:40:11.336011Z I i.q.c.l.t.LineTcpMeasurementScheduler could not create table [tableName=cpu, ex=could not open read-write
...
io.questdb.cairo.CairoException: [24] could not open read-only [file=/root/.questdb/db/cpu/service.k]

The machine may have an insufficient limit on the maximum number of open files. Try checking the ulimit value on your machine. Refer to the capacity planning page for more details.
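
For example, on Linux you can inspect and raise the limit for the current shell before starting QuestDB (the value below is illustrative; see your distribution's documentation for making the change permanent):

ulimit -n           # print the current open-file limit
ulimit -n 1048576   # raise the limit for this shell session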

Why do I see errno=12 mmap messages in the server logs?

Log messages may look like the following:

2022-02-01T13:40:10.636014Z E i.q.c.l.t.LineTcpConnectionContext [8655] could not process line data [table=test_table, msg=could not mmap [size=248, offset=0, fd=1766, memUsed=314809894008, fileLen=8192], errno=12]

The machine may have an insufficient limit on the number of memory map areas a process may have. Try checking and increasing the vm.max_map_count value on your machine. Refer to the capacity planning page for more details.
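
For example, on Linux (the value below is illustrative):

sysctl vm.max_map_count                    # print the current value
sudo sysctl -w vm.max_map_count=1048576    # raise it for the running system
# add vm.max_map_count=1048576 to /etc/sysctl.conf to persist across reboots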

Why do I see async command/event queue buffer overflow messages when dropping partitions?

It could be the case that the DROP PARTITION statement you're trying to run affects a large number of partitions, so the internal queue used by the server cannot fit them. Try increasing the cairo.writer.command.queue.slot.size value. Its default value is 2K, i.e. 2KB, so you may need to set it to a larger value, e.g. 32K.
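
For example, add the following to your server configuration file and restart the instance (conf/server.conf under the root directory is the default location):

cairo.writer.command.queue.slot.size=32K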

How do I avoid duplicate rows with identical fields?

We have an open feature request to optionally de-duplicate rows inserted with identical fields. Until then, you need to modify the data after it is inserted, using a GROUP BY query to identify duplicates.
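
For example, a query along these lines lists the duplicated field combinations (table and column names are hypothetical):

-- rows sharing ts and device_id more than once are duplicates
SELECT * FROM (
  SELECT ts, device_id, count() AS dup_count
  FROM readings
  GROUP BY ts, device_id
) WHERE dup_count > 1;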

Can I query by time?

Yes! When using the WHERE clause to define the time range for a query, the IN keyword allows modifying the range and interval of the search. The range can be tuned to a one-second resolution.

For example, the following query searches for daily records between 9:15 and 16:00 inclusive, starting from Jan 1 2000, for 365 days. The original timestamp, 2000-01-01T09:15, is extended by 405 minutes to cover the range, and this range is repeated daily 365 times:

SELECT timestamp, col1
FROM 'table1'
WHERE timestamp IN '2000-01-01T09:15;405m;1d;365';

My time or timezone is incorrect#

If you are using a PostgreSQL client, such as the Node.js node-postgres client, then you may see a mismatch in your timestamps. The node-postgres documentation states the following on its data types page:

node-postgres will convert instances of JavaScript date objects into the expected input value for your PostgreSQL server. Likewise, when reading a date, timestamp, or timestamptz column value back into JavaScript, node-postgres will parse the value into an instance of a JavaScript Date object.

And then most importantly:

... node-postgres converts DATE and TIMESTAMP columns into the local time of the node process set at process.env.TZ.

Therefore, in your .env, you may need to add:

TZ=UTC

The node-postgres author suggests using TIMESTAMPTZ. When a TIMESTAMPTZ value is stored, PostgreSQL converts the timestamp to UTC. When the value is retrieved, PostgreSQL converts it back from UTC to the time zone set in the client's system. This ensures that the timestamp is correctly interpreted, no matter what time zone the client is in.

On the other hand, TIMESTAMP (timestamp without time zone) does not store any time zone data. It simply stores a date and time. If a client in a different time zone retrieves a TIMESTAMP value, they might interpret it as being in their local time zone, which could lead to incorrect results.

So, if your application needs to handle time data across different time zones, TIMESTAMPTZ is usually the better choice. It ensures that timestamps are correctly interpreted, no matter where your users are located.

Why do I see cannot delete file, will retry in the logs?

If you are using Windows, we recommend disabling Windows Defender. Windows Defender scans each new file that it sees, and a file cannot be deleted while it is being scanned. As a result, QuestDB creates files, Windows Defender scans them, and when QuestDB goes to clean up, it cannot delete the files still being scanned.

How can I fix DatabaseError: unexpected token: CURSOR? This query works in PostgreSQL!

QuestDB does not support scrollable cursors that require explicit creation and management through DECLARE CURSOR and subsequent operations like FETCH. Instead, QuestDB supports non-scrollable, or "forward-only", cursors. This distinction means that while you can iterate over query results sequentially, you cannot navigate backwards or access result positions as you might with scrollable cursors in PostgreSQL.

For more information and tips on workarounds, see the PostgreSQL compatibility section in our Query & SQL overview.


โญ Something missing? Page not helpful? Please suggest an edit on GitHub.