QuestDB is an open source columnar database that specializes in time series data. It offers category-leading ingestion throughput and fast SQL queries with operational simplicity. QuestDB helps reduce operational costs, overcome ingestion bottlenecks, and greatly simplify overall ingress infrastructure. With official support for a broad array of ingestion protocols such as the InfluxDB Line Protocol and PostgreSQL Wire Protocol, plus third-party tools and language clients, it is quick to get started.

This introduction provides a brief overview of QuestDB and its key features.

Just want to build? Jump to the Quick start guide.

Top QuestDB features#

QuestDB powers cutting-edge use cases around the world.

Developers are most enthusiastic about the following key features:

  1. Massive ingestion handling: With only 4 threads, QuestDB clocks in at just under 1M rows per second. If you are running into ingestion speed and throughput bottlenecks with an existing storage engine or time series database, QuestDB can help. For perspective, we've even seen tremendous throughput on a Raspberry Pi.
  2. Familiar SQL analytics: No obscure domain-specific languages required. Use SQL and query your data using your favourite PostgreSQL-compatible library.
  3. High performance deduplication & out-of-order indexing with near limitless cardinality: Essential when handling massive, bursting data streams. Official support for deduplication and out-of-order data removes significant complexity from time series and event use cases. Lots of unique values? High cardinality will not degrade performance.
  4. Time series SQL extensions: Fast, SIMD-optimized SQL extensions to cruise through querying and analysis. Greatest hits include:
    • SAMPLE BY summarizes data into chunks based on a specified time interval, from a year to a microsecond
    • WHERE IN to compress time ranges into concise intervals
    • LATEST ON for latest values within multiple series within a table
    • ASOF JOIN to associate timestamps between a series based on proximity; no extra indices required
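A couple of the extensions above can be sketched against a hypothetical sensors table (sensorName and tempC columns, with timestamp as the designated timestamp; table and column names are illustrative):

```sql
-- SAMPLE BY: hourly average temperature; IN compresses a time
-- range ("all of January 2023") into one concise interval
SELECT timestamp, avg(tempC)
FROM sensors
WHERE timestamp IN '2023-01'
SAMPLE BY 1h;

-- ASOF JOIN: pair each reading with the most recent calibration
-- at or before it, matched per sensor, with no extra index required
SELECT s.timestamp, s.tempC, c.offsetC
FROM sensors s
ASOF JOIN calibrations c ON (sensorName);
```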

Benefits of QuestDB#

Time series data is seen increasingly in use cases across finance, internet of things, e-commerce, security, blockchain, and many emerging industries. As more and more time bound data is generated by an increasing number of clients, having high performance storage at the receiving end of your servers, devices or queues prevents ingestion bottlenecks, simplifies code and reduces costly infrastructure sprawl.

Performance is the key, but it's much more than performance. Let's look at the life of a single database instance. If it can ingest over a million rows per second using 4 CPU cores & 32GB RAM, then what changes? How is your infrastructure impacted? What about cost? And the complexity of your code?

[Chart: high-cardinality ingestion performance] Benchmark results for QuestDB 7.0, InfluxDB 1.8 and TimescaleDB 2.10

The right storage engine for intensive workloads simplifies your overall "application space" and keeps cost and infrastructure sprawl under control. When using any of the seven official QuestDB client libraries or integrations, you don't need to worry about out-of-order data, duplicates, exactly-once semantics, frequency of ingestion, or many other details you will find in real-time streaming scenarios. It's simplified, hyper-fast data ingestion with tremendous efficiency and value.
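As a sketch of what this looks like in practice (table and column names are illustrative), deduplication is declared per table with upsert keys, so replayed or duplicate rows are merged on arrival:

```sql
CREATE TABLE sensors (
    timestamp TIMESTAMP,
    sensorName SYMBOL,
    tempC DOUBLE
) TIMESTAMP(timestamp) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(timestamp, sensorName);
```

With this in place, re-sending the same (timestamp, sensorName) pair updates the existing row instead of duplicating it, and out-of-order arrivals are reordered by the storage engine.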

When data is stored in time sequence, querying and visualizing it is the next step. Queries fuel your dashboards, exchanges, sensors, rockets, applications, and so on. Integrations based on these queries also pipe data into visualization tools like Grafana. Writing query syntax should not require the additional complexity of a domain-specific language or cumbersome extra steps for connecting to third-party tools.

With QuestDB, it's just SQL:

Navigate time with SQL:

```sql
SELECT timestamp, sensorName, tempC
FROM sensors
LATEST ON timestamp PARTITION BY sensorName;
```

Blast data from multiple sources without concern for cardinality, handle it without making a mess of your table schemas, organize it by time, and query it with simple, cost-effective and accessible SQL. That's what QuestDB offers. But the story is one thing; seeing it for yourself is another.

The best way to see whether QuestDB is right for you is to try it out.

Three flavours of QuestDB#

QuestDB is built to run where you need it.

The right one depends on your team and use case.

Open source#

QuestDB is open source under the Apache 2.0 license. The open source version contains the core product, and for many teams it is ideal both for a quick proof of concept and as a production asset. If you are looking for a strong general purpose columnar database, experiencing an ingestion speed bottleneck or runaway infrastructure costs, and want to improve your event or time series data handling, then the open source version is a great place to start.


QuestDB Enterprise#

QuestDB Enterprise offers everything from open source, plus additional features for running QuestDB at larger scale or greater significance. Features within Enterprise include high availability, role-based access control, TLS on all protocols, data compression, cold storage and priority support.

Typically, when growing to multiple instances or to mission-critical deployments, Enterprise provides an additional layer of official operational tooling with the added benefit of world-class QuestDB support. Enterprise increases the reliability of already solid open source deployments, while providing better value for compute spend than existing engines and methods.

For a breakdown of Enterprise features, see the QuestDB Enterprise page.


QuestDB Cloud#

QuestDB Cloud is the most efficient way to get started with QuestDB. The expert QuestDB team takes care of database operation for you. All QuestDB Cloud deployments run QuestDB Enterprise, meaning that features like compression and high availability are tuned and provided for you.

While many customers, especially those running in highly sensitive contexts such as finance, medicine, rocketry and so on, do prefer to operate storage engines in-house, QuestDB Cloud remains an effective option for those who want a managed storage solution for their high throughput use cases.

Where to next?#

First, the quick start guide will get you running.

After that, the following will weave the right protocols, clients or third-party tools together to build your ideal high performance ingestion pipeline. You'll be inserting data and generating valuable queries in no time.

  • Connect to the database through one of our various endpoints, and learn which protocol is best for your use case
  • Insert data using the InfluxDB Line Protocol, PostgreSQL Wire Protocol, or our HTTP REST API
  • Query data with SQL via the PostgreSQL Wire Protocol, or export it to JSON or CSV via our HTTP REST API
  • Use the Web Console for quick SQL queries, charting, and CSV upload/export
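To make the ingestion bullets concrete, here is a minimal sketch of what a single InfluxDB Line Protocol message looks like on the wire. Table and column names are illustrative, and in practice the official QuestDB clients build and send these lines for you:

```python
def ilp_line(table, symbols, columns, ts_nanos):
    """Build one InfluxDB Line Protocol line:
    table,sym1=v1 col1=v1,col2=v2 timestamp_ns"""
    sym = ",".join(f"{k}={v}" for k, v in symbols.items())
    col = ",".join(f"{k}={v}" for k, v in columns.items())
    return f"{table},{sym} {col} {ts_nanos}"

line = ilp_line(
    "sensors",
    symbols={"sensorName": "kitchen"},
    columns={"tempC": 21.5},
    ts_nanos=1700000000000000000,
)
print(line)  # sensors,sensorName=kitchen tempC=21.5 1700000000000000000
```

Each line names a table, a set of symbol (tag) columns, a set of value columns, and a nanosecond timestamp; the server creates the table and columns on first write if they don't exist.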


We are happy to help with any question you may have.

The team loves a good performance optimization challenge!

Feel free to reach out using the following channels:

โญ Something missing? Page not helpful? Please suggest an edit on GitHub.