Case study
Energetech and QuestDB partner for efficient market data management
Energetech uses QuestDB to manage time-series data for commodity prices and forecasts, enabling dynamic pricing and efficient energy distribution.
- High ingestion performance
- Handles bursts of data while ingesting 140GB per day.
- Data deduplication
- Simplifies ingestion pipelines with built-in deduplication on ingestion.
- Efficient querying
- Performs ad-hoc aggregations and 'latest by' queries on billions of rows.
Handling vast energy data
Market data for energy and commodities, managed with precision
Energetech stores two main types of time-series data: market data (prices, order book snapshots) and forecasts. They ingest transaction and order book data from various venues and perform 'latest by' or aggregating queries for downstream applications.

With multiple external forecast providers publishing data several times a day, Energetech requires efficient data deduplication and high ingestion performance to handle short bursts of high-volume data.
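In QuestDB, deduplication of this kind is declared when the table is created, so repeated forecast publications upsert rather than duplicate. A minimal sketch, assuming a hypothetical `forecasts` table (the table and column names are illustrative, not Energetech's actual schema):

```sql
-- Illustrative schema: duplicate rows sharing the same timestamp and
-- meta_id are merged on ingestion instead of being stored twice.
CREATE TABLE forecasts (
    meta_id          SYMBOL,
    published_at_utc TIMESTAMP,
    value            DOUBLE
) TIMESTAMP(published_at_utc) PARTITION BY DAY WAL
DEDUP UPSERT KEYS(published_at_utc, meta_id);
```

With this in place, the ingestion pipeline can simply replay a provider's feed; the database resolves duplicates itself.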
- Market data
- QuestDB enables Energetech to process and aggregate financial market data instantly.
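To illustrate the kind of aggregation this involves, a hypothetical query bucketing raw trades into one-minute OHLC bars using QuestDB's SAMPLE BY (the `trades` table and its columns are assumed for the example, not taken from Energetech's schema):

```sql
-- Illustrative only: aggregate tick-level prices into 1-minute bars.
SELECT
    timestamp,
    symbol,
    first(price) AS open,
    max(price)   AS high,
    min(price)   AS low,
    last(price)  AS close
FROM trades
SAMPLE BY 1m;
```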
QuestDB is a specialized database that excels in its focus area. It’s easy to get started and addresses many pain points with time series data, offering outstanding read and write performance.
Cost-saving data architecture
Energetech's data pipeline
Initially, we stored all data in a mainstream NoSQL database, but it fell short trying to support our aggregations. We also had to add many indexes to the tables, which increased space usage significantly, and trying to deduplicate data on ingestion slowed performance.
```sql
WITH latest_data AS (
    SELECT meta_id, published_at_utc
    FROM /table/
    WHERE meta_id IN /list of 100-500 meta_ids/
      AND published_at_utc < '/cutoff for data/'
    LATEST ON published_at_utc PARTITION BY meta_id
), period_remit AS (
    SELECT meta_id, published_at_utc, event_start_utc, event_end_utc,
           value AS availability
    FROM /table/
    WHERE event_start_utc < '/period_start/'
      AND event_end_utc > '/period_end/'
)
SELECT
    period_remit.meta_id,
    period_remit.published_at_utc,
    period_remit.event_start_utc,
    period_remit.event_end_utc,
    period_remit.availability AS availability
FROM period_remit
JOIN latest_data ON (meta_id, published_at_utc);
```
Industry leading ingestion capabilities
Processing massive data volumes with ease
Energetech's pipeline ingests up to 140GB of data daily, achieving peak ingestion rates of 5.69 million messages per minute. Thanks to QuestDB's powerful deduplication and compression, database growth remains minimal at around 1% per week.
When you combine massive performance with extreme hardware efficiency, the result is significant cost and time savings for your team, both in systems costs and in total maintenance and development costs over time.
High performing teams need the highest performing tools.
- Data ingested daily
- 140GB
- Peak messages per minute
- 5,690,000
- Peak data throughput
- 44MB/s
- Weekly database growth
- +1.00%
Simple queries that QuestDB handled quickly took up to 10 times as long in a Postgres-based time-series database. We also faced ineffective compression and slow insertion speeds.
The next generation has arrived
Upgrade to QuestDB
Hyper ingestion, millisecond queries, and powerful SQL.
Lower bills through peak efficiency.