The objective of this benchmark is to assess the performance of these two databases under conditions mirroring real-world scenarios. Our approach is to run both databases on identical hardware, ingest the same dataset, and run the same set of queries. To that end, we opted for the NYC Taxi Dataset.
The NYC Taxi Dataset contains over 1 billion taxi and Uber trips, all originating in New York City since 2009.
The same four aggregation queries were run against both databases; they are listed in the benchmark steps below.
SigLens is a column-oriented database built from scratch for observability data. It offers dynamic compression on each column with zero configuration required.
SigLens uses columnar micro-indices that are 1/100th the size of a traditional database index, which reduces the number of machines needed. Its AgileAggregationTree algorithm lets aggregation queries run at sub-second speeds.
Set up the server, mount the storage, and configure the AWS S3 CLI by following these steps:
Step 1: Let's find out if the NVMe storage device is present on this server. SSH into your server and run lsblk. You should see something like the following, with the nvme1n1 device showing 3.4 TB of storage.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 21.3M 1 loop /snap/amazon-ssm-agent/7529
loop1 7:1 0 49.1M 1 loop /snap/core18/2794
loop2 7:2 0 59.3M 1 loop /snap/core20/2019
loop3 7:3 0 109.6M 1 loop /snap/lxd/24326
loop4 7:4 0 35.5M 1 loop /snap/snapd/20102
nvme0n1 259:0 0 8G 0 disk
├─nvme0n1p1 259:2 0 7.9G 0 part /
└─nvme0n1p15 259:3 0 99M 0 part /boot/efi
nvme1n1 259:1 0 3.4T 0 disk
Step 2: Let’s mount the storage device and create directories.
sudo mkfs.xfs /dev/nvme1n1
sudo mkdir /mnt/nvme1n1
sudo mount /dev/nvme1n1 /mnt/nvme1n1
Step 3: Verify Storage
You can check that it's mounted by running df -h and you should see something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.6G 1.5G 6.2G 20% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 6.2G 948K 6.2G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/nvme0n1p15 98M 6.3M 92M 7% /boot/efi
tmpfs 3.1G 4.0K 3.1G 1% /run/user/1000
/dev/nvme1n1 3.5T 25G 3.4T 1% /mnt/nvme1n1
Step 4: Update Permissions
sudo chmod 777 /mnt/nvme1n1
Step 5: Configure AWS CLI
Note: This AWS configuration step is only needed if you are using a private S3 bucket; if you use our public S3 bucket, you can skip it.
sudo apt-get install awscli -y
aws configure
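If you are using our public bucket, you can sanity-check that it is reachable before moving on. This is just an illustrative check: the --no-sign-request flag lets the AWS CLI list a public bucket without configured credentials.
# List the public benchmark bucket without credentials
aws s3 ls s3://siglens-benchmark-datasets/nyc-taxi-benchmark-data/ --no-sign-request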
Note: You can skip this step and use our S3 bucket, which already has the dataset in the correct format. If you want to prepare the dataset yourself, follow these steps; otherwise, jump directly to the benchmark step and use our S3 bucket.
mkdir -p /mnt/nvme1n1/data
cd /mnt/nvme1n1/data
git clone https://github.com/siglens/siglens.git
python3 -m venv taxis
source taxis/bin/activate
pip install -r siglens/tools/nyc-taxi-benchmark/requirements.txt
python siglens/tools/nyc-taxi-benchmark/parquet_to_tsv.py . .
for year in {2011..2017}; do
  for month in {01..12}; do
    basefile="yellow_tripdata_$year-$month"
    go run siglens/tools/sigclient/cmd/utils/converter.go --input "$basefile.tsv" --output "$basefile.json" &
  done
done
wait
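Once the conversion jobs finish, it is worth a quick sanity check that one TSV and one JSON file exist per month; for 2011 through 2017 that is 84 files of each. A minimal check, assuming you are still in /mnt/nvme1n1/data:
# Expect 84 of each (7 years x 12 months)
ls yellow_tripdata_*.tsv | wc -l
ls yellow_tripdata_*.json | wc -l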
# Prepare to install ClickHouse
sudo apt-get install -y apt-transport-https ca-certificates dirmngr
GNUPGHOME=$(mktemp -d)
sudo GNUPGHOME="$GNUPGHOME" gpg --no-default-keyring \
  --keyring /usr/share/keyrings/clickhouse-keyring.gpg \
  --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 8919F6BD2B48D754
sudo rm -r "$GNUPGHOME"
sudo chmod +r /usr/share/keyrings/clickhouse-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb stable main" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
# Install ClickHouse server and client
sudo apt-get install -y clickhouse-server clickhouse-client
sudo service clickhouse-server start
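Before opening an interactive session, you can confirm the server is accepting queries; this assumes the default local setup with no password set for the default user.
# Quick check that the ClickHouse server is up and responding
clickhouse-client --query "SELECT version()"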
clickhouse-client
CREATE TABLE trips (
VendorID Int32,
tpep_pickup_datetime DateTime,
tpep_dropoff_datetime DateTime,
passenger_count Int32,
trip_distance Float32,
RatecodeID Int32,
store_and_fwd_flag FixedString(1),
PULocationID Int32,
DOLocationID Int32,
payment_type FixedString(3),
fare_amount Float32,
extra Float32,
mta_tax Float32,
tip_amount Float32,
tolls_amount Float32,
improvement_surcharge Float32,
total_amount Float32,
congestion_surcharge Float32,
airport_fee Float32)
ENGINE = MergeTree()
ORDER BY (tpep_pickup_datetime)
SETTINGS index_granularity = 8192;
INSERT INTO trips
SELECT
VendorID,
tpep_pickup_datetime,
tpep_dropoff_datetime,
passenger_count,
trip_distance,
RatecodeID,
store_and_fwd_flag,
PULocationID,
DOLocationID,
payment_type,
fare_amount,
extra,
mta_tax,
tip_amount,
tolls_amount,
improvement_surcharge,
total_amount,
congestion_surcharge,
airport_fee
FROM s3(
's3://siglens-benchmark-datasets/nyc-taxi-benchmark-data/tsv/yellow_tripdata_{2011..2017}-{01..12}.tsv.gz',
'your_aws_access_key_id',
'your_aws_secret_access_key',
'TabSeparatedWithNames'
);
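The load takes a while. Once the INSERT completes, a row count and the on-disk size of the table are quick ways to confirm everything landed; both commands below use standard ClickHouse system tables and functions.
# Confirm the load and check how much disk the table uses
clickhouse-client --query "SELECT count(*) FROM trips"
clickhouse-client --query "SELECT formatReadableSize(sum(bytes_on_disk)) FROM system.parts WHERE table = 'trips' AND active"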
-- Query 1
SELECT airport_fee, count(*) FROM trips GROUP BY airport_fee;
-- Query 2
SELECT passenger_count, avg(total_amount) FROM trips GROUP BY passenger_count;
-- Query 3
SELECT passenger_count, PULocationID, count(*) FROM trips GROUP BY passenger_count, PULocationID;
-- Query 4
SELECT passenger_count, PULocationID, trip_distance, count(*)
FROM trips
GROUP BY passenger_count, PULocationID, trip_distance;
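Each query was run five times. If you want to script the runs rather than paste them into the client, one rough sketch is to use clickhouse-client's --time flag, which prints the query execution time to stderr; note this times each run individually and may not match exactly how the published numbers were collected.
# Run each benchmark query five times and print its execution time
queries=(
  "SELECT airport_fee, count(*) FROM trips GROUP BY airport_fee"
  "SELECT passenger_count, avg(total_amount) FROM trips GROUP BY passenger_count"
  "SELECT passenger_count, PULocationID, count(*) FROM trips GROUP BY passenger_count, PULocationID"
  "SELECT passenger_count, PULocationID, trip_distance, count(*) FROM trips GROUP BY passenger_count, PULocationID, trip_distance"
)
for q in "${queries[@]}"; do
  echo "$q"
  for try in 1 2 3 4 5; do
    clickhouse-client --time --query "$q" > /dev/null
  done
done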
| Query | Try 1 | Try 2 | Try 3 | Try 4 | Try 5 | Lowest |
|---|---|---|---|---|---|---|
| Q1 | 357 ms | 346 ms | 349 ms | 417 ms | 358 ms | 346 ms |
| Q2 | 2,642 ms | 2,389 ms | 2,427 ms | 2,597 ms | 2,349 ms | 2,349 ms |
| Q3 | 2,675 ms | 2,108 ms | 2,112 ms | 2,294 ms | 2,098 ms | 2,098 ms |
| Q4 | 8,240 ms | 7,850 ms | 8,211 ms | 7,850 ms | 7,807 ms | 7,807 ms |
sudo apt update
sudo apt install golang -y
git clone https://github.com/siglens/siglens.git
cd siglens
## Start SigLens
sudo go run cmd/siglens/main.go --config server.yaml
INFO[2023-12-06 18:10:38] Extracting config from configFile: server.yaml
INFO[2023-12-06 18:10:38] Defaulting to 2160hrs (90 days) of retention...
INFO[2023-12-06 18:10:38] ----- Siglens server type SingleNode starting up -----
INFO[2023-12-06 18:10:38] ----- Siglens Ingestion server starting on 0.0.0.0:8081 -----
INFO[2023-12-06 18:10:38] ----- Siglens Query server starting on 0.0.0.0:5122 -----
INFO[2023-12-06 18:10:38] ----- Siglens UI starting on 0.0.0.0:5122 -----
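Before registering aggregations, you can check that the query/UI port from the startup logs is reachable. This is just a basic HTTP reachability check against port 5122, not a documented health endpoint.
# Expect an HTTP status code once the server on port 5122 is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5122/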
curl -X POST -d '{
"tableName": "trips",
"groupByColumns": ["airport_fee", "passenger_count", "PULocationID", "trip_distance"],
"measureColumns": ["total_amount"]
}' http://localhost:5122/api/pqs/aggs
echo ""
{"message":"All OK","status":200}
cd /mnt/nvme1n1/siglens/tools/sigclient
mkdir dataset
for year in {2011..2017}; do
  for month in {01..12}; do
    {
      basefile="yellow_tripdata_$year-$month"
      aws s3 cp s3://siglens-benchmark-datasets/nyc-taxi-benchmark-data/json/$basefile.json.gz dataset/
      gunzip dataset/$basefile.json.gz
      python3 ../nyc-taxi-benchmark/ingester.py dataset/$basefile.json
    } &
  done
  wait
done
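After ingestion completes, a quick count through the same search API used for the benchmark queries is a reasonable sanity check that records made it in. This is only a sketch; if a plain count(*) is not supported, any of the GROUP BY queries below serves the same purpose.
curl -X POST -d '{
  "searchText": "SELECT count(*) FROM trips",
  "indexName": "trips",
  "startEpoch": "now-24h",
  "endEpoch": "now",
  "queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool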
curl -X POST -d '{
"searchText": "SELECT airport_fee, count(*) FROM trips GROUP BY airport_fee",
"indexName": "trips",
"startEpoch": "now-24h",
"endEpoch": "now",
"queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool
curl -X POST -d '{
"searchText": "SELECT passenger_count, avg(total_amount) FROM trips GROUP BY passenger_count",
"indexName": "trips",
"startEpoch": "now-24h",
"endEpoch": "now",
"queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool
curl -X POST -d '{
"searchText": "SELECT passenger_count, PULocationID, count(*) FROM trips GROUP BY passenger_count, PULocationID",
"indexName": "trips",
"startEpoch": "now-24h",
"endEpoch": "now",
"queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool
curl -X POST -d '{
"searchText": "SELECT passenger_count, PULocationID, trip_distance, count(*) FROM trips GROUP BY passenger_count, PULocationID, trip_distance",
"indexName": "trips",
"startEpoch": "now-24h",
"endEpoch": "now",
"queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool
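If you want a quick wall-clock number for each request rather than reading it out of the response, curl can report the total request time. This measures end-to-end HTTP latency, so it will run slightly higher than the engine's own query time; shown here for Query 1 only.
curl -s -o /dev/null -w "%{time_total}s\n" -X POST -d '{
  "searchText": "SELECT airport_fee, count(*) FROM trips GROUP BY airport_fee",
  "indexName": "trips",
  "startEpoch": "now-24h",
  "endEpoch": "now",
  "queryLanguage": "SQL"
}' http://localhost:5122/api/search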
| Query | Try 1 | Try 2 | Try 3 | Try 4 | Try 5 | Lowest |
|---|---|---|---|---|---|---|
| Q1 | 60 ms | 72 ms | 63 ms | 41 ms | 42 ms | 41 ms |
| Q2 | 43 ms | 43 ms | 44 ms | 43 ms | 43 ms | 43 ms |
| Q3 | 116 ms | 304 ms | 442 ms | 509 ms | 471 ms | 116 ms |
| Q4 | 9,815 ms | 9,229 ms | 10,316 ms | 9,093 ms | 9,092 ms | 9,092 ms |
| Query Type | ClickHouse | SigLens | Comparison |
|---|---|---|---|
| Q1 | 346 ms | 41 ms | SigLens 8x faster than ClickHouse |
| Q2 | 2,349 ms | 43 ms | SigLens 54x faster than ClickHouse |
| Q3 | 2,098 ms | 116 ms | SigLens 18x faster than ClickHouse |
| Q4 | 7,807 ms | 9,092 ms | ClickHouse 1.2x faster than SigLens |