






VictoriaMetrics
VictoriaMetrics is a fast, cost-effective and scalable time-series database. It can be used as long-term remote storage for Prometheus.
It is available as binary releases,
docker images and
source code. Just download VictoriaMetrics and see how to start it.
The cluster version is available here.
Case studies and talks
Prominent features
- Supports the Prometheus querying API, so it can be used as a drop-in replacement for Prometheus in Grafana.
VictoriaMetrics implements the MetricsQL query language, which is inspired by PromQL.
- Supports global query view. Multiple Prometheus instances may write data into VictoriaMetrics. Later this data may be used in a single query.
- High performance and good scalability for both inserts
and selects.
Outperforms InfluxDB and TimescaleDB by up to 20x.
- Uses 10x less RAM than InfluxDB when working with millions of unique time series (aka high cardinality).
- Optimized for time series with high churn rate. Think about prometheus-operator metrics from frequent deployments in Kubernetes.
- High data compression, so up to 70x more data points
may be crammed into limited storage compared to TimescaleDB.
- Optimized for storage with high-latency IO and low IOPS (HDD and network storage in AWS, Google Cloud, Microsoft Azure, etc). See graphs from these benchmarks.
- A single-node VictoriaMetrics may substitute for moderately sized clusters built with competing solutions such as Thanos, M3DB, Cortex, InfluxDB or TimescaleDB.
See vertical scalability benchmarks,
comparing Thanos to VictoriaMetrics cluster
and Remote Write Storage Wars talk
from PromCon 2019.
- Easy operation:
- VictoriaMetrics consists of a single small executable without external dependencies.
- All the configuration is done via explicit command-line flags with reasonable defaults.
- All the data is stored in a single directory pointed to by the -storageDataPath flag.
- Easy and fast backups from instant snapshots
to S3 or GCS with vmbackup / vmrestore.
See this article for more details.
- Storage is protected from corruption on unclean shutdown (i.e. OOM, hardware reset or kill -9) thanks to the storage architecture.
- Supports metrics ingestion and backfilling via multiple protocols: Prometheus remote_write API, Influx line protocol, Graphite plaintext protocol, OpenTSDB telnet put protocol and OpenTSDB HTTP /api/put requests.
- Ideally works with large amounts of time series data from Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various Enterprise workloads.
- Has an open source cluster version.
Operation
How to start VictoriaMetrics
Just start the VictoriaMetrics executable or docker image with the desired command-line flags.
The following command-line flags are used the most:
-storageDataPath - path to the data directory. VictoriaMetrics stores all the data in this directory. The default path is victoria-metrics-data in the current working directory.
-retentionPeriod - retention period in months for the data. Older data is automatically deleted. The default period is 1 month.
-httpListenAddr - TCP address to listen to for http requests. By default, it listens on port 8428 on all network interfaces.
-graphiteListenAddr - TCP and UDP address to listen to for Graphite data. By default, it is disabled.
-opentsdbListenAddr - TCP and UDP address to listen to for OpenTSDB data over the telnet protocol. By default, it is disabled.
-opentsdbHTTPListenAddr - TCP address to listen to for HTTP OpenTSDB data over /api/put. By default, it is disabled.
Pass -help to see all the available flags with their descriptions and default values.
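For example, a single-node instance with a custom data directory and a 12-month retention may be started like this (the binary location and the data path below are just placeholders):
/path/to/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics-data -retentionPeriod=12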
It is recommended to set up monitoring for VictoriaMetrics.
Prometheus setup
Prometheus must be configured with remote_write in order to send data to VictoriaMetrics. Add the following lines to the Prometheus config file (it is usually located at /etc/prometheus/prometheus.yml):
remote_write:
- url: http://<victoriametrics-addr>:8428/api/v1/write
Substitute <victoriametrics-addr>
with the hostname or IP address of VictoriaMetrics.
Then apply the new config via the following command:
kill -HUP `pidof prometheus`
Prometheus writes incoming data to local storage and replicates it to remote storage in parallel.
This means the data remains available in local storage for --storage.tsdb.retention.time
duration
even if remote storage is unavailable.
If you plan to send data to VictoriaMetrics from multiple Prometheus instances, then add the following lines into the global section of the Prometheus config:
global:
  external_labels:
    datacenter: dc-123
This instructs Prometheus to add datacenter=dc-123
label to each time series sent to remote storage.
The label name may be arbitrary - datacenter
is just an example. The label value must be unique
across Prometheus instances, so those time series may be filtered and grouped by this label.
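For instance, since VictoriaMetrics supports the Prometheus querying API, data from a single Prometheus instance may then be selected by this label (the label value below is just the example above):
curl -G 'http://<victoriametrics-addr>:8428/api/v1/query' -d 'query=up{datacenter="dc-123"}'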
For highly loaded Prometheus instances (400k+ samples per second)
the following tuning may be applied:
remote_write:
- url: http://<victoriametrics-addr>:8428/api/v1/write
  queue_config:
    max_samples_per_send: 10000
    capacity: 20000
    max_shards: 30
Using remote write increases memory usage for Prometheus by up to ~25% and depends on the shape of the data. If you experience issues with too high memory consumption, try lowering the max_samples_per_send and capacity params (keep in mind that these two params are tightly connected).
Read more about tuning remote write for Prometheus here.
It is recommended to upgrade Prometheus to v2.12.0 or newer, since previous versions may have issues with remote_write.
Grafana setup
Create a Prometheus datasource in Grafana with the following URL:
http://<victoriametrics-addr>:8428
Substitute <victoriametrics-addr>
with the hostname or IP address of VictoriaMetrics.
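If Grafana datasources are provisioned from files rather than created via the UI, a minimal sketch could look like this (the datasource name is arbitrary):
apiVersion: 1
datasources:
  - name: VictoriaMetrics
    type: prometheus
    access: proxy
    url: http://<victoriametrics-addr>:8428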
Then build graphs with the created datasource using Prometheus query language.
VictoriaMetrics supports native PromQL and extends it with useful features.
How to upgrade VictoriaMetrics?
It is safe to upgrade VictoriaMetrics to new versions unless the release notes say otherwise. It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features.
Perform the following steps during the upgrade:
- Send SIGINT signal to the VictoriaMetrics process in order to gracefully stop it (see the example after these steps).
- Wait until the process stops. This can take a few seconds.
- Start the upgraded VictoriaMetrics.
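For example, assuming the process runs under the victoria-metrics-prod name, the graceful stop from step 1 may be performed with the following command:
kill -INT `pidof victoria-metrics-prod`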
Prometheus doesn't drop data during VictoriaMetrics restart.
See this article for details.
How to apply new config to VictoriaMetrics?
VictoriaMetrics must be restarted in order to apply a new config:
- Send SIGINT signal to the VictoriaMetrics process in order to gracefully stop it.
- Wait until the process stops. This can take a few seconds.
- Start VictoriaMetrics with the new config.
Prometheus doesn't drop data during VictoriaMetrics restart.
See this article for details.
How to send data from InfluxDB-compatible agents such as Telegraf?
Just use the http://<victoriametrics-addr>:8428 url instead of the InfluxDB url in the agents' configs.
For instance, put the following line into the Telegraf config, so it sends data to VictoriaMetrics instead of InfluxDB:
urls = ["http://<victoriametrics-addr>:8428"]
Do not forget to substitute <victoriametrics-addr> with the real address where VictoriaMetrics runs.
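This urls setting lives in the influxdb output section of the Telegraf config; a minimal sketch of that section, assuming the stock influxdb output plugin is used:
[[outputs.influxdb]]
  urls = ["http://<victoriametrics-addr>:8428"]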
VictoriaMetrics maps Influx data using the following rules:
- The db query arg is mapped into the db label value unless the db tag exists in the Influx line.
- Field names are mapped to time series names prefixed with the {measurement}{separator} value, where {separator} equals _ by default. It can be changed with the -influxMeasurementFieldSeparator command-line flag. See also the -influxSkipSingleField command-line flag. If {measurement} is empty, then time series names correspond to field names.
- Field values are mapped to time series values.
- Tags are mapped to Prometheus labels as-is.
For example, the following Influx line:
foo,tag1=value1,tag2=value2 field1=12,field2=40
is converted into the following Prometheus data points:
foo_field1{tag1="value1", tag2="value2"} 12
foo_field2{tag1="value1", tag2="value2"} 40
Example for writing data with the Influx line protocol to local VictoriaMetrics using curl:
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
An arbitrary number of lines delimited by '\n' may be sent in a single request.
After that the data may be read via /api/v1/export endpoint:
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
The /api/v1/export
endpoint should return the following response:
{"metric":{"__name__":"measurement_field1","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560272508147]}
{"metric":{"__name__":"measurement_field2","tag1":"value1","tag2":"value2"},"values":[1.23],"timestamps":[1560272508147]}
Note that the Influx line protocol expects timestamps in nanoseconds by default, while VictoriaMetrics stores them with millisecond precision.
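For example, the same line may also be written with an explicit nanosecond timestamp (a sketch; the nanosecond value below corresponds to the millisecond timestamps shown in the export output above):
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23 1560272508147000000' -X POST 'http://localhost:8428/write'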
How to send data from Graphite-compatible agents such as StatsD?
- Enable the Graphite receiver in VictoriaMetrics by setting the -graphiteListenAddr command-line flag. For instance, the following command will enable the Graphite receiver in VictoriaMetrics on TCP and UDP port 2003:
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
- Use the configured address in Graphite-compatible agents. For instance, set graphiteHost to the VictoriaMetrics host in the StatsD configs.
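A minimal sketch of a StatsD config pointing at VictoriaMetrics, assuming the stock graphite backend is used (the Graphite port matches the -graphiteListenAddr example above, the listen port is the StatsD default):
{
  graphiteHost: "<victoriametrics-addr>",
  graphitePort: 2003,
  port: 8125,
  backends: ["./backends/graphite"]
}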
Example for writing data with the Graphite plaintext protocol to local VictoriaMetrics using nc:
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`"