InfluxDB time-series database

Monday, Mar 11, 2019 - Posted by Mihaela Pavicic

A time-series database is, simply put, a database optimised for timestamped data – data entries whose main identifying attribute (the “primary key”) is time. The entries, or in this jargon the series, usually arrive at a relatively constant interval (though this is not a requirement) and represent some kind of metric whose data is to be stored, e.g. server metrics, performance data, sensor data, etc. Unlike traditional (object-)relational databases, or some relatively recent flavours of the (in?)famous NoSQL databases, time-series databases are not generic, and usually don’t have strong support for e.g. updating data or custom, query-driven joins. In return, they offer superior support for data aggregation, measuring changes over time, time-based queries that scan over large numbers of records, and similar.

Although not as old as relational databases, time-series databases are not new either; they were previously used mainly in the financial sector and heavily optimised for the realm of financial data. However, with the advent of IoT, and the generally higher computing power of all possible types of sensors and devices, it has become the norm and expectation to store and analyse such large volumes of metric data. These databases do not aim to replace traditional relational databases; rather, they fill a niche – measuring sensor and behavioural data, which covers the majority of use-cases – and aim to provide tools that enable easier monitoring and analysis. While it is most certainly possible to build a schema within a relational database (or a NoSQL one) that would store metric data, plus a layer of logic providing a framework for answering common questions (“how has this metric changed over the past seven days?”), doing so would be much more time consuming and impractical, and would result in drastically poorer performance when compared to something built solely for this purpose. Time-series databases have consequently seen a surge in popularity over the last two years.


InfluxDB is a cross-platform columnar database written in the Go programming language, and heavily optimised during its now five-year lifetime. It started out using traditional technologies, data structures, and algorithms, only to find that those were not satisfactory for the high throughput and high data volume such databases must handle, so heavy optimisations and improvements have been made over the years. At the moment, InfluxDB is offered for free for every kind of use (to be exact, it is covered by the MIT license), although there is an enterprise version available that offers high availability and clustering.

Tags and series

The data in InfluxDB is stored within a particular database, and within a particular measurement. Each database has its own settings and its own memory and disk space, while measurements are akin to tables in classic relational jargon. Each incoming data entry carries the measurement name, a tag set (a set of tag keys and tag values), a field set (a set of field keys and field values), and finally an optional timestamp – if the timestamp is omitted, the time of arrival is recorded instead.
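Concretely, a single incoming entry is expressed in InfluxDB’s line protocol, with the four parts in order: measurement, tag set, field set, timestamp. A sketch (the measurement, tag, and field names are illustrative, borrowed from the player-count example discussed below):

```sql
game_players,game=Zorg\ 17,region=Europe player_count=27i 1535917400000000000
```

Spaces in tag values are escaped with a backslash, and the trailing timestamp here is in nanoseconds; writes can be batched by sending one such line per point.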

The relationship between all the described terms can be seen in the table below.


timestamp    game      region         player_count
1535917400   Zorg 17   Europe         27
1535917460   Zorg 17   Australia      2
1535917520   Vendetta  Asia           160
1535917520   Vendetta  North America  33
1535917580   Zorg 17   Australia      4
1535917640   Zorg 17   Asia           197

The table depicts an imaginary measurement game_players, where once within a minute we have recorded the current player count per game and region. In the table above, game and region are tag keys, and their values are tag values. The player_count is a field key, while its values are field values. The difference is subtle but important – tags are indexed, while fields aren’t, and fields are mainly used to record a value that is later to be analysed, while tags assist in searching and filtering. Searching and filtering on the player_count field, i.e. on fields altogether, is possible, but since fields are not indexed it will severely impede performance in any serious database.
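To make the distinction concrete, both of the queries below are valid InfluxQL against the example measurement, but only the first filters on an indexed tag; the second filters on a field and therefore has to scan the stored values (note that string values are single-quoted, while identifiers are double-quoted):

```sql
-- filtering by a tag: answered from the index
SELECT "player_count" FROM "game_players" WHERE "region" = 'Europe'

-- filtering by a field: requires scanning the stored values
SELECT "player_count" FROM "game_players" WHERE "player_count" > 100
```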

The number of unique tag sets in the example above is five, because that is the number of unique combinations of all defined tag keys and tag values:

tag keys = {game, region}

tag = {tag key, tag value}

tag set = {{{game, Zorg 17}, {region, Europe}}, {{game, Zorg 17}, {region, Australia}}, {{game, Vendetta}, {region, Asia}}, {{game, Vendetta}, {region, North America}}, {{game, Zorg 17}, {region, Asia}}}

The field set is defined in a similar manner:

field keys = {player_count}

field = {field key, field value}

The tag keys and tag values are always strings, as are the field keys, while the field values – which by definition represent the actual measured data – can be strings, integers, floats, or booleans.
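The line protocol makes these types explicit: floats are written bare, integers carry an i suffix, booleans are plain literals, and strings are double-quoted. A made-up weather measurement illustrates all four field types in a single point:

```sql
weather,city=Zagreb temperature=21.5,humidity=64i,raining=false,conditions="clear"
```

The timestamp is omitted here, so the time of arrival would be recorded, as described above.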


The InfluxDB vocabulary also defines retention policies and shard groups. A retention policy is a part of the data structure, defined per database, that specifies how long the data is kept. If the data needs to be stored for only two hours after arrival, a retention policy of two hours should be defined – data older than that is dropped. One database may define many retention policies, and every retention policy consists of one or more shards, where a shard is an actual file on the disk. Shards of a single retention policy naturally have a duration shorter than that of the retention policy itself; e.g. a retention policy that lasts two days might have shards that last one hour – the data is partitioned per hour, and every incoming timestamp goes into its corresponding shard.

When such partitioning is present, one has to account for the queries that will be made; shards combine series data logically, and in order, so if queries only ever span a single hour then a single shard/file will be scanned every time. However, if queries regularly span many hours, it may be prudent to give the shards a longer duration, so that such queries still scan only a single shard, yielding better overall performance.
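As a sketch (the games database name is made up), a two-day retention policy with hour-long shards could be declared as:

```sql
CREATE RETENTION POLICY "two_days" ON "games" DURATION 2d REPLICATION 1 SHARD DURATION 1h
```

If typical queries spanned many hours, the SHARD DURATION could be raised accordingly, per the reasoning above.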


Finally, with these terms in place, a series can be defined; it consists of a measurement, a tag set, and a retention policy.

series = {measurement, tag set, retention policy}

The series cardinality is something to watch out for: the series are loaded into memory at runtime so that entries can be added and scanned quickly, and they will probably consume the majority of RAM on the machine where InfluxDB is running.

Every data entry is also denoted by a timestamp of configurable granularity – nanoseconds by default – which is always in UTC. Ultimately, a point is defined as a unique combination of a timestamp, a series, and a field set:

point = {timestamp, series, field set}

Practically, this means that data duplication is not allowed: two different metric values arriving at the same time(stamp), for the same series (tag set), cannot coexist, because the later incoming entry simply overwrites the earlier one – a point is unique.
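For example, issuing the two writes below (via the CLI’s INSERT shorthand, with names from the earlier example) leaves only one point behind – the second write, sharing the series and timestamp of the first, silently replaces it:

```sql
INSERT game_players,game=Zorg\ 17,region=Europe player_count=27i 1535917400000000000
INSERT game_players,game=Zorg\ 17,region=Europe player_count=30i 1535917400000000000
```

A subsequent SELECT for that timestamp would return player_count = 30.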

Influx query language

The team behind InfluxDB went to great lengths to conform to the existing standardised query language – SQL – and created something resembling SQL, but still not quite there, calling it InfluxQL. InfluxQL, being very similar to SQL in both vocabulary and intent, offers standard commands for inserting, querying, and deleting data, namely INSERT, SELECT, and DELETE, whose behaviour is very similar to their relational counterparts, if not identical. It also features the CREATE command, used to model a database schema – one can create databases, retention policies, and continuous queries (measurements are created implicitly on first write) – so for an SQL Jedi, navigating the schema and its data shouldn’t pose a problem at all.

InfluxQL supports the standard GROUP BY clause, which works largely in the standard way but is extended where needed. InfluxQL also offers a number of aggregation, time-related, and data-analysis functions out of the box, making it a great tool for working with timed metric data.
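The most notable extension is grouping by time intervals; a sketch against the earlier game_players example, averaging the player count per game over hour-long buckets for the past week:

```sql
SELECT mean("player_count")
FROM "game_players"
WHERE time > now() - 7d
GROUP BY time(1h), "game"
```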

Continuous queries

Continuous queries are a rather useful piece of logic offered by the database. The database operator creates one with the CREATE CONTINUOUS QUERY command, defining within it a SELECT query to execute. The general use case is to have the continuous query run at regular intervals (e.g. once every hour, or once per day), read data from one measurement, aggregate and transpose it into a completely different set of data, and transfer that data – still within the same query – into another measurement and/or retention policy.
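A sketch of such a query (all names – database, retention policy, and target measurement – are made up for illustration), down-sampling raw player counts into hourly averages stored in another measurement:

```sql
CREATE CONTINUOUS QUERY "cq_players_hourly" ON "games"
BEGIN
  SELECT mean("player_count") AS "mean_player_count"
  INTO "games"."two_days"."game_players_hourly"
  FROM "game_players"
  GROUP BY time(1h), *
END
```

The GROUP BY time(1h) interval also determines how often the continuous query runs, unless overridden with a RESAMPLE clause.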


InfluxDB is part of a larger ecosystem of components and plugins intended for various kinds of data ingestion, processing, and alerting mechanisms. There are three tools readily available:

  • Telegraf – Contains a number of input and output plugins with customisable configuration, and allows data to be inserted into InfluxDB automatically – the input plugins read data from an arbitrary source (e.g. a Kafka stream) and write it as metric data into InfluxDB. It currently boasts more than two hundred plugins, so there is usually no need to write your own.

  • Kapacitor – While Telegraf glues otherwise incompatible components together to feed data into InfluxDB automatically, Kapacitor adds intelligence, i.e. processing of the data already in the database, and transformation of that data into some other form. Kapacitor also adds nifty features, such as rules that analyse the data and automatically send alerts to a user when it falls outside predefined boundaries.

  • Chronograf – A quite handy component that visualises data from the database, with a number of plugins offering predefined widgets that connect to a predefined monitoring schema (e.g. for monitoring the health and metrics of various Apache components). Chronograf also provides the user with a GUI for querying data and for overall administration.


The area where InfluxDB really excels is its stellar performance – a single instance (equipped with 32 GB of RAM, 16 CPUs, and an SSD, which is the recommended setup) achieves an insertion rate of about 650.000 values per second. The insertion rate goes up to 800.000 values per second in certain tests, but averaging over several test runs yields this lower value.

Likewise, an instance equipped with 16 GB of RAM and 8 CPUs has been measured to achieve an insertion rate of about 300.000 values per second. Note that these values were measured with a series cardinality of about four to five million, which is quite high, and while the instances were not burdened with read or aggregate queries. A higher series cardinality will most definitely require more disk space, and especially more RAM and processing power; even the official documentation places the “unfeasible” limit at somewhere around ten million unique series, beyond which a cluster (the paid, enterprise solution) is needed. Since this fits our usage scenario, we were quite content with the numbers: a single instance achieves the throughput we needed, with the high series cardinality (i.e. index size) imposed on us because we needed quickly searchable data.

Of course, once read and aggregate queries are introduced – our tests ran a high number of aggregate queries (several thousand) anew every ten minutes – the insertion rate dropped to about 200.000 values per second on the weaker instance, and to about 500.000 values per second on the more performant one.