influx write
- influx CLI 2.0.0+
- InfluxDB 2.0.0+
- Updated in CLI v2.0.5

The `influx write` command writes data to InfluxDB via stdin or from a specified file.
To write data to InfluxDB, you must provide the following for each row:
- measurement
- field
- value
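For example, a single line protocol row carries all three in one line (the measurement `m`, tag `host`, field `field1`, and timestamp below are illustrative):

```shell
# Line protocol: <measurement>[,<tag_set>] <field_set> [<timestamp>]
printf 'm,host=host1 field1=1.2 1608315371000000000\n'
```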
`influx write` supports the following input formats:
- Line protocol
- Annotated CSV
- Extended annotated CSV

In annotated CSV, measurements, fields, and values are represented by the `_measurement`, `_field`, and `_value` columns. Their types are determined by CSV annotations. To successfully write annotated CSV to InfluxDB, include all annotation rows.
Flags

| Flag | | Description | Input type | Maps to environment variable |
|:---|:---|:---|:---|:---|
| `-c` | `--active-config` | CLI configuration to use for command | string | |
| `-b` | `--bucket` | Bucket name (mutually exclusive with `--bucket-id`) | string | `INFLUX_BUCKET_NAME` |
| | `--bucket-id` | Bucket ID (mutually exclusive with `--bucket`) | string | `INFLUX_BUCKET_ID` |
| | `--configs-path` | Path to influx CLI configurations (default `~/.influxdbv2/configs`) | string | `INFLUX_CONFIGS_PATH` |
| | `--compression` | Input compression (`none` or `gzip`; default is `none` unless the input file ends with `.gz`) | string | |
| | `--debug` | Output errors to stderr | | |
| | `--encoding` | Character encoding of input (default `UTF-8`) | string | |
| | `--error-file` | Path to a file used for recording rejected row errors | string | |
| `-f` | `--file` | File to import | stringArray | |
| | `--format` | Input format (`lp` or `csv`; default `lp`) | string | |
| | `--header` | Prepend header line to CSV input data | string | |
| `-h` | `--help` | Help for the `write` command | | |
| | `--host` | HTTP address of InfluxDB | string | `INFLUX_HOST` |
| | `--max-line-length` | Maximum number of bytes that can be read for a single line (default `16000000`) | integer | |
| `-o` | `--org` | Organization name (mutually exclusive with `--org-id`) | string | `INFLUX_ORG` |
| | `--org-id` | Organization ID (mutually exclusive with `--org`) | string | `INFLUX_ORG_ID` |
| `-p` | `--precision` | Precision of the timestamps (default `ns`) | string | |
| | `--rate-limit` | Throttle write rate (examples: `5 MB / 5 min` or `1MB/s`) | string | |
| | `--skip-verify` | Skip TLS certificate verification | | `INFLUX_SKIP_VERIFY` |
| | `--skipHeader` | Skip the first *n* rows of input data | integer | |
| | `--skipRowOnError` | Output CSV errors to stderr, but continue processing | | |
| `-t` | `--token` | API token | string | `INFLUX_TOKEN` |
| `-u` | `--url` | URL to import data from | stringArray | |
Authentication credentials
The examples below assume your InfluxDB host, organization, and token are provided by the active influx CLI configuration. If you do not have a CLI configuration set up, use the appropriate flags to provide these required credentials.
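For example, the connection details can be passed explicitly with flags (all values below are placeholders):

```sh
influx write \
  --host http://localhost:8086 \
  --org example-org \
  --token mY5uP3rS3cr3tT0k3n \
  --bucket example-bucket \
  --file path/to/line-protocol.txt
```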
Write line protocol
Write line protocol via stdin
influx write --bucket example-bucket "
m,host=host1 field1=1.2
m,host=host2 field1=2.4
m,host=host1 field2=5i
m,host=host2 field2=3i
"
Write line protocol from a file
influx write \
--bucket example-bucket \
--file path/to/line-protocol.txt
Write line protocol from multiple files
influx write \
--bucket example-bucket \
--file path/to/line-protocol-1.txt \
--file path/to/line-protocol-2.txt
Write line protocol from a URL
influx write \
--bucket example-bucket \
--url https://example.com/line-protocol.txt
Write line protocol from multiple URLs
influx write \
--bucket example-bucket \
--url https://example.com/line-protocol-1.txt \
--url https://example.com/line-protocol-2.txt
Write line protocol from multiple sources
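As in the multi-source CSV example further below, `--file` and `--url` can presumably be combined in a single invocation (paths and URL are illustrative):

```sh
influx write \
  --bucket example-bucket \
  --file path/to/line-protocol-1.txt \
  --url https://example.com/line-protocol-2.txt
```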
Write line protocol from a compressed file
# The influx CLI assumes files with the .gz extension use gzip compression
influx write \
--bucket example-bucket \
--file path/to/line-protocol.txt.gz
# Specify gzip compression for gzipped files without the .gz extension
influx write \
--bucket example-bucket \
--file path/to/line-protocol.txt.comp \
--compression gzip
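A gzipped input file like the ones above can be produced with standard `gzip`, and the compressed copy round-trips back to the original content (file names are illustrative):

```shell
# Create a small line protocol file and a gzipped copy of it
printf 'm,host=host1 field1=1.2\n' > line-protocol.txt
gzip -c line-protocol.txt > line-protocol.txt.gz
# Confirm the compressed copy decompresses to the original content
gunzip -c line-protocol.txt.gz    # prints: m,host=host1 field1=1.2
```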
Write CSV data
Write annotated CSV data via stdin
influx write \
--bucket example-bucket \
--format csv \
"#group,false,false,false,false,true,true
#datatype,string,long,dateTime:RFC3339,double,string,string
#default,_result,,,,,
,result,table,_time,_value,_field,_measurement
,,0,2020-12-18T18:16:11Z,72.7,temp,sensorData
,,0,2020-12-18T18:16:31Z,72.7,temp,sensorData
,,0,2020-12-18T18:16:41Z,72.8,temp,sensorData
,,0,2020-12-18T18:16:51Z,73.1,temp,sensorData
"
Write extended annotated CSV data via stdin
influx write \
  --bucket example-bucket \
  --format csv \
  "#constant measurement,sensorData
#datatype dateTime:RFC3339,double
time,temperature
2020-12-18T18:16:11Z,72.7
2020-12-18T18:16:21Z,73.8
2020-12-18T18:16:31Z,72.7
2020-12-18T18:16:41Z,72.8
2020-12-18T18:16:51Z,73.1
"
Write annotated CSV data from a file
influx write \
--bucket example-bucket \
--file path/to/data.csv
Write annotated CSV data from multiple files
influx write \
--bucket example-bucket \
--file path/to/data-1.csv \
--file path/to/data-2.csv
Write annotated CSV data from a URL
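Mirroring the single-URL line protocol example above, this would presumably be (URL is illustrative):

```sh
influx write \
  --bucket example-bucket \
  --url https://example.com/data.csv
```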
Write annotated CSV data from multiple URLs
influx write \
--bucket example-bucket \
--url https://example.com/data-1.csv \
--url https://example.com/data-2.csv
Write annotated CSV data from multiple sources
influx write \
--bucket example-bucket \
--file path/to/data-1.csv \
--url https://example.com/data-2.csv
Prepend CSV data with annotation headers
influx write \
--bucket example-bucket \
--header "#constant measurement,birds" \
--header "#datatype dateTime:2006-01-02,long,tag" \
--file path/to/data.csv
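The `--header` flags are conceptually equivalent to prepending those annotation lines to the CSV input yourself; a sketch of that equivalence (file names and the data row are illustrative):

```shell
# Write the raw CSV, then manually prepend the same annotation lines
printf '2020-01-02,45,female\n' > data.csv
{ printf '#constant measurement,birds\n'
  printf '#datatype dateTime:2006-01-02,long,tag\n'
  cat data.csv; } > annotated-data.csv
head -n 1 annotated-data.csv    # prints: #constant measurement,birds
```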
Write annotated CSV data from a compressed file
# The influx CLI assumes files with the .gz extension use gzip compression
influx write \
--bucket example-bucket \
--file path/to/data.csv.gz
# Specify gzip compression for gzipped files without the .gz extension
influx write \
--bucket example-bucket \
--file path/to/data.csv.comp \
--compression gzip
Write annotated CSV data using rate limiting
influx write \
  --bucket example-bucket \
  --file path/to/data.csv \
  --rate-limit "5 MB / 5 min"