Export data from InfluxDB - influxdb

Is there a way (plugin or tool) to export the data from the database (or the database itself)? I'm looking for this feature as I need to migrate a DB from the present host to another one.

Export data:
sudo service influxdb start (or skip this step if the service is already running)
influxd backup -database grpcdb /opt/data
grpcdb is the name of the DB, and the backup will be saved under the /opt/data directory in this case.
Import Data:
sudo service influxdb stop (the service should not be running)
influxd restore -metadir /var/lib/influxdb/meta /opt/data
influxd restore -database grpcdb -datadir /var/lib/influxdb/data /opt/data
sudo service influxdb start
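Once the service is back up, you can confirm the restored database is visible (a quick check, assuming the CLI connects to the local instance with default settings):
influx -execute "SHOW DATABASES"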

You could dump each table and load it back through the REST interface:
curl "http://hosta:8086/db/dbname/series?u=root&p=root&q=select%20*%20from%20series_name%3B" > series_name.json
curl -XPOST -d @series_name.json "http://hostb:8086/db/dbname/series?u=root&p=root"
Or maybe you want to add a new host to the cluster? It's easy, and you'll get a master-master replica for free. Cluster Setup

If I use curl, I get timeouts, and if I use influxd backup, it's not in a format I can read.
I'm getting fine results like this:
influx -host influxdb.mydomain.com -database primary -format csv -execute "select time,value from \"continuous\" where channel='ch123'" > outtest.csv
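If you need more than one series, a regex measurement selector can dump every measurement in the database the same way (a sketch along the same lines, untested here and potentially slow on large databases):
influx -host influxdb.mydomain.com -database primary -format csv -execute 'select * from /.*/' > fulldump.csv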

As ezotrank says, you can dump each table. There's a missing "-d" in ezotrank's answer though. It should be:
curl "http://hosta:8086/db/dbname/series?u=root&p=root&q=select%20*%20from%20series_name%3B" > series_name.json
curl -XPOST -d @series_name.json "http://hostb:8086/db/dbname/series?u=root&p=root"
(Ezotrank, sorry, I would've just posted a comment directly on your answer, but I don't have enough reputation points to do that yet.)

From 1.5 onwards, the InfluxDB OSS backup utility provides a newer option which is much more convenient:
-portable: Generates backup files in the newer InfluxDB Enterprise-compatible format. Highly recommended for all InfluxDB OSS users.
Export
To back up everything:
influxd backup -portable <path-to-backup>
To back up only the myperf database:
influxd backup -portable -database myperf <path-to-backup>
Import
To restore all databases found within the backup directory:
influxd restore -portable <path-to-backup>
To restore only the myperf database (myperf database must not exist):
influxd restore -portable -db myperf <path-to-backup>
Additional options include specifying a timestamp, shard, etc. See all the other supported options here.
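If the database already exists on the target host, the portable restore can be pointed at a temporary name with -newdb and the data sideloaded from there; a sketch, with myperf_bak being just an illustrative name:
influxd restore -portable -db myperf -newdb myperf_bak <path-to-backup>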

If you want to export in a readable format, the influx_inspect command is preferable.
To export the database with the name HomeData the command is:
sudo influx_inspect export -waldir /var/lib/influxdb/wal -datadir /var/lib/influxdb -out "influx_backup.db" -database HomeData
The parameters for -waldir and -datadir can be found in /etc/influxdb/influxdb.conf.
To import this file again, the command is:
influx -import -path=influx_backup.db
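For larger databases, influx_inspect can also write a gzip-compressed export, which influx reads back with the matching flag; a sketch reusing the same paths as above:
sudo influx_inspect export -waldir /var/lib/influxdb/wal -datadir /var/lib/influxdb -out "influx_backup.db.gz" -database HomeData -compress
influx -import -path=influx_backup.db.gz -compressed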

If you have access to the machine running InfluxDB, I would say use the influx_inspect command. The command is simple and very fast. It will dump your DB in line protocol. You can then import this dump using the influx -import command.

Related

How to restore influxdb from local backup || InfluxDB

I am trying to import a client-provided InfluxDB data backup into my local InfluxDB, but am getting the following error.
ENV: Ubuntu
Influx DB service is already running.
Trying to restore the InfluxDB backup.
First of all, you are using the wrong command to execute the InfluxDB task; it should be influxd instead of influx. Please follow the steps below to restore your database.
Check your data-dir path by executing: influxd config. Copy the data-dir section.
See the database name by executing: SHOW DATABASES
Execute the restore: influxd restore -database {database_name} -datadir {your_data_dir_path} /path/to/your/backup
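A concrete run might look like the following; the data-dir path, database name, and backup path are illustrative, so substitute the values reported by influxd config and SHOW DATABASES:
influxd config            # note the "dir" value in the [data] section
influx -execute "SHOW DATABASES"
influxd restore -database clientdb -datadir /var/lib/influxdb/data /path/to/your/backup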

PG_Restore local backup to docker container using pipe viewer

I was using the following command to restore from a backup:
pv backup.sql.gz | gunzip | pg_restore -d $DBHOST
This was helpful because I could roughly see how far along the restore was.
I recently moved the DB into a Docker container and wanted to be able to run the same restore, but have been struggling to get the command to work. I'm assuming I have to redirect the gunzip output somehow, but haven't had any luck. Any help would be appreciated.
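One way to keep pv's progress bar is to run the pipe on the host and hand the stream to pg_restore inside the container over stdin via docker exec -i; a sketch, assuming the container is named pgdb and a postgres superuser (adjust names to your setup):
pv backup.sql.gz | gunzip | docker exec -i pgdb pg_restore -U postgres -d "$DBHOST"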

Magento, database dump

I am trying to get a DB dump with the command
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -p""' > db-backups/some-dump-name.sql
and I am getting
Got error: 2002: "Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)" when trying to connect
Magento runs on this image. Any ideas what could be wrong? I can provide more details if needed.
Bitnami Engineer here,
You also need to set the hostname of the database when backing up the databases. The Magento container doesn't include a database server; it uses an external one.
You probably specified that using the MARIADB_HOST env variable. If you used the docker-compose.yml file we provide, that hostname is mariadb.
exec mysqldump --all-databases -uroot -h HOSTNAME -p""
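With the hostname added, the original command from the question would become something like this (mariadb being the service name from the provided docker-compose.yml; adjust if yours differs):
docker exec container-name sh -c 'exec mysqldump --all-databases -uroot -h mariadb -p""' > db-backups/some-dump-name.sql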

Changing neo4j conf enviroment variable has no effect

I use Neo4j on Ubuntu. I want to have two graph DBs, one for regular use and one for tests.
I read an article about how to switch between two graph DBs.
I did the steps based on the article:
cp /etc/neo4j/neo4j.conf /etc/neo4j/neo4j_test/neo4j.conf
# change dbms.active_database=graph.db to dbms.active_database=graph_test.db
sudo vim /etc/neo4j/neo4j_test/neo4j.conf
export NEO4J_CONF="/etc/neo4j/neo4j_test"
sudo systemctl restart neo4j
But when I check logs:
sudo journalctl -f -u neo4j
The config is still the default and hasn't changed:
Sep 17 11:18:33 pc2 neo4j[32657]: config: /etc/neo4j
What am I doing wrong? And is there another way to switch between two graph DBs?
I think you did not save it properly using vim; neo4j.conf is a read-only file. You can use this command to save a read-only file: :w !sudo tee %
See also this question: E212: Can't open file for writing
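Alternatively, the change can be made from the shell without fighting vim's permissions, for example with sed run through sudo; a sketch, assuming the dbms.active_database line is present and uncommented in the copied file:
sudo sed -i 's/dbms.active_database=graph.db/dbms.active_database=graph_test.db/' /etc/neo4j/neo4j_test/neo4j.conf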

Store and its lock file has been locked by another process: /var/lib/neo4j/data/databases/graph.db/store_lock

What I did:
neo4j console
(works fine)
Ctrl-C
Upon restarting, I get the message above.
I deleted /var/lib/neo4j/data/databases/graph.db/store_lock
and then I get:
Externally locked: /var/lib/neo4j/data/databases/graph.db/neostore
Is there any way of cleaning the lock (short of reinstalling)?
Killing the Java process and deleting the store_lock worked for me:
Found the lingering process,
ps aux | grep "org.neo4j.server"
killed it,
kill -9 <pid-of-neo4js-java-process>
and deleted
sudo rm /var/lib/neo4j/data/databases/graph.db/store_lock
Allegedly, just killing the lingering process may do the trick but I went ahead and deleted the lock anyway.
You can kill the java process and delete the store_lock file. It doesn't seem to harm the database integrity.
I found this question when hitting the same error message while trying to import CSV data using the neo4j-admin tool.
In my case the problem was that I first launched the neo4j server:
docker run -d --name testneo4j -p7474:7474 -p7687:7687 -v /path/to/neo4j/data:/data -v /path/to/neo4j/logs:/logs -v /path/to/neo4j/import:/var/lib/neo4j/import -v /path/to/neo4j/plugins:/plugins --env NEO4J_AUTH=neo4j/test neo4j:latest
and then tried to launch the import (for the CSV data files, see here):
docker exec -it testneo4j neo4j-admin import --nodes=Movies=import/movies.csv --nodes=Actors=import/actors.csv --relationships=ACTED_IN=import/roles.csv
This leads to the lock error, since the server acquires the database lock and neo4j-admin is an independent tool which needs to acquire the database lock too. Kills, lock-file removals, and sudo didn't work for me.
What helped:
docker run --rm ... neo4j:latest neo4j-admin ... - this performs a one-shot import into an empty database (a fuller version is sketched below). No dead container remains, only the imported data in the external volume. (Note the command fails if the DB is not empty.) The point is that the Docker entrypoint starts the server unless the CMD in the Dockerfile is overridden.
docker run -d ... neo4j:latest - this runs the neo4j server
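Putting the pieces of the two commands above together, the one-shot import would look roughly like this (same volume mounts and CSV files as in the question):
docker run --rm -v /path/to/neo4j/data:/data -v /path/to/neo4j/import:/var/lib/neo4j/import neo4j:latest neo4j-admin import --nodes=Movies=import/movies.csv --nodes=Actors=import/actors.csv --relationships=ACTED_IN=import/roles.csv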
