How to transfer an influx database from one host to another? - influxdb

I have an existing influx database on one of my raspberry pis, and I want to transfer it to another pi that is taking over.
The information I did find wasn't helpful: it had one command to export (using influxd backup), which created a series of files in a directory, but the import command I tried was expecting a single file, so it didn't work.
Some reading seemed to show that there are a couple of methods, and some referred to the web GUI (which I cannot get to work). Is there a pointer to the latest and best way to do this now? Preferably from the command line.
Thanks,
-Steve
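
For reference, a minimal command-line sketch of the portable backup/restore flow, assuming InfluxDB 1.5 or later on both Pis; the database name, paths and hostname below are placeholders:

    # On the old Pi: create a portable backup of one database
    # (the backup is a directory of several files, which is expected)
    influxd backup -portable -database mydb /tmp/influx_backup

    # Copy the whole backup directory to the new Pi
    scp -r /tmp/influx_backup pi@new-pi:/tmp/influx_backup

    # On the new Pi: restore from the backup directory (not a single file)
    influxd restore -portable /tmp/influx_backup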

Related

How can I move my appwrite image alongside all the projects to a separate computer?

I am new to this whole containerization and backend-as-a-service technology. But I wanted to give Appwrite a shot because it seemed very easy and well suited for a small project I am about to build. The only problem is I do not know that much about Docker, and I am a bit unsure if and how I will be able to move the Appwrite image instance that is running locally, with all the changes that I have made to it (i.e. created projects, existing db documents, functions etc.), to a production server or any other computer. How might I be able to do this? Thanks
If you're looking to move the configuration for your project AND the data, the best thing to do would be to:
Backup your project
Move the backups and the appwrite folder to the new location
Start Appwrite
Restore the backup
If you only need to migrate the schema for your collections, you can use the Appwrite CLI to create an appwrite.json file of your project and then deploy it to another instance. The CLI can also be used to manage your Appwrite Functions.
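
For the backup-and-move steps above, a rough sketch assuming a standard Docker Compose based Appwrite install; the folder layout, volume name and hostname are assumptions, so check yours with docker volume ls:

    # On the old machine: stop the stack so files and volumes are consistent
    cd appwrite && docker compose stop

    # Archive the appwrite folder (docker-compose.yml, .env) ...
    tar czf ../appwrite-folder.tar.gz -C .. appwrite

    # ... and the data volumes, e.g. the MariaDB volume, via a throwaway container
    docker run --rm -v appwrite_appwrite-mariadb:/data -v "$(pwd)":/backup \
      alpine tar czf /backup/mariadb-volume.tar.gz -C /data .

    # Copy both archives to the new machine, extract the folder there,
    # restore the volumes the same way in reverse, then: docker compose up -d
    scp ../appwrite-folder.tar.gz mariadb-volume.tar.gz user@new-host:~/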
References:
YouTube Video on Backing Up and Restoring
Docs on Backups
Docs on Appwrite CLI

How to Dockerize multiple scripts that share requirements.txt and need frequent update

I'm new to Docker so I want to find best practices for my specific problem.
PROBLEM:
I have 6 python web-scraping scripts that run on the same libraries (same requirements.txt).
My scripts need frequent updating (a few times per week).
Also, my scripts read from and write to Excel files, and I need to be able to update those Excel files from time to time.
SOLUTIONS?
Do I really need 6 images and 6 containers even though my containers will have the same libraries? I find it time-consuming to delete the container and image every time I update my code.
For accessing my Excel files, I read about VOLUMES and I intend to implement them. Is that a good solution?
Do I really need 6 images and 6 containers even though my containers will have the same libraries?
It depends on technical possibility and personal preference. If you find a good, maintainable way to run all scripts in one Docker container, there's no reason you cannot do it. You could easily use a cron-like solution such as this image.
There are advantages to keeping Docker images single-purpose, though. One of them is clear isolation. If one of your scripts fails to run, you'll have one failing container only and five others that still run successfully. Plus you have full transparency over what exactly fails where.
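
To make that trade-off concrete, a rough sketch (image name and script paths are made up) of building one image from the shared requirements.txt and running each script as its own container from that single image:

    # Build one image containing the shared dependencies and all six scripts
    docker build -t scrapers:latest .

    # Run each script as its own container from the same image:
    # one image, six isolated containers
    for script in site1 site2 site3 site4 site5 site6; do
      docker run -d --name "scraper-$script" scrapers:latest python "scripts/$script.py"
    done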
I find it time-consuming to delete the container and image every time I update my code.
I would propose using a CI pipeline for things like this. The pipeline would automatically build the images on a push, publish them to a registry, and recreate the containers/services on your server.
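
In essence such a pipeline just automates something like the following, shown here with placeholder registry, image and container names:

    # Build and publish the updated image
    docker build -t registry.example.com/scrapers:latest .
    docker push registry.example.com/scrapers:latest

    # On the server: pull the new image and recreate a container
    docker pull registry.example.com/scrapers:latest
    docker rm -f scraper-site1
    docker run -d --name scraper-site1 registry.example.com/scrapers:latest python scripts/site1.py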
For accessing my Excel files, I read about VOLUMES and I intend to implement them. Is that a good solution?
Yes, that's what volumes were made for: Accessing and storing data that isn't part of your image.
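
A minimal sketch of such a bind-mounted volume, with made-up host and container paths:

    # Mount a host folder with the Excel files into the container;
    # changes on the host are visible inside the container and vice versa
    docker run -d --name scraper-site1 \
      -v /home/user/excel-data:/app/data \
      scrapers:latest python scripts/site1.py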

Way to export neo4j graph to rebuild the database

I'm working with the neo4j graph database these days for a project, and as a precaution I need to find out whether there is a way to export the graph every time I build or change it, so that if an accidental deletion occurs I can rebuild the graph. For example, in MySQL we can export the database into a SQL script and rebuild the database by running it. What I'm asking is: is there a way in neo4j to do the same thing?
PS: I use an online sandbox provided by graphenedb.com, not one installed locally on my computer.
You could use the export feature. That export file can be used within GrapheneDB to restore it into any other database or to take your data elsewhere.
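
For completeness, if the database were a self-hosted Neo4j install rather than a GrapheneDB sandbox, the closest analogue to the MySQL dump-and-replay workflow is roughly this (Neo4j 3.4/4.x syntax; paths and database name are placeholders):

    # Stop Neo4j, then dump the database to a single archive file
    neo4j stop
    neo4j-admin dump --database=neo4j --to=/backups/graph.dump

    # Later, rebuild the database from that archive (Neo4j must still be stopped)
    neo4j-admin load --from=/backups/graph.dump --database=neo4j --force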

How to migrate my local neo4j dataset to the grapheneDB instance

I am currently working on a project and it's time for me to host both the application and my graph database. I have chosen Heroku, and I have been able to deploy my application and add an add-on (GrapheneDB). Now I would like to migrate my local dataset to my online database. I have been searching for two days now.
Every time I try restoring the database, I get this error:
To quote from GrapheneDB's troubleshooting section for importing:
When a restore process fails, it’s usually due to one of the following reasons:
The store files were copied while Neo4j is still running: Make sure Neo4j is stopped.
The store files correspond to a newer version of Neo4j than the one on GrapheneDB: Make sure you restore to the same version or higher.
The compressed file is not a supported format: Make sure you use one of our supported formats, which include zip, tar, cpio, gz, bz2 and xz.
There are store files missing within the compressed file: Make sure the archive contains the full graph.db directory and all files inside (use the recursive option when creating the archive).

TF400018: Local version table locked

Got the error below in TFS and VS 2012 RC; does anyone know of a fix? It doesn't seem to exist on the MS website.
TF400018: The local version table for the local workspace COMPUTERNAME;MYNAME could not be opened. The process cannot access the file because it is being used by another process.
Any suggestions welcomed.
We experienced this one as well. Migrating to the RTM makes this happen a lot less, but it can still happen a lot.
When using local workspaces (a new feature in VS 2012), a local file-based database is created to administer the changes you make locally. When you change a source file, this file-based database needs to be updated. If this update conflicts with the normal update task which routinely checks for changes, you get this error. The cause of this issue is usually that you are using the local workspace for more items than it was intended for, or that your disk I/O is too slow.
Workarounds for this are either:
Replace your disk with an SSD. Having better I/O makes this issue happen a lot less.
Switch back to server-based workspaces (which handle this better).
Use the TFS-Git connector and use Git for offline support.
Split your workspace mapping into portions so they contain fewer items.
Please delete the files under the %Temp% folder and open the project in "Run as Administrator" mode. It works for me.
Regards,
Kamaraj
