Move docker files to new installation - docker

Recently my Raspberry Pi 3B+ stopped booting correctly. I decided to buy a new RPi 4 and want to transfer a Docker container (created with docker-compose) running Teslamate (a self-hosted data logger for Tesla cars).
I have copied all of the files under /var/lib/docker from the old SD card, but I don't know how to recreate a container that uses all of the previous Teslamate data.
Additional information:
Teslamate is written in Elixir
Data is stored in a Postgres database
Visualization and data analysis with Grafana
Vehicle data is published to a local MQTT Broker
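The kind of thing I was hoping to be able to do is roughly the following (volume names and mount points below are guesses based on the stock teslamate docker-compose.yml; the actual names depend on the compose project name):

    # On the new Pi, bring the stack up once so the named volumes exist, then stop it
    docker compose up -d && docker compose down

    # Old SD card mounted at /mnt/oldsd: the named volumes live under
    # /mnt/oldsd/var/lib/docker/volumes/<project>_<volume>/_data
    ls /mnt/oldsd/var/lib/docker/volumes/

    # Copy the Postgres (and Grafana) volume contents across, matching the names seen above
    sudo rsync -a /mnt/oldsd/var/lib/docker/volumes/teslamate_teslamate-db/_data/ \
                  /var/lib/docker/volumes/teslamate_teslamate-db/_data/
    sudo rsync -a /mnt/oldsd/var/lib/docker/volumes/teslamate_teslamate-grafana-data/_data/ \
                  /var/lib/docker/volumes/teslamate_teslamate-grafana-data/_data/

    docker compose up -d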
Any clue?

Related

How to preserve docker cache on disconnected systems?

Synopsis: A remote instance gets connected to the Internet via satellite modem when a technician visits the cabin. The technician sets up the application stack via docker compose and leaves the location. The location has no Internet connection and periodically loses electricity (once every few days).
The application stack is typical, e.g. mysql + nodejs, and it is used by "polar bears", meaning nobody; it is a monitoring app.
How can I ensure that the docker images will be persisted for an undefined amount of time and that the compose stack survives endless reboots?
Unfortunately, there is no really easy solution.
But with a little bit of yq magic to parse docker-compose.yaml and the docker save command, it is possible to store the images locally in a specific location.
Then we can add a startup script that imports these images into the local Docker cache using docker load.
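A hedged sketch of that approach (assuming yq v4 syntax, that every service in the compose file has an image: entry, and using /opt/docker-images as an arbitrary local location):

    # One-time (or whenever images change), while the satellite link is up:
    mkdir -p /opt/docker-images
    for img in $(yq '.services[].image' docker-compose.yaml); do
        docker save "$img" -o "/opt/docker-images/$(echo "$img" | tr '/:' '__').tar"
    done

    # Startup script (systemd unit or @reboot cron job): reload the images into the
    # local cache before bringing the stack up
    for f in /opt/docker-images/*.tar; do
        docker load -i "$f"
    done
    docker compose up -d

With restart: unless-stopped (or always) set on each service, the stack itself will also come back after the power cuts.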

How to migrate Nextcloud Docker to a new machine

I have a Nextcloud installation on a server that was installed using docker-compose. This installation utilizes a Nextcloud docker image and a separate MySQL (8.0) docker image for database access. The data and configuration files are placed in external volumes specified in the docker-compose.yml file.
I have recently put together a new machine that has more memory, a faster CPU, and (most importantly) much more disk space. I would like to migrate my current installation to the new machine.
The actual installation is simple enough: I can simply copy my docker-compose.yml file to the new machine and run it. The problem is with the data and the (somewhat unique) configuration that I have. I would like to get those onto the new machine.
Migrating a dockerized Nextcloud installation raises different issues from migrating a bare-metal or VM installation. For one thing, there is no clear way to place the installation into maintenance mode; you are working with two containers (effectively, this is like coordinating two different machines); and many of the steps described for migrating a bare-metal installation will not work reliably for a containerized installation (yes, one can go into the container to run some of the commands required, but my attempts to do this resulted in screwed-up migrations).
Doing Google searches, I am seeing plenty of articles and instructions on how to migrate bare-metal Nextcloud installations from one machine to another, and how to migrate bare-metal (and virtual machine) installations to Docker. The procedures are pretty complex and involve placing the installation into maintenance mode and performing various backups and restores. Unfortunately, while I have seen a few people asking about how to migrate dockerized Nextcloud installations, there are no clear instructions on how to do this (at least, none that actually work!). Even the Nextcloud site does not discuss this!
Has anyone successfully migrated a dockerized Nextcloud installation from one machine to another? If so, how exactly was this done?
I was just able to do this myself, although I'm migrating my nextcloud install off my primary home server to a slower NAS-ish box I salvaged together after a move.
The main issue I ran into was file/dir ownership when moving from one machine to another. A secondary one was ensuring trusted domains were set correctly in config.php.
I'm sure it'd be better to use rsync to copy/move files from machine to machine and keep ownership intact, but I used scp and changed ownership manually. Your nextcloud_data container needs the www-data user to own the dir you have mapped to /var/www/html, and the nextcloud_db container (I use mariadb here, YMMV) needs the systemd-coredump user to own the dir you have mapped to /var/lib/mysql (or whatever your db backend's equivalent is).
Then just make sure you switch over your trusted_domains and trusted_proxies, either using docker-compose env vars or by editing /var/www/html/config/config.php directly.
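A hedged sketch of that (the host paths are made up; substitute whatever you map in your docker-compose.yml, and note that preserving ownership with rsync requires running it as root on both ends):

    # Copy the mapped dirs while preserving ownership and permissions
    rsync -aH /srv/nextcloud_data/ newhost:/srv/nextcloud_data/
    rsync -aH /srv/nextcloud_db/   newhost:/srv/nextcloud_db/

    # If ownership got lost anyway (e.g. after scp), fix it on the target
    chown -R www-data: /srv/nextcloud_data            # dir mapped to /var/www/html
    chown -R systemd-coredump: /srv/nextcloud_db      # dir mapped to /var/lib/mysql (mariadb)

    # Then check trusted_domains / trusted_proxies in
    # /srv/nextcloud_data/config/config.php, or set them via compose env vars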
Based on Raphael PICCOLO's comments, I created a tarball of everything in the volumes I was using for my original installation, created a new installation on my target machine, then extracted the tarball on the new machine. There is, however, one other step that must be taken if you do it this way: you must change the ownership of all the files in the tarball so that they are owned by the user ID used by the new Nextcloud installation. Otherwise, the new Nextcloud application will be unable to access any of the resources, and even attempts to log in will get 500 errors in the browser.
There is also a unique ID utilized by the MySQL container, so all the database-related data files must also undergo an ownership change.
Getting the correct user IDs is simple enough: when you first install the new Nextcloud and MySQL database, use the same volumes you had set up in the original docker-compose.yml file. Then, before untarring the data, look at the user IDs of the files in the database folder and the Nextcloud folders. Then, when you put the contents of your tarball on the new installation, use chown -R to make the ownership changes.
Note that I was transferring my installation from a CentOS 7 machine running Docker as the traditional root user to a CentOS 8 machine running Docker in "non-root user" mode. I do not know how permissions would be affected on other machines or in other modes.
Still, once the permissions were properly set up, everything works.
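Put together as a hedged sketch (volume paths and the numeric IDs are placeholders; use whatever ls -ln reports on the fresh install):

    # Old machine: stop the stack and tar up the volume contents
    docker-compose down
    tar czf nextcloud-data.tgz -C /path/to/volumes nextcloud_data nextcloud_db

    # New machine: start a fresh install with the same docker-compose.yml once,
    # note the numeric owner IDs it creates, then stop it and restore the data
    docker-compose up -d && docker-compose down
    ls -ln /path/to/volumes/nextcloud_data /path/to/volumes/nextcloud_db
    tar xzf nextcloud-data.tgz -C /path/to/volumes
    chown -R 33:33 /path/to/volumes/nextcloud_data     # example: www-data in the Nextcloud image
    chown -R 999:999 /path/to/volumes/nextcloud_db     # example: mysql user in the MySQL image
    docker-compose up -d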

cloudant docker developer edition issue with replicate

We have a local dev edition running in the stock docker image; all is well and good.
We cannot get replication to work (even internally, from one db to another inside the docker image, or further out to a cloudant.com db).
I am aware the image license is for a single non-cluster node, but is there a way to push docs etc. from a local dev db to a cloudant.com db as a one-time push? Or to test replication locally (i.e. between 2 dbs inside the docker image)?
Essentially, does "non-clusterable" mean no one-way, one-time push replication? Even internally, from one db to another db in the same docker image?
Here is info on the image: https://hub.docker.com/r/ibmcom/cloudant-developer/
I'm not 100% clear on the exact issue you are seeing, but replication should work when running Cloudant inside Docker. You just need to understand how to route to your Cloudant instance.
I noticed that when I create a new local replication in the Cloudant Dashboard, it uses the port from the Cloudant Dashboard URL, which is the port mapped in Docker. For example, I map port 80 to 30080. When I try to replicate from database test1 to a new database called test2, it creates a replication from localhost:30080/test1 to localhost:30080/test2. This will not work because the Cloudant instance thinks everything is running on port 80, not port 30080.
So, my workaround was to tell the Cloudant dashboard to do a remote replication, but specify localhost/test1 (equivalent to localhost:80/test1) to localhost/test2.
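From the command line, the same workaround might look roughly like this (credentials are placeholders; the dev edition speaks the standard CouchDB/Cloudant replication API):

    # Container started with the host's port 30080 mapped to Cloudant's port 80, e.g.:
    # docker run -d -p 30080:80 ibmcom/cloudant-developer
    # Talk to the mapped port from the host, but make source/target use the
    # container-internal port 80 (plain "localhost") so the instance can reach itself.
    curl -X POST http://admin:pass@localhost:30080/_replicate \
         -H "Content-Type: application/json" \
         -d '{"source": "http://admin:pass@localhost/test1",
              "target": "http://admin:pass@localhost/test2",
              "create_target": true}'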

How do I use Docker on cloud or datacenter

I haven't had the courage to start using Docker yet; I feel like I've come from the last century. I want to clear up my doubts about Docker before getting started. My question is mainly about deploying/running Docker images on a cloud or hosting environment.
Can I build a docker image with any type of server (e.g. wildfly, payara) and/or database server (e.g. mysql, oracle), and will it work on a docker-enabled cloud/datacenter?
If yes, what about persistent data like database files and static storage (e.g. images, uploaded documents, logs)? Is that stored in docker images or somewhere else? What will happen to those files when I update my application and redeploy a new image?
I have read posts about what Docker is, but I couldn't find a specific answer. Forgive me for not doing enough googling.
I have run docker on AWS and other cloud providers. It is really not that hard if you have some experience with system administration and/or devops. Regarding cloud hosters and getting started, most providers have some sort of tutorial on how to get started using docker with their infrastructure:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-dockerextension/
Can I build a docker image with any type of server (e.g. wildfly, payara) and/or database server (e.g. mysql, oracle), and will it work on a docker-enabled cloud/datacenter?
To get a server up and running, you just need the docker engine installed on the host; there are packages for many distros:
https://docs.docker.com/engine/installation/
After the docker engine is installed, you can create Dockerfiles for basically any server or service. Hopefully you will not need to in most cases, since there are countless Dockerfiles and pre-configured, vendor-maintained images already available on Docker Hub (I use wildfly, elk-stack, and mysql, for example). Be careful to select images that are maintained, otherwise you end up with security issues in your images that might never get fixed! Or you have to fix them yourself!
Example images:
https://hub.docker.com/r/jboss/wildfly/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/oraclelinux/
https://hub.docker.com/u/payara/
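For instance, getting one of those stock images running is just a pull and a run (the container name and port mapping here are arbitrary):

    docker pull jboss/wildfly
    docker run -d --name myapp -p 8080:8080 jboss/wildfly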
If yes, what about persistent data like database files and static storage (e.g. images, uploaded documents, logs)? Is that stored in docker images or somewhere else? What will happen to those files when I update my application and redeploy a new image?
In general, you will want to store persistent data external to the docker image and mount it into the container as a volume:
https://docs.docker.com/engine/tutorials/dockervolumes/
Some cloud based storage providers might be easier to mount or connect to in other ways, but this volume approach is standard, IMO.
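A minimal sketch of that pattern with the official mysql image (the volume and container names, and the password, are placeholders):

    # Named volume for the database files: it survives container removal and image updates
    docker volume create mydb-data
    docker run -d --name mydb \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -v mydb-data:/var/lib/mysql \
        mysql:8.0

    # Updating later: remove the container, keep the volume, recreate from the new image
    docker rm -f mydb
    docker run -d --name mydb \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -v mydb-data:/var/lib/mysql \
        mysql:8.0   # or a newer tag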
For logfiles, I actually push them to an ELK server, so having a volume for the logs is not necessarily required. However, since the ELK server is also a docker image, it does have a volume where the data is persisted.
So you have:
documentation from your cloud hoster (or docker themselves)
a host in your cloud running docker engine
0..n images that you can either grab from dockerhub or build yourself.
storage for persistent data on this host, or mounted from elsewhere, that you mount into your docker containers on startup. This is where, e.g., mysql data folders live, or where you can persist logs, etc.
Of course, it can get much more complex from there, e.g. how to transparently scale and update your environment etc., but that is something for e.g. kubernetes or docker swarm or some other solution (I've scripted a bit on my own but do not need the robustness or elastic scalability of large systems).
Regarding cluster management, it should be noted that Swarm is now included in the Docker core. This has created some controversy in the community and even talk of a fork of the core:
https://technologyconversations.com/2015/11/04/docker-clustering-tools-compared-kubernetes-vs-docker-swarm/
https://jaxenter.com/docker-1-12-is-probably-the-most-important-release-since-1-0-129080.html
http://searchitoperations.techtarget.com/news/450303918/Docker-fork-talk-prompts-container-standardization-brawl
http://www.infoworld.com/article/3118345/cloud-computing/why-kubernetes-is-winning-the-container-war.html
I have experience running docker on Alibaba Cloud and AWS as well. I did not see any difference in working with docker on the two cloud providers. Docker images can be built the same way on any Linux platform, regardless of the cloud provider. However, data persistence needs to be taken care of using docker volumes. For databases, it is recommended to use a managed service such as RDS on Alibaba Cloud instead of running them in docker.
Can I build a docker image with any type of server (e.g. wildfly, payara) and/or database server (e.g. mysql, oracle), and will it work on a docker-enabled cloud/datacenter?
You can build your own Docker images or use solutions that are already pre-packaged and proven by cloud providers. For example, Jelastic PaaS offers an auto-clustering Docker-based implementation of GlassFish that can be run and managed there.
If yes, what about persistent data like database files and static storage (e.g. images, uploaded documents, logs)? Is that stored in docker images or somewhere else? What will happen to those files when I update my application and redeploy a new image?
With the above-mentioned cluster, all data is kept inside the containers and persists unchanged across restarts. As an option, you can also connect a separate data storage container if you wish to share data across other containers.

Using host filesystem as a read-only base in docker

In Docker, is it possible to mount part of the host's filesystem as read-only in the container, with any writes to it landing on the COW/UFS layer? Below is the use case I am looking at.
1) We have a proprietary product that takes forever to install, with lots of manual intervention. However, once the install base is complete, the core files almost never change, since the product allows a node-level configuration to be placed in a separate directory that just references the install base. Of course, if we need to update the core files, that will happen on the host. The core installation takes up about 8 GB of disk space on the host machine.
The host core installation may be virtualized (VMware or VirtualBox).
2) The core installation also writes its metadata to a database, and each created node will write additional metadata to it. If the DB installation is on the host, can Docker run the DB process in a container and just reference the DB binaries and data partition as read-only, but write its changes to the data partition on the container's layer?
If it helps here is a sample relationship I am looking at:
-> The host is a VirtualBox VM running CentOS, and has the installation of the proprietary product and its database.
-> Container A1 will spawn a database process based on the existing database state (empty except for the metadata created during installation).
-> Container A2 will spawn a product process, create the product node using the database offered by A1, and run the build, test, and deploy routines.
I need to spawn multiple node+database pairs on demand for continuous integration. The setup above should allow me to bring up container pairs for each isolated node needed by our development team. Theoretically I could mount the product base directory as read/write, but I think there will be some operations that write data to it (e.g. logs) which I would rather have happen in the product process's layer instead.
Thanks.
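For reference, plain read-only bind mounts cover part of this: writes to a path mounted with :ro are rejected rather than copied up to the container layer, so anything the product writes (e.g. logs, node config) has to be redirected to a writable volume or to a path inside the container. A hedged sketch with made-up paths and entrypoint scripts:

    # Per CI pair: copy the DB data partition, then start the pair
    cp -a /opt/proddb/data /srv/ci/pair1-dbdata

    # A1: DB binaries from the host mounted read-only, per-pair data mounted read-write
    docker run -d --name pair1-db \
        -v /opt/proddb/bin:/opt/proddb/bin:ro \
        -v /srv/ci/pair1-dbdata:/opt/proddb/data \
        centos:7 /opt/proddb/bin/start-db.sh

    # A2: the 8 GB install base mounted read-only, node config and logs in a writable per-pair volume
    docker run -d --name pair1-node \
        -v /opt/product:/opt/product:ro \
        -v /srv/ci/pair1-node:/opt/product/nodes/pair1 \
        centos:7 /opt/product/bin/create-node.sh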
