How to migrate Nextcloud Docker to a new machine

I have a Nextcloud installation on a server that was installed using docker-compose. This installation utilizes a Nextcloud docker image and a separate MySQL (8.0) docker image for database access. The data and configuration files are placed in external volumes specified in the docker-compose.yml file.
I have recently put together a new machine that has more memory, a faster CPU, and (most importantly) much more disk space. I would like to migrate my current installation to the new machine.
The actual installation is simple enough: I can simply copy my docker-compose.yml file to the new machine and run it. The problem is with the data and the (somewhat unique) configuration that I have. I would like to get those onto the new machine.
Migrating a dockerized Nextcloud installation raises different issues from migrating a bare-metal or VM installation. For one thing, there is no clear way to place the installation into maintenance mode; you are working with two containers (effectively, this is like coordinating two different machines); and many of the steps described for migrating a bare-metal installation will not work reliably for a containerized one (yes, one can go into the container to run some of the required commands, but my attempts to do this resulted in screwed-up migrations).
Doing Google searches, I am seeing plenty of articles and instructions on how to migrate bare-metal Nextcloud installations from one machine to another, and how to migrate bare-metal (and virtual machine) installations to Docker. The procedures are pretty complex and involve placing the installation into maintenance mode and performing various backups and restores. Unfortunately, while I have seen a few people asking about how to migrate dockerized Nextcloud installations, there are no clear instructions on how to do this (at least, none that actually work!). Even the Nextcloud site does not discuss this!
Has anyone successfully migrated a dockerized Nextcloud installation from one machine to another? If so, how exactly was this done?

Was just able to do this myself, although I'm migrating my nextcloud install off my primary home server to a slower NAS-ish box I salvaged together after a move.
The main issue I ran into was file/dir ownership changing when moving from one machine to another. The secondary issue was ensuring trusted domains were set correctly in config.php.
I'm sure it'd be better to use rsync to copy/move files from machine to machine and ensure you keep ownership intact, but I used scp and changed ownership manually. Your nextcloud_data container needs the www-data user to own the directory you have mapped to /var/www/html, and your nextcloud_db container (I use MariaDB here, YMMV) needs the systemd-coredump user to own the directory you have mapped to /var/lib/mysql (or whatever your DB backend's equivalent is).
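A rough sketch of that ownership fix on the new machine, assuming host directories ./nextcloud and ./db (substitute your own mappings, and chown by numeric UID instead if those user names don't exist on the new host):
# run on the new machine after copying the data over
sudo chown -R www-data:www-data ./nextcloud            # host dir mapped to /var/www/html
sudo chown -R systemd-coredump:systemd-coredump ./db   # host dir mapped to /var/lib/mysql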
Then just make sure you switch over your trusted_domains and trusted_proxies, either using docker-compose environment variables or by editing /var/www/html/config/config.php directly.
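For example, the new host name can be added with occ from outside the container (the container name nextcloud_app is hypothetical, and the index 1 just needs to be one that is not already in use):
docker exec -u www-data nextcloud_app php occ config:system:set trusted_domains 1 --value=cloud.newhost.example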

Based on Raphael PICCOLO's comments, I created a tarball of everything in the volumes I was using for my original installation, created a new installation on my target machine, then extracted the tarball on the new machine. There is, however, one other step that must be taken if you do it this way: you must change the ownership of all the files in the tarball so that they are owned by the user ID used by the new Nextcloud installation. Otherwise, the new Nextcloud applications will be unable to access any of the resources, and even attempts to log in will produce 500 errors in the browser.
There is also a unique ID utilized by the MySQL container, so all the database-related data files must also undergo an ownership change.
Getting the correct user IDs is simple enough: when you first install the new Nextcloud and MySQL database, use the same volumes you had set up in the original docker-compose.yml file. Then, before untarring the data, look at the user IDs of the files in the database folder and the Nextcloud folders. Then, when you put the contents of your tarball on the new installation, use chown -R to make the ownership changes.
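A sketch of that sequence, with hypothetical volume paths and example UIDs (check the ls -ln output on your own machines rather than trusting these numbers):
# old machine: archive the volume contents
sudo tar czf nextcloud-volumes.tar.gz -C /var/lib/docker/volumes nextcloud_data nextcloud_db
# new machine: bring up the fresh install once, note the UIDs it created, then restore
sudo ls -ln /var/lib/docker/volumes/nextcloud_data/_data
sudo tar xzf nextcloud-volumes.tar.gz -C /var/lib/docker/volumes
sudo chown -R 33:33 /var/lib/docker/volumes/nextcloud_data/_data    # example UID for www-data
sudo chown -R 999:999 /var/lib/docker/volumes/nextcloud_db/_data    # example UID for the MySQL user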
Note that I was transferring my installation from a CentOS 7 machine running Docker with the traditional root user to a CentOS 8 machine running Docker in "non-root user" mode. I do not know how permissions would be affected on other machines or modes.
Still, once the permissions were properly set up, everything works.

Related

Is it possible to use a single docker volume to map to two different directories?

I am using VS Code, Remote Development Containers, and Docker to create development environments within containers. Everything works fine, but I did notice that when working with different projects, doing things such as yarn install means having to download the npm modules each time. Of course, once a container does this, they are stored in the cache, specifically /usr/local/share/.cache/yarn/v6.
When I attempted to mount that folder to the host machine, yarn install would start to fail too often, stating that it was having trouble downloading packages due to a bad network connection (the connection was just fine). So, I created a volume instead and everything worked just fine.
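Roughly, the volume setup looked like this (the volume name and image tag are just placeholders):
docker volume create yarn-cache
docker run --rm -v yarn-cache:/usr/local/share/.cache/yarn/v6 -v "$PWD":/workspace -w /workspace node:lts yarn install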
The problem I am running into is that I also want to share other folders in the volume so that multiple containers use the same cache for things such as NuGet packages. I was hoping to somehow have my volume look like so:
mysharedvolume/yarn => /usr/local/share/.cache/yarn/v6
mysharedvolume/nuget => /wherever/nuget/packages/are/cached
mysharedvolume/somefile.config => /wherever/somefile.config
This does not seem to be the way volumes work in Docker; all of the files end up mixed together at the root of the volume (there are no subdirectories). Of course, I can't simply map the entire /usr folder or anything like that; that would be crazy.
Before I go off and create different volumes for each cache and config files, is there a way to do this with a single shared volume?

Avoiding a constant "stop, build, up" loop when developing with Docker locally

I've recently delved into the wonder that is Docker and have set up my containers using docker-compose (an Express app and a MySQL DB).
It's been great for sharing the project with the team and pushing to a VPS, but one thing that's fast becoming tedious is the need to stop the running app, docker-compose build then docker-compose up any time there are changes to the code (which I believe is also creating numerous unnecessary images?).
I've scoured about but haven't found a clear-cut way to get around this, barring ditching Docker-compose locally and using docker run to run the Express app pointing to a local DB (which would do away with a lot of the easy set up perks that come with Docker, such as building the DB from scratch).
Is there a Nodemon-style way of working with Docker (images/containers get updates automatically when code changes)? Is there something obvious I'm missing? Or is my approach the necessary "evil" that comes with working on a Dockerised app?
You can mount a volume to your source directory for local development. Any changes on the host will be reflected in the container. https://docs.docker.com/storage/volumes/
You might consider separate services for deployment/development. I usually have a separate service which mounts the source directory and installs test dependencies inside the container.
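A minimal sketch of that kind of development service, assuming a Node/Express app listening on port 3000 (the image tag, paths, and port are placeholders, and nodemon is assumed to be in the project's devDependencies):
# bind-mount the source so edits on the host show up in the container,
# and run the app under nodemon so it restarts on every change
docker run --rm -p 3000:3000 -v "$PWD":/usr/src/app -w /usr/src/app node:lts npx nodemon server.js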

keep CDH container running

I am learning CDH and Docker and didn't have prior experience setting up either tool. After reading the documentation I managed to run the CDH Docker image in a Mac environment and also completed the example given in the quick start guide. But the next day, when I started my MacBook again to learn something new, I didn't find my previous work, which I found very strange, and I couldn't even see the container running, even though everything had seemed fine to me.
What I really want is to not lose my work even after stopping the Docker container. Could you please guide me on how to configure Docker so that I will not lose my work even after restarting Docker?
Every instance of a docker run will allocate a new filesystem, essentially starting from scratch.
If you actually want to "save" your work, then you need to volume-mount (using the -v docker flag) your local filesystem into the container for at least the following directories (a sketch follows below).
HDFS Data Directory
NameNode Data Directory
/home/cloudera
I think the hadoop data folders are somewhere under /var/lib/hadoop-*, by default
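Something along these lines, assuming the cloudera/quickstart image and hypothetical host paths (confirm the actual data directories inside your container first; note that an empty bind mount hides whatever the image ships at that path, so seed the host directories if needed):
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -v /opt/cdh/hdfs:/var/lib/hadoop-hdfs \
  -v /opt/cdh/home:/home/cloudera \
  cloudera/quickstart /usr/bin/docker-quickstart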
A better alternative for saving your work would be the CDH VM, which actually has a persistent disk associated with it.

Use docker to migrate a system

I have an AWS EC2 account, where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container that takes a snapshot of a filesystem. Now, for some reason I am forced to switch accounts, and I have opened a new AWS EC2 account. Can I use Docker to set up a container in my old virtual system, then get an image and deploy it in my new system? This way I can remove the headache of having to install many components and configure nginx and all the applications in my new system. Can I do that? If so, how?
According to Docker best practices and its CaaS model, images are not supposed to "virtualize" a whole set of services at once; quite the contrary. Docker does not aim at taking a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (still unclear) question is "Can I virtualize my whole system into one image?", the answer is "No".
What you can do is use an image for each of your services (you'll find everything you need on Docker Hub) to keep your new system clean.
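For instance, the official nginx image can be run with just your site content and configuration mounted in (the host paths here are examples):
docker run -d --name web -p 80:80 \
  -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /srv/www:/usr/share/nginx/html:ro \
  nginx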
Another solution would be to list all the installed Linux packages on your old system, install them on the new one, and copy over all the configuration files.
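A rough sketch of that approach for a Debian/Ubuntu host (use rpm -qa and yum on RHEL-style systems; the /etc paths are only examples):
# old instance
dpkg --get-selections > packages.list
sudo tar czf etc-backup.tar.gz /etc/nginx /etc/ssl
# new instance
sudo dpkg --set-selections < packages.list && sudo apt-get -y dselect-upgrade
sudo tar xzf etc-backup.tar.gz -C /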

What would be a good docker webdev workflow?

I have a hunch that docker could greatly improve my webdev workflow - but I haven't quite managed to wrap my head around how to approach a project adding docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
GIT
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container (see the sketch after this list)
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
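A rough command-line sketch of that development loop, with hypothetical image and path names (it assumes the image's CMD starts Apache/MySQL, per the requirement above):
# build the LAMP image from the Dockerfile in the current directory
docker build -t my-lamp-dev .
# run it with the CMS working copy bind-mounted into the web root
docker run -d --name cms-dev -p 8080:80 -v ~/dev/cmsdir:/var/www my-lamp-dev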
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run sql-dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the containers /var/www/-directory? Should I utilize data-volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map the MySQL data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
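A sketch of that split, with placeholder names (my-lamp-dev is a hypothetical CMS image) and a host directory for the MySQL data files:
# database container with its data directory kept on the host
docker run -d --name cms-db -v ~/dev/mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# CMS container linked to it; this one can be redeployed as often as you like
docker run -d --name cms-app --link cms-db:mysql -p 8080:80 -v ~/dev/cmsdir:/var/www my-lamp-dev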
Yes, I think you should have a separate container for db.
I am just using a basic script:
#!/bin/bash
# start both containers detached and capture their container IDs
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySql=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount the directory read-only, so no changes will be made to it if you want (your app should store its data somewhere else anyway).
docker run -v=/home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment, I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running.
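The data-container pattern looks roughly like this (container names are placeholders; on current Docker a named volume does the same job):
# create a stopped container that only owns the /var/lib/mysql volume
docker create -v /var/lib/mysql --name cms-db-data mysql:5.7 /bin/true
# run the actual database using the volumes from that data container
docker run -d --name cms-db --volumes-from cms-db-data -e MYSQL_ROOT_PASSWORD=secret mysql:5.7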
Yes, I would recommend having separate containers for your web server and your database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this will link a specific folder in the container to a folder on the host running the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turn key solution achieving all of the needs you have listed.
I've put together an easy to use docker compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with xDebug
The README.md should be clear enough to get you started.
