Is it possible to use a single docker volume to map to two different directories? - docker

I am using VS Code, Remote Development Containers, and Docker to create development environments within containers. Everything works fine, but I noticed that when working with different projects, things such as yarn install mean downloading the npm modules again each time. Of course, once a container does this the modules are stored in the cache, specifically /usr/local/share/.cache/yarn/v6.
When I attempted to mount that folder to the host machine, yarn install started failing far too often, stating that it was having trouble downloading packages due to a bad network connection (the connection was just fine). So I created a volume instead and everything worked just fine.
The problem I am running into is that I also want to share other folders in the volume so that multiple containers use the same cache for things such as NuGet packages. I was hoping to somehow have my volume look like so:
mysharedvolume/yarn => /usr/local/share/.cache/yarn/v6
mysharedvolume/nuget => /wherever/nuget/packages/are/cached
mysharedvolume/somefile.config => /wherever/somefile.config
This does not seem to be the way volumes work in Docker: all of the files end up mixed together at the root of the volume (there are no subdirectories). Of course, I can't simply map the entire /usr folder or anything like that; that's crazy.
Before I go off and create different volumes for each cache and config files, is there a way to do this with a single shared volume?
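For comparison, the fallback described above (one named volume per cache) would look roughly like this in a docker-compose.yml; the image name is a placeholder and the cache paths are copied from the layout above:

services:
  dev:
    image: my-dev-image            # placeholder for the dev container image
    volumes:
      - yarncache:/usr/local/share/.cache/yarn/v6
      - nugetcache:/wherever/nuget/packages/are/cached

volumes:
  yarncache:
  nugetcache: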

Related

Should I use the application code inside the docker images or in volumes?

I am working on a DevOps project and I want to find the right solution.
I have a conflict between two approaches: should I keep the application code inside the Docker images or in volumes?
Your code should almost never be in volumes, developer-only setups aside (and even then). This is doubly true if you have a setup like a frequent developer-only Node setup that puts the node_modules directory into a Docker-managed anonymous volume: since Docker will refuse to update that directory on its own, the primary effect of this is to cause Docker to ignore any changes to the package.json file.
More generally, in this context, you should think of the image as a way to distribute the application code. Consider clustered environments like Kubernetes: the cluster manager knows how to pull versioned Docker images on its own, but you need to work around a lot of the standard machinery to try to push code into a volume. You should not need to both distribute a Docker image and also separately distribute the code in the image.
I'd suggest using host-directory mounts for injecting configuration files and for storing file-based logs (if the container can't be configured to log to stdout). Use either host-directory or named-volume mounts for stateful containers' data (host directories are easier to back up, named volumes are faster on non-Linux platforms). Do not use volumes at all for your application code or libraries.
(Consider that, if you're just overwriting all of the application code with volume mounts, you may as well just use the base node image and not build a custom image; and if you're doing that, you may as well use your automation system (Salt Stack, Ansible, Chef, etc.) to just install Node and ignore Docker entirely.)
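As a rough docker-compose.yml sketch of that guidance (names and paths are illustrative): the application code lives only in the image, configuration comes from a host directory, and stateful data lives in a named volume.

services:
  app:
    image: registry.example.com/myapp:1.2.3   # code is distributed via the image
    volumes:
      - ./config:/etc/myapp                    # host-directory mount for config files
      - ./logs:/var/log/myapp                  # host directory for file-based logs
      - appdata:/var/lib/myapp                 # named volume (or host dir) for data

volumes:
  appdata: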

How to migrate Nextcloud Docker to a new machine

I have a Nextcloud installation on a server that was installed using docker-compose. This installation utilizes a Nextcloud docker image and a separate MySQL (8.0) docker image for database access. The data and configuration files are placed in external volumes specified in the docker-compose.yml file.
I have recently put together a new machine that has more memory, a faster CPU, and (most importantly) much more disk space. I would like to migrate my current installation to the new machine.
The actual installation is simple enough: I can simply copy my docker-compose.yml file to the new machine and run it. The problem is with the data and the (somewhat unique) configuration that I have. I would like to get those onto the new machine.
Migrating a dockerized Nextcloud installation raises different issues from migrating a bare-metal or VM installation. For one thing, there is no clear way to place the installation into maintenance mode; you are working with two containers (effectively, this is like coordinating two different machines); and many of the steps described for migrating a bare-metal installation will not work reliably for a containerized one (yes, one can go into the container to run some of the commands required, but my attempts to do this resulted in screwed-up migrations).
Doing Google searches, I am seeing plenty of articles and instructions on how to migrate bare-metal Nextcloud installations from one machine to another, and how to migrate bare-metal (and virtual machine) installations to Docker. The procedures are pretty complex and involve placing the installation into maintenance mode and performing various backups and restores. Unfortunately, while I have seen a few people asking about how to migrate dockerized Nextcloud installations, there are no clear instructions on how to do this (at least, none that actually work!). Even the Nextcloud site does not discuss this!
Has anyone successfully migrated a dockerized Nextcloud installation from one machine to another? If so, how exactly was this done?
I was just able to do this myself, although I was migrating my Nextcloud install off my primary home server to a slower NAS-ish box I salvaged together after a move.
The main issue I ran into was file/directory ownership when moving from one machine to another. The secondary one was ensuring the trusted domains were set correctly in config.php.
I'm sure it'd be better to use rsync to copy/move files from machine to machine and keep ownership intact, but I used scp and changed ownership manually. Your nextcloud_data container needs the www-data user to own the directory you have mapped to /var/www/html, and the nextcloud_db container (I use mariadb here, YMMV) needs the systemd-coredump user to own the directory you have mapped to /var/lib/mysql (or whatever your db backend's equivalent is).
Then just make sure you switch over your trusted_domains and trusted_proxies, either using docker-compose environment variables or by editing /var/www/html/config/config.php directly.
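A rough sketch of those two fixes, assuming the host-side directories are mapped as described above (the /srv/nextcloud/... paths and the example domain are placeholders):

# fix ownership of the copied data on the new machine
chown -R www-data:www-data /srv/nextcloud/data      # the dir mapped to /var/www/html
chown -R systemd-coredump  /srv/nextcloud/db        # the dir mapped to /var/lib/mysql

# then update the trusted domains/proxies, e.g. via compose environment variables
#   NEXTCLOUD_TRUSTED_DOMAINS=cloud.example.com
# or by editing 'trusted_domains' / 'trusted_proxies' in /var/www/html/config/config.php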
Based on Raphael PICCOLO's comments, I created a tarball of everything in the volumes I was using for my original installation, created a new installation on my target machine, then extracted the tarball on the new machine. There is, however, one other step that must be taken if you do it this way: you must change the ownership of all the files in the tarball so that they are owned by the user ID used by the new Nextcloud installation. Otherwise, the new Nextcloud application will be unable to access any of the resources, and even attempts to log in will return 500 errors in the browser.
There is also a unique ID utilized by the MySQL container, so all the database-related data files must also undergo an ownership change.
Getting the correct user IDs is simple enough: when you first install the new Nextcloud and MySQL database, use the same volumes you had set up in the original docker-compose.yml file. Then, before untarring the data, look at the user IDs of the files in the database folder and the Nextcloud folders. Then, when you put the contents of your tarball on the new installation, use chown -R to make the ownership changes.
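Assuming Docker-managed named volumes (substitute your own volume paths, volume names, and the user IDs you observed on the fresh install), the flow looks roughly like this:

# old machine: archive the volume contents
tar czf nextcloud-data.tar.gz -C /var/lib/docker/volumes/nextcloud_data/_data .
tar czf nextcloud-db.tar.gz   -C /var/lib/docker/volumes/nextcloud_db/_data .

# new machine: bring up a fresh install with the same docker-compose.yml,
# note the owner IDs of the new data and database folders, then stop the stack
docker-compose down

# extract the tarballs into the new volumes and fix ownership
tar xzf nextcloud-data.tar.gz -C /var/lib/docker/volumes/nextcloud_data/_data
tar xzf nextcloud-db.tar.gz   -C /var/lib/docker/volumes/nextcloud_db/_data
chown -R <nextcloud-uid> /var/lib/docker/volumes/nextcloud_data/_data
chown -R <mysql-uid>     /var/lib/docker/volumes/nextcloud_db/_data

docker-compose up -d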
Note that I was transferring my installation from a CentOS 7 machine running Docker as the traditional root user to a CentOS 8 machine running Docker in "non-root user" mode. I do not know how permissions would be affected on other machines or in other modes.
Still, once the permissions were properly set up, everything worked.

Avoiding a constant "stop, build, up" loop when developing with Docker locally

I've recently delved into the wonder that is Docker and have set up my containers using docker-compose (an Express app and a MySQL DB).
It's been great for sharing the project with the team and pushing to a VPS, but one thing that's fast becoming tedious is the need to stop the running app, run docker-compose build, then docker-compose up any time the code changes (which I believe is also creating numerous unnecessary images?).
I've scoured around but haven't found a clear-cut way to get past this, barring ditching docker-compose locally and using docker run to run the Express app pointing at a local DB (which would do away with a lot of the easy-setup perks that come with Docker, such as building the DB from scratch).
Is there a Nodemon-style way of working with Docker (images/containers get updates automatically when code changes)? Is there something obvious I'm missing? Or is my approach the necessary "evil" that comes with working on a Dockerised app?
You can mount a volume to your source directory for local development. Any changes on the host will be reflected in the container. https://docs.docker.com/storage/volumes/
You might consider separate services for deployment/development. I usually have a separate service which mounts the source directory and installs test dependencies inside the container.
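A hedged sketch of such a dev-only service in docker-compose.yml (the paths, port, nodemon entrypoint, and credentials are placeholders for this Express plus MySQL setup):

services:
  web:
    build: .
    command: npx nodemon src/index.js   # restart on code changes instead of rebuilding
    ports:
      - "3000:3000"
    volumes:
      - ./src:/usr/src/app/src          # edits on the host appear inside the container
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example      # placeholder credentials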

Docker: where should I put container configuration and war file

Recently I have been trying to figure out what a Docker workflow looks like.
What I thought was that devs should build and push images from their local machines, and in other environments servers should just pull that image and run it directly.
But I can see that a lot of public images allow people to keep configuration outside the container.
For example, in official elasticsearch image, there is a command as follows:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
So what is the point of putting configuration outside the container instead of running local containers quickly?
My argument is
if I put configuration inside a custom image, then in testing or production the server just needs to pull that same image, which is already built.
if I put configuration outside the image, then in other environments there has to be another process to fetch that configuration from somewhere. Sure, we could use git to version-control it, but isn't that a tedious and pointless extra thing to manage? And installing third-party libraries would also be required.
Further question:
Should I put the application file (for example, war file) inside web server container or outside it?
When you are doing development, configuration files may change often; so rather than keep rebuilding the containers, you may use a volume instead.
If you are in production and need dozens or hundreds of the same container, all with slightly different configuration files, it is easy to have one single image and have diverse configuration files living outside (e.g. use consul, etcd, zookeeper, ... or VOLUME).
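For example, with the elasticsearch image above, the same image can be pointed at different config directories per environment (the directory names are illustrative):

# development: config kept next to the project
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch

# production: config delivered to the host by your config management tooling
docker run -d -v /etc/elasticsearch:/usr/share/elasticsearch/config elasticsearch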

Is it a bad idea to use docker to run a front end build process during development?

I have an angular project I'm working on containerizing. It currently has enough build tooling that a front-end developer could (and this is how it currently works) just run gulp in the project root, edit source files in src/, and the build tooling handles running traceur, templates and libsass and so forth, spitting content into a build directory. That build directory is served with a minimal server in development, and handled by nginx in production.
My goal is to build a docker-based environment that replicates this workflow. My users are developers who are on different kinds of boxes, so having build dependencies frozen in a dockerfile seems to make sense.
I've gotten close enough to this- docker mounts the host volume, the developer edits the file on the local disk, and the gulp build watcher is running in the docker container instance and rebuilds the site (and triggers livereload, etc etc).
The issue I have is with wrapping my head around how filesystem layers work. Is this process of rebuilding files in the container's build/frontend directory generating a ton of extraneous saved layers? That's not something I'd really like, because I'm not wild about monotonically growing this instance as developers tweak and rebuild and tweak and rebuild. It'd only grow locally, but having to go through the "okay, time to clean up and start over" process seems tedious.
Is this process of rebuilding files in the container's build/frontend directory generating a ton of extraneous saved layers?
Nope, the only way to stack up extra layers is to commit a container with changes to a new image then use that new image to create the next container. Rinse, repeat.
Filesystem layers are saved when a container is committed to a new image (docker commit ...). When a container is running there will be a single read/write layer on top that contains all of the changes made to the container since it was created.
having to go through the "okay, time to clean up and start over" process seems tedious.
If you run the build container with docker run --rm ... then you'll get the cleanup for free. The build container will be created fresh from the image each time.
Also, data volumes bypass the union filesystem so there's a good chance you won't write to the container's filesystem at all.
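A minimal sketch of that combination (the image name is a placeholder for your image with the frozen build dependencies):

# --rm deletes the container, and its read/write layer, as soon as gulp exits;
# the bind mount means build output is written straight to the host directory
docker run --rm -v "$PWD":/src -w /src my-frontend-tooling gulp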
