Docker Merge Volumes between host and image

I have an application that I want to install into a Docker image. This particular application has a folder for custom user plugins: a user can put their plugins for our application there and we will load and execute them. We also ship our application with some plugins already. What I want is that when I run docker mounting a volume with the -v option, it still keeps the contents already in the image, as if the contents of the image were merged with the ones in the host folder. Is that possible? Is there another solution that does not involve refactoring the app to support loading from multiple folders?

You can mount them into a subdirectory such as /plugins/customplugin1. In that case ls /plugins should show:
customplugin1
standardplugin
standardplugin2
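A minimal sketch of that approach, assuming a hypothetical image name myapp, a built-in plugin directory /plugins inside the image, and a host folder ./myplugins/customplugin1 holding one user plugin:

# Mount the custom plugin as a subdirectory instead of over /plugins itself,
# so the standard plugins shipped in the image stay visible next to it.
docker run --rm -v "$(pwd)/myplugins/customplugin1:/plugins/customplugin1" myapp ls /plugins

Mounting the host folder directly over /plugins would shadow the plugins baked into the image, which is exactly what the question is trying to avoid.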

Related

shared folder for a docker container

It is certainly a basic question, but I don't know how to deal with this issue.
I am creating a simple Docker image that executes Python scripts and will be deployed on different users' Windows laptops. It needs a specific shared folder in order to write outputs at the end of the process.
Users are not able to manage anything technical like Docker or a terminal.
So they run it with a .bat file in which I specify the docker command with the -v option.
But obviously the users' paths are different on each laptop. How can I create a standard image that avoids this machine-specific mount path?
thanks a lot
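For illustration only, a sketch of the kind of .bat wrapper described above; the image name report-scripts and the output folder are made up, and %USERPROFILE% is shown as one way to avoid hard-coding a per-user path:

REM run.bat - double-clickable wrapper around the docker command
REM %USERPROFILE% resolves to the current user's home directory on any laptop
docker run --rm -v "%USERPROFILE%\Documents\outputs:/outputs" report-scripts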

Docker Backup Concept. A Beginner Question

Suppose there is a machine that runs various Docker projects. Each Docker container is regularly replaced/stopped/started as soon as newer versions arrive from the build system.
What does a backup concept for such a machine look like?
Looking at similar questions [1], the correct path to a working backup/restore procedure is not immediately clear to me. My current understanding is something like:
Backup
Use scripts to create images and containers. Store/back up the scripts in your favorite version control system. Use version tags to pull Docker images; don't use the latest tag.
Exclude /var/lib/docker/overlay2 from backup (to prevent backing up dangling and temporary stuff)
Use named volumes only. Volumes can be saved and restored from backup. For database stuff, extra work has to be done. Possibly consider tar-ing volumes to an extra folder [2] (see the sketch after the references below).
Run docker system prune daily to remove dangling stuff.
Restore
Make sure all named volumes are back in place.
Fetch scripts from version control to recreate images as needed. Use docker run to recreate containers.
Application-specific tasks: restore databases from dumps, etc.
[1] How can I backup a Docker-container with its data-volumes?
[2] https://stackoverflow.com/a/48112996/1485527
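As a rough sketch of the "tar volumes to an extra folder" idea from [2] (the volume name app_data and the ./backup folder are placeholders):

# Back up the named volume app_data into ./backup/app_data.tar.gz
docker run --rm -v app_data:/data -v "$(pwd)/backup:/backup" alpine tar czf /backup/app_data.tar.gz -C /data .

# Restore it later into a (possibly freshly created) volume of the same name
docker run --rm -v app_data:/data -v "$(pwd)/backup:/backup" alpine tar xzf /backup/app_data.tar.gz -C /data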
Don't use the latest tag for your images. Set proper tags (like v0.0.1, v0.0.2, etc.) for your images, and you can keep all of your versions in a Docker registry.
You should prefer to use stateless containers.
What about Docker volumes? You can use them: https://docs.docker.com/storage/volumes/
If you use a bind-mount volume, you can manually save your files in an archive for backup.

Override a volume when Building docker image from another docker image

Sorry if the question is basic, but would it be possible to build a Docker image from another one, with a different volume in the new image? My use case is the following:
Start from the image library/odoo (cf. https://hub.docker.com/_/odoo/)
Upload folders into the volume /mnt/extra-addons
Build a new image, tag it, then put it in our internal image repo
How can we achieve that? I would like to avoid putting extra folders onto the host filesystem.
thanks a lot
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
# First stage: pull in the percona image only so its filesystem can be copied out
FROM percona:5.7.24 as dbdata
MAINTAINER monkey@blackmirror.org

# Second stage: the actual image; COPY brings over the files but not the
# VOLUME declarations from the first stage
FROM centos:7
USER root
COPY --from=dbdata / /
Do whatever you want. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to automatically do this :)
You have a few options, without involving the host OS that runs the container.
Make your own Dockerfile, inherit from the library/odoo Docker image using a FROM instruction, and COPY files into the /mnt/extra-addons directory. This still involves your host OS somewhat, but may be acceptable since you wouldn't necessarily be building the Docker image on the same host you were running it.
Make your own Dockerfile, as in (1), but use an entrypoint script to download the contents of /mnt/extra-addons at runtime. This would increase your container startup time, since the download would need to take place before running your service, but no host directories would need to be involved.
Personally I would opt for (1) if your build pipeline supports it. That would bake the addons right into the image, so the image itself would be a complete, ready-to-go build artifact.
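A minimal sketch of option (1), assuming the addon folders live next to the Dockerfile in ./extra-addons and the internal registry/tag names are made up:

Dockerfile
# Inherit from the official odoo image and bake the addons into the new image
FROM odoo:12
COPY ./extra-addons /mnt/extra-addons

docker build -t registry.internal/odoo-custom:1.0 .
docker push registry.internal/odoo-custom:1.0

Even though the base image declares /mnt/extra-addons as a VOLUME, the copied files are stored in the image and are used to seed the anonymous volume Docker creates at container start.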

Sharing bind volume in Docker swarm

We use the openjdk image to deploy our jars. Since we have multiple jars, we simply attach them using bind mounts and run them. I don't want to build separate images, since our deployment will be in air-gapped environments and I can't rebuild images each time, as only the jars will be changing.
Now we are trying to move towards swarm. Since it is a bind mount, I'm unable to spread the replicas to other nodes.
If I use volumes, how can I put these jars into a volume? One possibility is that I can run a dummy alpine image, mount the volume to the host, and then share it with other containers. But is it possible to share that volume between the nodes, and is it an optimal solution? Also, if I need to update the jars, how can that be done?
I can create an NFS drive, but I'm trying to figure out a way of implementing this without it. Since it is an isolated environment and may contain crucial data, I can't use third-party plugins to finish the job either.
So how can Docker swarm be implemented in this scenario?
Use docker build. Really.
An image is supposed to be a static copy of your application and its runtime, and not the associated data. The statement "only the jars changed" means "we rebuilt the application". While you can use bind mounts to inject an application into a runtime-only container, I don't feel like it's really a best practice, and that's doubly true in a language where there's already a significant compile-time step.
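A minimal sketch of baking a jar into an image, with a hypothetical app.jar and a stock OpenJDK base:

Dockerfile
FROM openjdk:11-jre-slim
# Copy the built jar into the image so the image itself is the deployable artifact
COPY app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]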
If you're in an air-gapped environment, you need to figure out how you're going to provide application updates (regardless of the deployment framework). The best solution, if you can manage it, is to set up a private Docker registry on the isolated network, docker save your images (with the jars embedded), then docker load, docker tag, and docker push them into the registry. Then you can use the registry-tagged image name everywhere and not need to worry about manually pushing the images and/or jar files across.
Otherwise you need to manually distribute the image tar and docker load it, or manually push your updated jars on to each of the target systems. An automation system like Ansible works well for this; I'm partial to Ansible because it doesn't require a central server.
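A rough sketch of the registry-based flow described above (registry host and image names are invented):

# On a machine that can build the image
docker build -t myapp:1.2.3 .
docker save -o myapp-1.2.3.tar myapp:1.2.3

# After moving the tar into the air-gapped network
docker load -i myapp-1.2.3.tar
docker tag myapp:1.2.3 registry.internal:5000/myapp:1.2.3
docker push registry.internal:5000/myapp:1.2.3

# Swarm nodes can now pull the image by its registry-tagged name
docker service create --name myapp registry.internal:5000/myapp:1.2.3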

Build multiple images vs start multiple containers

What are the best practices?
Build a different Docker image for each application instance. For example, each application instance has its own code directory; use ADD to build the different images.
Build one base image and start a new container for each application instance, using the -v option to bind a specific volume to each instance.
Reasons to go for multiple containers:
Using ADD in a Dockerfile means you need to rebuild your image whenever anything in that directory changes.
Not sure if it's best practice, but I'd go for the -v option just for the numbers: you have to create a container for every code directory anyway, so I would avoid having to build the same number of images too.
Moreover, disk space may be an issue, but I'm not sure:
one image and one container for every code directory < one image and multiple associated containers with a different mount point for every code directory
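As a small sketch of the second approach (image name, container names, and code paths are placeholders):

# Build one shared base image
docker build -t app-base .

# Start one container per code directory, each with its own bind mount
docker run -d --name app-site1 -v /srv/code/site1:/app app-base
docker run -d --name app-site2 -v /srv/code/site2:/app app-base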
