application configuration as a docker image - docker

I'm trying to separate my application configuration file out from the application itself so that I can have a single docker image for application code and multiple images for storing different configurations.
I understand that you can use --volumes-from to attach a data-only container to the application container. But what I'd like to achieve is an image containing all the configuration files, so that the application container can attach to a volume container created from that image. I'm not sure if this is possible.
As far as I can see, there seems to be no way of running a data-only container from an image. Data-only containers are normally generated with docker create -v. Hence this question.

You can run a data-only container from a custom image holding your config files. For example:
docker create -v /volumeexposedinyourimage --name datacontainer yourimagename /bin/true
Here yourimagename is the name of your custom image (which has your config files), and /volumeexposedinyourimage is the volume exposing them.
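As a sketch, such a config image could be built from a Dockerfile like this (the base image, the config/ source directory, and the volume path are illustrative assumptions, not from the question):

```dockerfile
# Minimal config-only image: copy the config files in and declare the volume.
FROM busybox
COPY config/ /volumeexposedinyourimage/
VOLUME /volumeexposedinyourimage
# The container never needs to stay running; it only carries the volume.
CMD ["/bin/true"]
```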
docker-compose example:
version: '2'
services:
  datacontainer:
    image: yourimagename
    volumes:
      - /volumeexposedinyourimage
    command: ["/bin/true"]
  appcontainer:
    image: appimage
    volumes_from:
      - datacontainer
Alternatively, you could mount a host directory containing the config files as a volume, e.g.
docker run -d -P --name appcontainer -v /host/config:/path appimage
Place your config files in /host/config on the host; inside appcontainer you can refer to them under /path.
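The same host-directory mount can be expressed in docker-compose (service and image names follow the example above):

```yaml
version: '2'
services:
  appcontainer:
    image: appimage
    volumes:
      # bind-mount the host config directory into the container
      - /host/config:/path
```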

Related

How to update configuration files in Docker-compose volumes?

I'm running a docker-compose setup, and when I want to update files in my image I build a new Docker image. The problem is that the file I'm editing is located in a persistent volume: the new image contains the change, but since I'm not deleting the docker-compose volumes, the old copy of the file in the volume is what the new image ends up using.
Running docker-compose down -v is not an option because I want to keep the other files in the volume (logs etc.).
I want to know if it is possible to do this without too many hacks, since I'm looking to automate it.
Example docker-compose.yml:
version: '3.3'
services:
  myService:
    image: myImage
    container_name: myContainer
    volumes:
      - data_volume:/var/data
volumes:
  data_volume:
NOTE: my current process for making a change:
docker-compose down
docker build -t myImage:t1 .
docker-compose up -d
You could start a container, mount the volume and execute a command to delete single files. Something like
docker run -d --rm -v data_volume:/var/data myImage rm /var/data/[file to delete]
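If you also want the volume's copy of the file to match the rebuilt image, one possible workflow (the config.yml file name and paths are placeholders, not from the question) is to round-trip the fresh copy through docker cp, leaving everything else in the volume untouched:

```shell
docker-compose down
docker build -t myImage:t1 .
# temporary container from the new image, with the volume mounted at a different path
docker create --name tmp -v data_volume:/mnt myImage:t1
docker cp tmp:/var/data/config.yml ./config.yml   # fresh copy baked into the image
docker cp ./config.yml tmp:/mnt/config.yml        # overwrite the stale copy in the volume
docker rm tmp
docker-compose up -d
```

Other files in the volume (logs etc.) are left as they were.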

How is the Docker Mount Point decided?

I pull an image from Docker Hub (say) Ghost CMS and after reading the documentation, I see that the default mount point is /var/lib/ghost/content
Now, when I make my own application with Ghost as the base image, I map a volume (say) CMS-Content and mount it on /var/lib/ghost/content, written like this:
volumes:
  - CMS-Content:/var/lib/ghost/content
The path /var/lib/ghost/content is a system-level path, whereas CMS-Content is a named volume I created to hold my files (persistent data).
Finally, I decide to publish my application as an image in Docker Hub, so what will be the mount point now?
If you want to make the container's data persistent:
Using the command line:
docker run -it --name <WHATEVER> -p <LOCAL_PORT>:<CONTAINER_PORT> -v <LOCAL_PATH>:<CONTAINER_PATH> -d <IMAGE>:<TAG>
Using docker-compose.yaml:
version: '2'
services:
  cms:
    image: <IMAGE>:<TAG>
    ports:
      - <LOCAL_PORT>:<CONTAINER_PORT>
    volumes:
      - <LOCAL_PATH>:<CONTAINER_PATH>
Assume:
IMAGE: ghost-cms
TAG: latest
LOCAL_PORT: 8080
CONTAINER_PORT: 8080
LOCAL_PATH: /persistent-volume
CONTAINER_PATH: /var/lib/ghost/content
Example:
First create /persistent-volume:
$ mkdir -p /persistent-volume
Then write this docker-compose.yaml:
version: '2'
services:
  cms:
    image: ghost-cms:latest
    ports:
      - 8080:8080
    volumes:
      - /persistent-volume:/var/lib/ghost/content
and bring it up:
docker-compose -f docker-compose.yaml up -d
Each container has its own isolated filesystem. Whoever writes an image's Dockerfile gets to decide what filesystem paths it uses, but since it's isolated from other containers and the host it's very normal to use "system" paths. As another example, the standard Docker Hub database images use /var/lib/mysql and /var/lib/postgresql/data. For a custom application you might choose to use /app/data or even just /data, if that makes sense for you.
If you're creating an image FROM a pre-existing image like this, you'll usually inherit its filesystem layout, so the mount point for your custom image would be the same as in the base image.
Flipping through the Ghost tutorials, it looks like most things you could want to do involve either using the admin UI or making manual changes in the content directory. The only files that change are in the CMS-Content named volume in your example (and even if you didn't name a volume, the Docker Hub ghost image declares an anonymous volume there). That means you can't create a derived image with a standard theme or other similar setup: you can't change the content directory from a derived image, and if you experiment with docker commit (not recommended) the image it produces won't include the content from the volume.
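The inheritance point can be sketched in a Dockerfile (assuming the Docker Hub ghost image as base; the my-theme directory is a hypothetical example):

```dockerfile
FROM ghost:latest
# The base image declares VOLUME /var/lib/ghost/content, and that declaration
# is inherited here: anything copied into that path is hidden at runtime
# once a (named or anonymous) volume is mounted over it.
COPY my-theme/ /var/lib/ghost/content/themes/my-theme/
```

At run time the volume's contents win, which is why a COPY like the one above is ineffective in a derived image.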

Moving a volume between containers with docker-compose

I have image A (some_laravel_project) and image B (laravel_module). Image A is a Laravel project laid out like this:
app
modules
core
  (volume from image B here)
config
As the tree above suggests, I want to share a volume from image B into image A using docker-compose, so that I can access the files from container B.
This is the docker-compose file I tried; I didn't receive any errors creating those images in GitLab CI. I checked, and the volume and its files are stored in the module_user:latest container.
I think I made a mistake mounting the volume into some_laravel_project.
version: '3'
services:
  laravel:
    image: some_laravel_project
    working_dir: /var/www
    volumes:
      - /var/www/storage
      - userdata:/var/www/Modules
  user:
    image: laravel_module
    volumes:
      - userdata:/user
volumes:
  userdata:
  webroot:
The method you used to share volumes across containers in docker-compose is the correct one. You can find this documented under docker-compose volumes:
"if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key. Use named volumes with services."
In your case, the directory /var/www/Modules in the laravel service will have the same content as /user inside the user service. (Note that Docker populates a named volume from the image's content only the first time it is mounted while empty; after that, the volume's existing contents take precedence.) You can verify this by going into each container and checking the directory, by running:
docker exec -it <container-name> bash

How to add files in docker container and make them accessible from other containers?

Short version:
I want to add files to a docker container in docker-compose or a Dockerfile, and I want to make them accessible from the other containers defined in my docker-compose file. How can I do that?
Long version:
I have a Python app in a container that uses a .csv file to generate a POJO machine-learning model.
I also have a Java app in a container that uses the POJO model and appends to the .csv file. The Java app has a fileWatcher() method implemented.
The containers are built from a docker-compose file that calls a Dockerfile for each of them, so I want to add the files this way and not with ad-hoc docker commands.
You can add the same named volume to different containers:
docker volume create --name volume_data
docker run -t -i -v volume_data:/public debian:jessie /bin/bash
docker run -t -i -v volume_data:/public2 debian:jessie /bin/bash
or in a docker-compose.yml:
services:
  assets:
    image: any_asset_image
    volumes:
      - assets:/public/assets
  proxy:
    image: nginx
    volumes:
      - assets:/public/assets
volumes:
  assets:
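Mapped onto the question, a sketch might look like this (the service names, build paths, and the /data mount point are assumptions, not from the question):

```yaml
services:
  python-ml:
    build: ./python-app
    volumes:
      - shared-data:/data   # writes the .csv and the generated POJO model here
  java-app:
    build: ./java-app
    volumes:
      - shared-data:/data   # fileWatcher() sees the same files
volumes:
  shared-data:
```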

How to sync code between container and host using docker-compose?

Until now, I have used a local LAMP stack to develop my web projects and deploy them manually to the server. For the next project I want to use docker and docker-compose to create a MariaDB container, an NGINX container and a project container, for easy developing and deploying.
When developing, I want my code directory on the host machine to be synchronised with the docker container. I know that can be achieved by running
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
in the CLI, but I want to do it within a docker-compose v2 file.
So far I have a docker-compose.yml file looking like this:
version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - myCodeVolume:/var/www
volumes:
  myCodeVolume:
How can I synchronise my /var/www directory in the container with my host machine (Ubuntu desktop, macos or Windows machine)?
Thank you for your help.
It is pretty much the same way: you put the host:container mapping directly under the services.myProj.volumes key in your compose file:
version: '2'
services:
...
myProj:
...
volumes:
  - /path/to/file/on/host:/var/www
Note that the top-level volumes key is removed.
This file could be translated into:
docker create --link db -v /path/to/file/on/host:/var/www myProj
When docker-compose finds a top-level volumes section, it first runs docker volume create for each key under it before creating any container. Those volumes can then be used to hold data you want to persist across containers.
So, taking your file as an example, it would translate into something like this:
docker volume create myCodeVolume
docker create --link db -v myCodeVolume:/var/www myProj
