Creating a directory using a Docker Swarm compose/YAML file

Is there a way to create a directory on the local file system via the YAML file if it does not exist?
I currently mount a directory from my local file system inside the container and it works. But if the directory does not exist on the file system, the container launch fails because the directory cannot be mounted. How can I make this as seamless as possible and embed the directory-creation logic in the swarm YAML file?

As far as I know, docker-compose doesn't permit this; you'd have to do it by hand.
But you could also use an automation tool like Puppet or Ansible to handle such a step: deploy your application, create the appropriate directories, and set up your servers.
Here is how your tasks could look in an Ansible playbook that deploys a simple app and creates a directory to mount your containers' volumes on, for instance:
- name: copy docker content
  copy:
    src: /path/to/app_src
    dest: /path/to/app_on_server

- name: create directory for volume
  file:
    name: /path/to/mountpoint
    state: directory

- name: start containers
  shell: docker-compose up -d --build
  args:
    chdir: /path/to/app_on_server
(Note that this snippet is here to provide a general idea of the concept, you'd probably have to set up become directives, permissions, ownership, software installation and many other steps very specific to your application)
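You would then run the playbook against your host in the usual way; for example (the inventory and playbook names here are hypothetical):
ansible-playbook -i inventory deploy-app.yml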

The cleanest way would be to take the Dockerfile of, for example, the official Nginx image and add an additional RUN mkdir /my/folder to it.
Afterwards you build your own Docker image for Nginx via docker build .. Then you have a clean image that contains what you need, based on the official source.
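A minimal sketch of such a Dockerfile (the folder path is just an example):
FROM nginx
# Create the directory inside the image so the mount target always exists
RUN mkdir -p /my/folder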

Related

How to get a Foundry VTT Docker container to load a nested bound volume inside the /data directory?

I would like to try my hand at modifying the One Ring 1e system on Foundry VTT: https://gitlab.com/herve.darritchon/foundryvtt-tor1e.
In the repo, it is suggested to use Docker for contributing, and a docker-compose.yaml file is provided to simplify the deployment of Foundry VTT and jumpstart development.
I have never used Docker before, but I think this is a great opportunity to try. After going through the getting-started tutorial, I am convinced I know everything about it. I clone the repo, define the mandatory environment variables, deploy the container and, finally, open the app in my browser.
I am amazed, it works... except for the fact that the One Ring system is nowhere to be found.
Here is what I think are the pieces of code relevant to my problem:
docker-compose.yaml
[...]
volumes:
  - type: bind
    source: ${FVTT_IN_DOCKER_DATA_DIRECTORY}
    target: /data
  - type: bind
    source: ${FVTT_IN_DOCKER_TOR1E_DIRECTORY}
    target: /data/data/systems/tor1e
[...]
Environment variables defined in the .env
FVTT_IN_DOCKER_DATA_DIRECTORY = ./data # where the data directory inside FVTT Container is mounted.
FVTT_IN_DOCKER_TOR1E_DIRECTORY = ../foundryvtt-tor1e-repo/src # the source directory of the TOR1E, used during dev mode.
[...]
If I check in the container with docker exec -it <container-id> ls /data/data/systems/tor1e, I can see the source folder of the repository is bound correctly.
If I copy the source of the repo into the /data/data/systems/tor1e folder of the filesystem and restart the container, Foundry does find the system and loads the content of the /data/data/systems/tor1e folder of the filesystem, while the content of /data/data/systems/tor1e in the container is still bound to ../foundryvtt-tor1e-repo/src on the filesystem. Maddening!
The question is: what can I do so that the Foundry VTT app loads the content of ../foundryvtt-tor1e-repo/src instead of what's in the /data/data/systems/tor1e folder?

Docker: copy file and directory with docker-compose

I created one Docker image for a custom application which needs some license files (some files and one directory) to run, so I'm using the COPY command in the Dockerfile to copy the license files to the image:
# Base image: Application installed on Debian 10 but unlicensed
FROM me/application-unlicensed
# Copy license files
COPY *.license /opt/application/
COPY application-license-dir /var/lib/application-license-dir
I use this Dockerfile to build a new image with the license for a single container. As I have 5 different licenses, I created 5 different images, each with one specific license file and directory.
The licenses are also tied to a MAC address, so when I run one of the five containers I specify its own MAC address with the --mac-address parameter:
docker run --rm --mac-address AB:CD:EF:12:34:56 me/application-license1
This works, but I wish to have a better and smarter way to manage this:
since docker-compose makes it possible to specify the container MAC address, could I just use the unlicensed base image and copy the license files and the license directory when I build the 5 containers with docker-compose?
Edit: let me better explain the structure of the license files.
The application is deployed into the /opt/application directory in the Docker image.
License files (*.license) are in /opt/application at the same level as the application itself, so they cannot be saved into a Docker volume unless I create some symlinks (but I have to check whether the application will work this way).
The directory application-license-dir needs to be at /var/lib/application-license-dir, so it can be mounted into a Docker volume (I have to check whether the application will work, but I think so).
Both the *.license files and the files in application-license-dir are binary, so I cannot script or create them at runtime.
So:
1. Can docker-compose create a local directory on the Docker host server before binding and mounting it to a Docker volume?
2. Can docker-compose copy my license files and my license directory from the Git repository (locally cloned) to the local directory created in step 1?
3. Can docker-compose create some symlinks into the container's /opt/application directory for the *.license files stored in the volume?
For things that are different every time you run a container, or when you run a container on a different system, you generally don't want to specify these in a Dockerfile. This includes the license files you show above; things like user IDs also match this pattern; depending on how fixed your configuration files are, they can also count. (For things that are the same every time you run the container, you do want them in your image; most notably, the application source code.)
You can use a Docker bind mount to inject files into a container at run time. There is Compose syntax for bind mounts using the volumes: directive.
This would give you a Compose file roughly like:
version: '3'
services:
  app1:
    image: me/application-unlicensed
    volumes:
      - './app1.license:/opt/application/app.license'
      - './application-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:56'
Bind mounts like this are a good match for pushing configuration files into containers. They can provide an empty host directory into which log files can be written, but aren't otherwise a mechanism for copying data out of an image. They're also useful as a place to store data that needs to outlive a container, if your application can't store all of its state in an external database.
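For example, the same service could provide an empty host directory for the application's logs following the same pattern (the paths here are hypothetical):
    volumes:
      - './app1-logs:/var/log/application'   # empty host directory the container writes logs into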
According to this commit docker-compose has mac_address support.
Mounting license files with -v could be an option.
You can set mac_address for the different containers as mac_address: AB:CD:EF:12:34:12. For documentation reference, see this.
For creating multiple instances from the same image, you will have to copy-paste each app block 5 times in your docker-compose file, and in each you can set a different mac_address.
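A rough sketch of what that could look like (image name, paths, and MAC addresses are just examples):
version: '3'
services:
  app1:
    image: me/application-unlicensed
    volumes:
      - './license1/app.license:/opt/application/app.license'
      - './license1/application-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:56'
  app2:
    image: me/application-unlicensed
    volumes:
      - './license2/app.license:/opt/application/app.license'
      - './license2/application-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:57'
  # ...repeat for the remaining licenses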

Working directory when running a Docker container with a Docker-Compose file

I'm trying to set up a CI pipeline which uses a Docker container to run tests. The pipeline is supposed to create a container based on an image I already have and remove that container when it's finished.
For my tests I need to mount a few volumes and bind a few ports from my runner to my container, so to simplify things I want to use a docker-compose file that is permanently stored at /home/runner/docker/docker-compose.yml on my runner.
The problem is as follows:
in my docker-compose.yml I have the following lines, binding the current working directory to the HTML folder in my container:
volumes:
  - .:/var/www/html
When I use the command docker-compose -f "/home/runner/docker/docker-compose.yml" up -d, . should be whichever folder GitLab CI cloned my project to, not /home/runner/docker as is currently the case.
Is there a way to make it so that . is my cloned project folder (without hardcoding the name), or am I better off just executing a docker run in my GitLab CI script?
One option could be to use an environment variable to define the path to the repo, so that instead of
volumes:
  - .:/var/www/html
you have
volumes:
  - ${YOUR_REPO}:/var/www/html
This way you only need to set YOUR_REPO before running docker-compose and that's it.
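In a GitLab CI job, that could look roughly like this (CI_PROJECT_DIR is a predefined GitLab CI variable holding the path the project was cloned to):
export YOUR_REPO="$CI_PROJECT_DIR"
docker-compose -f /home/runner/docker/docker-compose.yml up -d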

Recompiling VueJS app before docker-compose up

I want to deploy a Vue.js app inside a Docker nginx container, but before that container runs, the Vue.js source has to be compiled via npm run build. I want the compilation to run in a container and then exit, leaving only the compiled result for the nginx container.
Every time docker-compose up is run, the Vue.js app has to be recompiled, as there is a .env file on the host OS that has to be volume-mounted and the variables in it could be updated.
The ideal way, I think, would be some way of creating stages for docker-compose, like in GitLab CI, so there would be a build stage and, when that's finished, the nginx container starts. But when I looked this up I couldn't see a way to do it.
What would be the best way to compile my Vue.js app every time docker-compose up is run?
If you're already building your Vue.js app into a container (with a Dockerfile), you can make use of the build directive in your docker-compose.yml file. That way, you can use docker-compose build to create containers manually, or docker-compose up --build to build containers before they launch.
For example, this Compose file defines a service using a container build file, instead of a prebuilt image:
version: '3'
services:
  vueapp:
    build: ./my_app # There should be a Dockerfile in this directory
That means I can both build containers and run services separately:
docker-compose build
docker-compose up
Or, I can use the build-before-run option:
# Build containers, and recreate if necessary (build cache will be used)
docker-compose up --build
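Since the compilation then happens at image build time, one common approach (a sketch, not taken from the original setup; the Node version and paths are assumptions) is to make the Dockerfile in ./my_app a multi-stage build that compiles the app and keeps only the result in an nginx image:
# build stage: compile the Vue.js app
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# serve stage: keep only the compiled output
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html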
If your .env file changes (and containers don't pick up changes on restart), you might consider defining those variables in the container build file. Otherwise, consider putting the .env file into a directory (and mounting the directory, not the file, because some editors use a swap file and change the inode, and this breaks the mount). If you mount a directory and change files within the directory, the changes will be reflected in the container, because the parent directory's inode didn't change.
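In Compose terms, that means mounting the containing directory rather than the file itself (the paths here are just an example):
volumes:
  - ./config:/app/config   # the app reads /app/config/.env; editor-induced inode changes are fine
instead of
volumes:
  - ./config/.env:/app/.env   # a direct file mount breaks when the file's inode changes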
I ended up having an nginx container that reads the files from a volume mount, and a container that builds the app and places the files in the same volume mount. While the app is compiling, nginx serves the old version, and when the compilation is finished the files are replaced with the new ones.
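A sketch of that arrangement (the service names, build context, and output path are assumptions):
version: '3'
services:
  builder:
    build: ./my_app           # runs "npm run build" and exits
    volumes:
      - dist:/app/dist        # compiled output lands in the shared volume
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - dist:/usr/share/nginx/html:ro
volumes:
  dist: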

Wanted persistent data with Docker volumes but have empty directories instead

I'm writing a Dockerfile for a Java application, but I'm struggling with volumes: the mounted volumes are empty.
I've read the Dockerfile reference guide and the best practices for writing Dockerfiles but, for a start, my example is quite complicated.
What I want to do is to be able to have the following items on the host (in a mounted volume):
configuration folder,
log folder,
data folder,
properties files
Let me summarize:
When the application is installed (extracted from the tar.gz with the RUN command), it writes a bunch of files and directories (including log and conf).
When the application is started (with CMD or ENTRYPOINT), it creates a data folder if it doesn't exist and puts data files in it.
I'm only interested in:
/rootapplicationfolder/conf_folder
/rootapplicationfolder/log_folder
/rootapplicationfolder/data_folder
/rootapplicationfolder/properties_files
I'm not interested in /rootapplicationfolder/binary_files
There is something that I don't see. I've read and applied the information found in the two following links, without success.
Questions:
Should I 'mkdir' only the top-level dir on the host to be mapped to /rootapplicationfolder? What about the files?
Is the order of 'VOLUME' in my Dockerfile important?
Does it need to be placed before or after the deflating (RUN tar zxvf compressed_application)?
https://groups.google.com/forum/#!topic/docker-user/T84nlzw_vpI
Docker on Linux - Empty mounted volumes
Try using docker-compose; use the volumes property to set which paths you want to mount between your machine and your container.
Version 2 example:
web:
  image: my-web-app
  build: .
  command: bash -c "npm start"
  ports:
    - "8888:8888"
  volumes:
    - .:/home/app/code              # mounts your current path at /home/app/code
    - /home/app/code/node_modules/  # anonymous volume so the bind mount doesn't hide node_modules
  environment:
    NODE_ENV: development
You can look at this repository too.
https://github.com/williamcabrera4/docker-flask/blob/master/docker-compose.yml
Well, I've managed to get what I want.
First, I don't have any VOLUME directive in my Dockerfile.
All the shared directories are created with the -v option of the docker run command.
After that, I had issues with extracting the archive whenever the extraction would overwrite an existing directory mounted with -v, because that's simply not possible.
So I deflate the archive somewhere where the -v mounted volumes don't exist, and AFTER this step I mv the contents of deflated/somedirectory to the -v mounted directory.
I still had issues with Docker on CentOS: the mv would copy the files to the destination but would be unable to delete them at the source after the move. I got tired and simply used a Debian distribution instead.
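As a sketch, that extract-then-move pattern could look like this in an entrypoint script (all paths and the start command are hypothetical):
#!/bin/sh
# The image's RUN step extracted the tar.gz into /staging rather than the
# final location, so nothing collides with the -v mounts at build time.
# At run time, move the extracted content into the mounted directories,
# then start the application.
mv /staging/rootapplicationfolder/conf_folder/* /rootapplicationfolder/conf_folder/
mv /staging/rootapplicationfolder/log_folder/*  /rootapplicationfolder/log_folder/
exec /rootapplicationfolder/start.sh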
