I want to deploy a Vue.js app inside a Docker nginx container, but before that container runs, the Vue.js source has to be compiled via npm run build. I want the compilation to run in a container and then exit, leaving only the compiled result for the nginx container.
Every time docker-compose up is run, the Vue.js app has to be recompiled, because there is a .env file on the host OS that has to be volume-mounted, and the variables in it could be updated.
The ideal way, I think, would be some way of creating stages for docker-compose, like in GitLab CI: there would be a build stage, and when that's finished, the nginx container would start. But when I looked this up I couldn't see a way to do it.
What would be the best way to compile my Vue.js app every time docker-compose up is run?
If you're already building your Vue.js app into a container (with a Dockerfile), you can make use of the build directive in your docker-compose.yml file. That way, you can use docker-compose build to build images manually, or use up --build to build them before the containers launch.
For example, this Compose file defines a service using a container build file, instead of a prebuilt image:
version: '3'
services:
  vueapp:
    build: ./my_app  # There should be a Dockerfile in this directory
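For example, ./my_app/Dockerfile could use a multi-stage build, so the compile step runs in a throwaway node stage and only the compiled output ends up in the nginx image. This is a minimal sketch, assuming a standard Vue CLI project that outputs to dist; the base images and paths are assumptions:
# Stage 1: compile the Vue.js app in a disposable node image
FROM node:lts AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: copy only the compiled result into the nginx image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html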
That means I can both build containers and run services separately:
docker-compose build
docker-compose up
Or, I can use the build-before-run option:
# Build containers, and recreate if necessary (build cache will be used)
docker-compose up --build
If your .env file changes (and containers don't pick up changes on restart), you might consider defining those variables in the container build file. Otherwise, consider putting the .env file into a directory and mounting the directory, not the file, because some editors use a swap file and change the inode, and this breaks the mount. If you mount a directory and change files within it, the changes will be reflected in the container, because the parent directory's inode didn't change.
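For example, a sketch of mounting the directory rather than the file; the paths here are placeholders:
volumes:
  - ./config:/app/config   # the .env file lives inside ./config; editing it keeps the mount intact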
I ended up having an nginx container that reads the files from a volume mount, and a container that builds the app and places the files in the same volume mount. While the app is compiling, nginx serves the old version, and when the compilation is finished, the files get replaced with the new ones.
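A minimal compose sketch of that setup; the service names, the named volume dist, and the build context are assumptions:
version: '3'
volumes:
  dist:
services:
  builder:
    build: ./my_app                 # image with node and the app sources
    command: sh -c "npm run build && cp -r dist/. /output/"
    volumes:
      - dist:/output                # exits after the copy, leaving fresh files
  nginx:
    image: nginx:alpine
    ports:
      - '80:80'
    volumes:
      - dist:/usr/share/nginx/html:ro
Because the builder exits after copying, docker-compose up effectively gives you a build stage followed by the long-running nginx service.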
I created one Docker image for a custom application which needs some license files (some files and one directory) to run, so I'm using the COPY command in the Dockerfile to copy the license files to the image:
# Base image: Application installed on Debian 10 but unlicensed
FROM me/application-unlicensed
# Copy license files
COPY *.license /opt/application/
COPY application-license-dir /var/lib/application-license-dir
I use this Dockerfile to build a new image with the license for a single container. As I have 5 different licenses, I created 5 different images, each with one specific license file and directory.
The licenses are also tied to a MAC address, so when I run one of the five containers I specify its own MAC address with the --mac-address parameter:
docker run --rm --mac-address AB:CD:EF:12:34:56 me/application-license1
This works, but I wish to have a better and smarter way to manage this:
Since docker-compose makes it possible to specify the container MAC address, could I just use the unlicensed base image and copy the license files and the license directory when I build the 5 containers with docker-compose?
Edit: let me better explain the structure of the license files
The application is deployed into the /opt/application directory in the Docker image.
License files (*.license) are in /opt/application at the same level as the application itself, so they cannot be saved into a Docker volume unless I create some symlinks (but I have to check if the application will work this way).
The directory application-license-dir needs to be at /var/lib/application-license-dir, so it can be mounted as a Docker volume (I have to check if the application will work, but I think so).
Both the *.license files and the files in the application-license-dir are binary, so I cannot script or create them at runtime.
So:
1. Can docker-compose create a local directory on the Docker host server before binding and mounting it to a Docker volume?
2. Can docker-compose copy my license files and my license directory from the GIT repository (locally cloned) to the local directory created during step 1?
3. Can docker-compose create some symlinks into the container's /opt/application directory for the *.license files stored in the volume?
For things that are different every time you run a container, or that differ when you run the container on a different system, you generally don't want to specify these in a Dockerfile. This includes the license files you show above; things like user IDs also match this pattern; depending on how fixed your configuration files are, they can count too. (For things that are the same every time you run the container, you do want them in your image; this especially includes the application source code.)
You can use a Docker bind mount to inject files into a container at run time. There is Compose syntax for bind mounts using the volumes: directive.
This would give you a Compose file roughly like:
version: '3'
services:
  app1:
    image: me/application-unlicensed
    volumes:
      - './app1.license:/opt/application/app.license'
      - './application-license-dir:/var/lib/application-license-dir'
    mac_address: 'AB:CD:EF:12:34:56'
Bind mounts like this are a good match for pushing configuration files into containers. They can provide an empty host directory into which log files can be written, but aren't otherwise a mechanism for copying data out of an image. They're also useful as a place to store data that needs to outlive a container, if your application can't store all of its state in an external database.
According to this commit, docker-compose has mac_address support.
Mounting the license files with -v could be an option.
You can set mac_address for the different containers as mac_address: AB:CD:EF:12:34:12. For documentation reference, see this.
For creating multiple instances from the same image, you will have to copy-paste each app block 5 times in your docker-compose file, and in each you can set a different mac_address.
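For instance, a sketch with two of the five services; the image name comes from the question, but the MAC addresses and mount paths are placeholders:
version: '3'
services:
  app1:
    image: me/application-unlicensed
    mac_address: 'AB:CD:EF:12:34:11'
    volumes:
      - './licenses/app1.license:/opt/application/app1.license'
      - './licenses/app1-dir:/var/lib/application-license-dir'
  app2:
    image: me/application-unlicensed
    mac_address: 'AB:CD:EF:12:34:12'
    volumes:
      - './licenses/app2.license:/opt/application/app2.license'
      - './licenses/app2-dir:/var/lib/application-license-dir'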
I'm trying to set up a CI pipeline which uses a Docker container to run tests. The pipeline is supposed to create a container based on an image I already have and remove that container when it's finished.
For my tests I need to mount a few volumes and bind a few ports from my runner to my container, so to simplify things I want to use a docker-compose file that's permanently stored at /home/runner/docker/docker-compose.yml on my runner.
The problem is as follows:
in my docker-compose.yml I have the following lines, binding the current working directory to the HTML folder in my container:
volumes:
  - .:/var/www/html
When I use the command docker-compose -f "/home/runner/docker/docker-compose.yml" up -d, . should be whichever folder GitLab CI cloned my project to, not /home/runner/docker as is currently the case.
Is there a way to make it so that . is my cloned project folder (without hardcoding the name), or am I better off just executing a docker run in my GitLab CI script?
One option could be to use an environment variable to define the path to the repo, so that instead of
volumes:
  - .:/var/www/html
you have
volumes:
  - ${YOUR_REPO}:/var/www/html
This way you only need to set YOUR_REPO before running docker-compose and that's it.
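In a GitLab CI job, for instance, the predefined CI_PROJECT_DIR variable already points at the cloned project, so a minimal sketch would be:
export YOUR_REPO="$CI_PROJECT_DIR"
docker-compose -f /home/runner/docker/docker-compose.yml up -d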
My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc.). I don't want these files built into the Docker image, however they need to be visible to the Docker container.
My solution is:
1. Add .credentials to my .dockerignore
2. Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? e.g. I could just pass the .env file with docker run --env-file .env IMAGE_NAME
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this is not the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image, and you do not specify the source of volumes there. Only the Dockerfile and your build context are used as inputs to the build. The rest of the compose file is run-time settings that apply to containers. Many projects do not even use the compose file for building the image, so for those projects all settings in the compose file are a way to define the default settings for containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
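To illustrate the difference, assuming the compose service is named app and the built image is tagged IMAGE_NAME:
docker-compose build            # builds the image; volumes play no part here
docker run IMAGE_NAME           # ignores docker-compose.yaml, so no volumes are mounted
docker-compose up -d            # creates the container with the volumes from the compose file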
I am using docker-compose to start a few machines with different services.
During the build phase docker-compose creates a few files I would like to work on; at the moment I manually copy those files to a directory I share with the host after the build.
The directory contents are shared via volumes.
volumes:
  - "./prg:/prg"
Perhaps I am missing the obvious, but is there a way to populate the shared directory's content during the build phase, so that when I do docker-compose up the first time I can start editing the files in prg?
Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources into a folder named build, as they are needed to run the server. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources, run docker-compose up --build, and everything is running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the docker container.
My first attempt, which almost worked, was the following:
version: "2"
services:
nginx:
volumes_from:
- node
node:
volumes:
- /code/build
When I first run docker-compose build && docker-compose up, everything seems to work fine: the volume is created from container A with the build files inside it, and container B can access them as expected.
However, the issue happens when the sources are updated. When that happens, the new build files do not replace the old ones, because the existing volume seems to take priority. So after the first time I always have old files for both container A and B.
I investigated a way to force the volume to be recreated from scratch every time I run docker-compose build, but did not find anything. The only thing I found would be to use docker-compose stop && docker-compose rm, but it seems a bit hacky to do that every time, and in addition it leads to quite a long downtime compared to just replacing the existing containers with the new version via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?
I'd redo the workflow: use a named volume that's mounted in multiple containers, where one of those containers is an updater that has the application build environment. Then on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
build:
driver: local
services:
nginx:
volumes:
- build:/code/build
updater:
volumes:
- build:/code/build
Then on any changes, you can run docker-compose run updater and it will push the latest changes to your volume, where nginx can use them without ever stopping your other containers. Since it's a batch job that exits, even a docker-compose up would launch the updater again.
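A sketch of what the updater's ENTRYPOINT script could do; the repository URL and build commands are placeholders:
#!/bin/sh
# Fetch the latest sources, build them, and sync the result
# into the shared volume that nginx serves from.
set -e
if [ -d /src/.git ]; then
  git -C /src pull
else
  git clone https://example.com/your/repo.git /src
fi
cd /src
npm install
npm run build
cp -r /src/build/. /code/build/   # /code/build is the shared named volume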