Deploy and customize a VueJS app using Docker and GitLab CI

I made a VueJS app, basically a website administration app that lets you display and edit data through different APIs.
The app is customizable through env variables (VUE_APP_XXX): the URLs of the APIs, the title, the color theme, etc. About 30 variables so far, and I will certainly add more in the future.
For now I deploy the app using GitLab CI, with this Dockerfile (most env variables removed for clarity):
# build
FROM node:lts-alpine as build-stage
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ARG VUE_APP_API_URL
ARG VUE_APP_DATA_URL
ENV VUE_APP_API_URL $VUE_APP_API_URL
ENV VUE_APP_DATA_URL $VUE_APP_DATA_URL
RUN npm run build
# production
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and the gitlab-ci.yml :
docker-build:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - |
      docker build --pull \
        --build-arg VUE_APP_API_URL="$CI_TEST_API_URL" \
        --build-arg VUE_APP_DATA_URL="$CI_TEST_DATA_URL" \
        -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  only:
    - dev
  when: manual
Then on my server I just do a docker run... to launch my app. Everything works fine, except that:
- I have to specify all the variables manually in the Dockerfile and gitlab-ci.yml
- the resulting Docker image could contain sensitive data such as logins and passwords
- I have to create one image per instance of my app
For legal reasons I need to create one repository per app and one repository per website (because each of these could have a different owner).
So my question is: what would be the best approach to deploy many instances of this app, for many websites, knowing that each website needs its own admin app, which can be hosted on its own server?
I'm quite a newbie to Docker; I was thinking of:
- creating a Docker image for a generic app, with default env variables (maybe just with the code, without the npm run build step?)
- using this Docker image to create a container for each instance, with its own env variables (is it possible to build the app with docker compose?)
I'm quite confused about where/when to build the VueJS app, and what tools to use for that.
Any help appreciated, thanks!
Update:
Following taleodor's suggestion, I found a Medium post that seems to do the job. I found it a little 'tricky', but it works :)

When you have a Docker image and then deploy it, every tool has a way to override environment variables baked into the image itself.
So, for your example, with plain Docker you could do:
docker run -e VUE_APP_API_URL=http://my.url ...
This essentially overrides the VUE_APP_API_URL variable previously set on the container. Again, any orchestration platform (Docker Compose, Swarm, Kubernetes, etc.) can override environment variables, as this is a very common scenario.
So the usual approach I use is to pre-fill the environment variables inside the image with the most generic values, and then override them on actual deployments; that way you don't need many images (many images for the same thing would be an anti-pattern).
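With ~30 variables, docker run's --env-file flag keeps them off the command line entirely; a minimal sketch, assuming a per-site file such as site1.env (the file, registry, and image names here are made up):
# site1.env (hypothetical), one KEY=value pair per line:
#   VUE_APP_API_URL=https://api.site1.example
#   VUE_APP_TITLE=Site 1 Admin
docker run -d -p 80:80 --env-file ./site1.env registry.example.com/admin-app:latest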
If you have more questions, feel free to join our community Discord and you can ask me there: https://discord.gg/UTxjBf9juQ
Update (following the comment below): for static images with compiled UI code, the approach is to run an entrypoint script that does substitution over the compiled files. A workable pattern is the following (you might need tweaks to get it right, but this gives the idea):
Assuming you're using nginx as a base image, create an entrypoint.sh file that looks something like this (see this SO question for references: Can envsubst not do in-place substitution?):
#!/bin/sh
# Substitute env var references in the compiled JS bundles, then start nginx in the foreground
find /usr/share/nginx/html -name '*.js' -type f -exec sh -c "cat {} | envsubst | tee {}.tmp && mv {}.tmp {}" \;
nginx -g 'daemon off;'
Copy this entrypoint.sh file into your Docker image and run it via the ENTRYPOINT instruction.
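Note that this only works if the compiled bundle contains literal ${VUE_APP_...} placeholders, e.g. because the app was built with VUE_APP_API_URL='${VUE_APP_API_URL}'. A minimal stand-alone demo of what envsubst does (the file name and value are made up):
# write a fake compiled file containing a literal placeholder
echo 'const api = "${VUE_APP_API_URL}";' > /tmp/app.js
export VUE_APP_API_URL=https://api.example.com
envsubst < /tmp/app.js
# prints: const api = "https://api.example.com";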

Related

Best way to update config file in Docker with environment variables

I'm unable to find an easy solution, but probably I'm just searching for the wrong things:
I have a docker-compose.yml that contains a Tomcat service built from the contents of the /tomcat folder. In /tomcat there is a Dockerfile, a .war, and a server.xml.
The Dockerfile is based on tomcat:9 and copies the server.xml and .war files into the right directories.
If I do docker-compose up, everything runs fine. But I would love to find a way to update the connectors within the server.xml without pruning the image, adjusting the server.xml, and starting it again.
It would be perfect to put a $CONNECTOR_CONFIG placeholder in the server.xml and provide a variables.env to docker-compose where the $CONNECTOR_CONFIG variable is set to something like ""
I know I could adjust the server.xml within the Dockerfile with sed, but that way the image must be rebuilt every time I want to change something, right?
Is there a way that I can later just edit the variables.env and docker-compose down/up?
Regards,
EdFred
A useful pattern here is to use the image's ENTRYPOINT as a wrapper script that does first-time setup. If that script ends with exec "$@" then it will execute the image's CMD as normal. You can use this to do things like rewrite configuration files based on environment variables.
#!/bin/sh
# docker-entrypoint.sh
# Replace any environment variable references in server.xml.tmpl.
# (Assumes the image has the full GNU tool set.)
envsubst <"$CATALINA_BASE/conf/server.xml.tmpl" >"$CATALINA_BASE/conf/server.xml"
# Run the standard container command.
exec "$#"
Normally in a tomcat image you wouldn't include a CMD since the base image knows how to start Tomcat. The Docker Hub tomcat image page has a mention of it, or you can click through to find the original Dockerfile. You need to know this since specifying an ENTRYPOINT in a derived Dockerfile will reset the CMD.
Your Dockerfile then needs to COPY this script in and set up the ENTRYPOINT and CMD.
# Dockerfile
FROM tomcat:9
COPY myapp.war /usr/local/tomcat/webapps/
COPY server.xml.tmpl /usr/local/tomcat/conf/
COPY docker-entrypoint.sh /usr/local/tomcat/bin/
# ENTRYPOINT _MUST_ be JSON-array form
ENTRYPOINT ["docker-entrypoint.sh"]
# Duplicate from base image
CMD ["catalina.sh", "run"]
You can verify this by hand using a docker run command. Any command you specify after the image name gets run instead of the CMD; but the main container command is still constructed by passing that command as arguments to the alternate ENTRYPOINT and so your wrapper script will run.
docker run --rm \
  -e CONNECTOR_CONFIG=test-connector-config \
  my-image \
  cat /usr/local/tomcat/conf/server.xml
In your final Compose setup, you can include the configuration as an environment: variable.
version: '3.8'
services:
  myapp:
    build: .
    ports: ['8080:8080']
    environment:
      CONNECTOR_CONFIG: ...
envsubst is a GNU gettext tool that replaces $ENVIRONMENT_VARIABLE references in text files. It's very useful for this specific case, but you can do the same work with sed or another text-processing tool, especially if you don't have the GNU tools available (in particular, in an Alpine-based image).
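For instance, a rough sed equivalent of the envsubst line above, replacing a literal token instead of a $VARIABLE reference (a sketch; it breaks if the value itself contains the '|' delimiter):
# the template would contain @CONNECTOR_CONFIG@ rather than ${CONNECTOR_CONFIG}
sed "s|@CONNECTOR_CONFIG@|$CONNECTOR_CONFIG|g" \
    "$CATALINA_BASE/conf/server.xml.tmpl" > "$CATALINA_BASE/conf/server.xml"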

How to use poetry file to build docker image?

I used an online tutorial (replit.com) to build a small flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the global hub and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define an entrypoint that will work. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chips as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
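For instance, you can point a second tag at the same build instead of rebuilding (a small sketch reusing the names above):
docker tag shantanuo/my-first-flask:v1 shantanuo/my-first-flask:latest
# both tags now refer to the same image and can be pushed separately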
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to with
docker run -p 8000:8000 shantanuo/my-first-flask:v1 [1]
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
[1] When running a server from a container, remember to publish the port the server is listening on. Also, never bind the server to localhost inside the container.
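A quick smoke test for the bind address (a hypothetical check; port and image names follow the example above):
docker run -d -p 8000:8000 --name flask-test shantanuo/my-first-flask:v1
curl -s http://localhost:8000/ || echo "no answer - server may be bound to 127.0.0.1"
docker rm -f flask-test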
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
&& poetry install \
&& poetry config virtualenvs.create true

What is the purpose of building a docker image inside a compose file?

I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well, I've seen it before, but what I'm curious about here is its purpose; I just can't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not build the image separately and use it here, like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say: "well, do as you wish!" The reason I'm asking is that I think there might be some benefits I'm not aware of.
PS:
I mostly use Docker for bringing up services (DNS, monitoring, etc.); I've never used it for development.
I have already read What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between running docker build yourself and referencing the result with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
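For instance, a command like docker build -f docker/Dockerfile --build-arg SOME_ARG=value -t my/app . can be written out once as a build: block (a sketch; the paths and arg names are invented, and the heredoc just makes it copy-pasteable):
cat > docker-compose.build-example.yml <<'EOF'
version: '3'
services:
  app:
    image: my/app
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        SOME_ARG: value
EOF
docker-compose -f docker-compose.build-example.yml build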
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.

Recommended way to handle empty vs existing DB in Dockerfile

I want to run M/Monit (https://mmonit.com/) in a docker container and found this Dockerfile: https://github.com/mlebee/docker-mmonit/blob/master/Dockerfile
I'm using it with a simple docker-compose.yml in my test environment:
version: '3'
services:
  mmonit:
    build: .
    ports:
      - "8080:8080"
    #volumes:
    #  - ./db/:/opt/mmonit/db/
It does work, but I want to extend the Dockerfile so that the path /opt/mmonit/db/ is exported as a volume. I'm struggling to implement the following behaviour:
When the volume mapped to /opt/mmonit/db/ is empty (for example, on first setup), the files from the install archive should be written to the volume. The db folder is part of the archive.
When the database file /opt/mmonit/db/mmonit.db already exists in the volume, it should not be overwritten in any circumstances.
I do have an idea how to script the required operations / checks in bash, but I'm not even sure if it would be better to replace the ENTRYPOINT with a custom start script or if it should be done by modifying the Dockerfile only.
That's why I ask for the recommended way.
In general the strategy you lay out is the correct path; it's essentially what the standard Docker Hub database images do.
The image you link to is a community image, so you shouldn't feel particularly bound to that image's decisions. Given the lack of any sort of license file in the GitHub repository you may not be able to copy it as-is, but it's also not especially complex.
Docker supports two "halves" of the command to run, the ENTRYPOINT and CMD. CMD is easy to provide on the Docker command line, and if you have both, Docker combines them together into a single command. So a very typical pattern is to put the actual command to run (mmmonit -i) as the CMD, and have the ENTRYPOINT be a wrapper script that does the required setup and then exec "$#".
#!/bin/sh
# I am the Docker entrypoint script
# Create the database, but only if it does not already exist:
if ! test -f /opt/mmonit/db/mmonit.db; then
  cp -a /opt/mmonit/db_base/. /opt/mmonit/db/
fi
# Replace this script with the CMD
exec "$@"
In your Dockerfile, then, you'd specify both the CMD and ENTRYPOINT:
# ... do all of the installation ...
# Make a backup copy of the preinstalled data
RUN cp -a db db_base
# Install the custom entrypoint script
COPY entrypoint.sh /opt/mmonit/bin/
RUN chmod +x /opt/mmonit/bin/entrypoint.sh
# Standard runtime metadata
USER monit
EXPOSE 8080
# Important: this must use JSON-array syntax
ENTRYPOINT ["/opt/mmonit/bin/entrypoint.sh"]
# Can be either JSON-array or bare-string syntax
CMD /opt/mmonit/bin/mmonit -i
I would definitely make these kinds of changes in a Dockerfile, either starting FROM that community image or building your own.
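A quick way to verify the copy-on-first-run behaviour (a sketch; the image and volume names are made up):
docker build -t my-mmonit .
docker volume create mmonit-db
docker run -d --name mmonit -v mmonit-db:/opt/mmonit/db my-mmonit
docker exec mmonit ls /opt/mmonit/db   # first run: seeded from db_base
docker restart mmonit                  # later runs: mmonit.db is left untouched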

How to have two images in a Dockerfile without using Docker Compose?

How can I have two images in the Dockerfile, linked together?
I don't want to use Docker Compose; I want something like this:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm i
COPY . /usr/src/app
EXPOSE 4000
CMD [ "npm", "start" ]
FROM mongo:latest as mongo
WORKDIR /data
VOLUME ["/data/db"]
EXPOSE 27017
But I do not know how to join the images.
Thank you
Compose is a tool for defining and running multi-container Docker applications.
Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Creating one image for node and mongo means you will have both of them inside the container, up and running (more resources, harder to debug, less stable container, logs will be hard to follow, etc).
Create separate images, so you can run any image independently:
$ docker run -d -p 4000:4000 --name node myNode
$ docker run -d -p 27017:27017 --name mongo myMongo
And I strongly recommend using Compose files; they will give you much more control over your whole environment.
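A minimal Compose file for that two-container setup might look like this (a sketch based on the question's ports; written via a heredoc so it can be pasted into a shell):
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  node:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
EOF
docker-compose up -d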
I think the simple solution is to create your own image from the mongo image and install Node manually.
You can have multiple FROM instructions in a Dockerfile:
FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another. Simply make a note of the last image ID output by the commit before each new FROM instruction. Each FROM instruction clears any state created by previous instructions.
From : https://docs.docker.com/engine/reference/builder/#from
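You can check what the two-FROM Dockerfile from the question actually produces (a quick experiment, not from the quoted docs):
docker build -t two-from-test .
docker inspect --format '{{.Config.Cmd}}' two-from-test
# prints mongo's default command, not ["npm","start"]: only the final stage
# survives, so multiple FROMs cannot give you two linked, running services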
