Docker Compose Continuous Deployment setup

I am looking for a way to deploy docker-compose images and/or builds to a remote server, specifically but not limited to a DigitalOcean VPS.
docker-compose currently runs on the CircleCI Continuous Integration service, where it automatically verifies that tests pass. But it should also deploy automatically on success.
My docker-compose.yml looks like this:
version: '2'
services:
  web:
    image: name/repo:latest
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo
    command: --smallfiles
    volumes:
      - ./data/mongodb:/data/db
  redis:
    image: redis
    volumes:
      - ./data/redis:/data
docker-compose.override.yml:
version: '2'
services:
  web:
    build: .
The relevant part of circle.yml:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest

Your docker-compose and circle configurations are already looking pretty good.
Your docker-compose.yml is already set up to pull the image from Docker Hub, where it is uploaded once the tests have passed. We will use this prepared image on the remote server instead of building it from scratch every time, which takes a long time.
You did well to separate build: . into a docker-compose.override.yml file, as precedence issues can arise if we use a docker-compose.prod.yml file.
Let's get started with the deployment:
There are various ways of getting your deployment done. The most popular ones are probably SSH and Webhooks.
We'll use SSH.
Edit your circle.yml config to add an additional step that runs our .scripts/deploy.sh bash script:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
      - .scripts/deploy.sh
deploy.sh will contain a few instructions that connect to our remote server through SSH, update both the repository and the Docker images, and reload the Docker Compose services.
Before executing it, you should have a remote server that contains your project folder (i.e. git clone https://github.com/zurfyx/my-project) and has both Docker and Docker Compose installed.
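One small, easily missed step: the script must be committed with its executable bit set, otherwise the CI step may fail with a permission error. On a Unix machine that is simply:
chmod +x .scripts/deploy.sh
git add .scripts/deploy.sh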
deploy.sh
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
(
cd "$DIR/.." # Go to project dir.
ssh $SSH_USERNAME@$SSH_HOSTNAME -o StrictHostKeyChecking=no <<-EOF
    cd $SSH_PROJECT_FOLDER
    git pull
    docker-compose pull
    docker-compose stop
    docker-compose rm -f
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
EOF
)
Notice: the closing EOF is not indented. That's how bash heredocs work.
deploy.sh steps explained:
ssh $SSH_USERNAME@$SSH_HOSTNAME: connects to the remote host through SSH. -o StrictHostKeyChecking=no stops SSH from asking whether we trust the server.
cd $SSH_PROJECT_FOLDER: browses to the project folder (the one you gathered through git clone ...).
git pull: updates the project folder. That's important to keep docker-compose / Dockerfile up to date, as well as any shared volume that depends on some source code file.
docker-compose pull: downloads the latest images from Docker Hub (the ones the CI pushed after the tests passed).
docker-compose stop: stops the docker-compose services that are currently running.
docker-compose rm -f: removes the docker-compose services. This step is really important; otherwise we'll reuse old volumes.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d: starts docker-compose.prod.yml, which extends docker-compose.yml, in detached mode.
On your CI you will need to fill in the following environment variables (that the deployment script uses):
$SSH_USERNAME: your SSH username (i.e. root)
$SSH_HOSTNAME: your SSH hostname (i.e. stackoverflow.com)
$SSH_PROJECT_FOLDER: the folder where the project is stored, relative or absolute to where $SSH_USERNAME lands on login (i.e. my-project/)
What about the SSH password? CircleCI in this case offers a way to store SSH keys, so a password is no longer needed when logging in through SSH.
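A hedged sketch of that key-based setup (the file names are placeholders): generate a dedicated deployment key, authorize its public half on the server, and paste the private half into CircleCI's SSH permissions settings:
# Generate a dedicated deployment key with no passphrase...
ssh-keygen -t rsa -b 4096 -f deploy_key -N ""
# ...and authorize its public half on the remote server.
ssh-copy-id -i deploy_key.pub $SSH_USERNAME@$SSH_HOSTNAME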
Otherwise simply edit the deploy.sh SSH connection to something like this:
sshpass -p your_password ssh user@hostname
In conclusion, all we had to do was create a script that connects to our remote server to let it know that the source code has been updated, and then performs the appropriate upgrade steps.
FYI, that's similar to how the alternative Webhooks method works.

Watchtower solves this for you.
https://github.com/v2tec/watchtower
Your CI just needs to build the images and push them to the registry. Then Watchtower polls the registry every N seconds and automagically restarts your services using the latest and greatest images. It's as simple as adding this to your compose YAML:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /root/.docker/config.json:/config.json
  command: --interval 30
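By default Watchtower watches every running container on the host. If you'd rather opt services in explicitly, it also supports a label filter; a hedged sketch, using the flag and label name as documented by the project:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: --interval 30 --label-enable
web:
  image: name/repo:latest
  labels:
    - "com.centurylinklabs.watchtower.enable=true"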

Related

How to run multiple docker containers when running docker-compose up (gitlab-ci)

I need to deploy a new container each time I run "docker-compose up", because the container will run a SQL Server database in a GitLab pipeline for each merge request created in the repository.
Is there a flag that should be passed to do this? I know about --force-recreate, but it recreates the SAME container. I need every call to docker-compose up to create another container with the same configuration.
There is --scale SERVICE=NUM, but it is not what I need. Why? Because when I scale, I cannot control which host port Docker will grab and use.
How do I intend to do this? With an environment variable. Look:
docker-compose file
version: '2'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: ${CI_PIPELINE_ID}
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=${DATABASE_PASSWORD}
    ports:
      - "${CI_PIPELINE_ID}:1433"
my gitlab-ci:
stages:
  - database_deploy
  - build_and_test
  - database_stop

database_deploy:
  image: docker:latest
  stage: database_deploy
  services:
    - name: docker
  script:
    - apk add py-pip
    - pip install docker-compose==1.8.0
    - cd ./docker; docker-compose up -d; docker ps

build_and_test:
  image: maven:latest
  stage: build_and_test
  script:
    - mvn test -Dquarkus.test.profile=homolog
    - mvn checkstyle:check
  artifacts:
    paths:
      - target

database_stop: &database_stop
  image: docker:latest
  stage: database_stop
  services:
    - name: docker
  script:
    - docker stop $CI_PIPELINE_ID
    - docker rm -f $CI_PIPELINE_ID
    - docker ps

cleanup_deployment_failure:
  needs: ["build_and_test"]
  when: on_failure
  <<: *database_stop
Docker-compose groups your services into "projects". By default, the project name is the name of the directory that contains your docker-compose.yml file. When you run docker-compose up, docker-compose will create any containers in the project that don't already exist.
Since you want docker-compose up to create new containers every time -- with different configurations -- you need to tell docker-compose that it's running in a different project each time. You can do this with the --project-name (-p) flag.
For example, let's say I have this docker-compose.yml:
version: "3"
services:
web:
image: "alpinelinux/darkhttpd"
ports:
- "${HOSTPORT}:8080"
I can bring up multiple instances of this stack by setting HOSTPORT and specifying a project name for each invocation of docker-compose:
$ HOSTPORT=8081 docker-compose -p project1 up -d
$ HOSTPORT=8082 docker-compose -p project2 up -d
After running those two commands, we see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
825ea98cca55 alpinelinux/darkhttpd "darkhttpd /var/www/…" 4 seconds ago Up 3 seconds 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp project2_web_1
776c12d38bbb alpinelinux/darkhttpd "darkhttpd /var/www/…" 9 seconds ago Up 8 seconds 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp project1_web_1
And I think that's exactly what you're looking for.
Note that with this configuration, you will need to specify the project name and a value for HOSTPORT every time you run docker-compose.
You can also set the project name using the COMPOSE_PROJECT_NAME environment variable. This means you can actually organize things using environment files.
We can reproduce the above behavior by creating project1.env with:
COMPOSE_PROJECT_NAME=project1
HOSTPORT=8081
And project2.env with:
COMPOSE_PROJECT_NAME=project2
HOSTPORT=8082
And then running:
$ docker-compose --env-file project1.env up -d
$ docker-compose --env-file project2.env up -d
As before, you'll need to provide --env-file every time you run docker-compose.
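Tying this back to the question's GitLab setup, a hedged sketch: CI_PIPELINE_ID is provided by GitLab in the job environment and the compose file above already interpolates it for the host port, so only the project name needs to be added (the db- prefix is an arbitrary choice):
# In the database_deploy job: one compose project per pipeline.
docker-compose -p "db-$CI_PIPELINE_ID" up -d
# In the database_stop job: tear down exactly this pipeline's stack.
docker-compose -p "db-$CI_PIPELINE_ID" down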

Docker containers refuse to communicate when running docker-compose in dind - Gitlab CI/CD

I am trying to set up some integration tests in Gitlab CI/CD - in order to run these tests, I want to reconstruct my system (several linked containers) using the Gitlab runner and docker-compose up. My system is composed of several containers that communicate with each other through mqtt, and an InfluxDB container which is queried by other containers.
I've managed to get to a point where the runner actually executes the docker-compose up and creates all the relevant containers. This is my .gitlab-ci.yml file:
image: docker:19.03

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - name: docker:19.03-dind
    alias: localhost

before_script:
  - docker info

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
As you can see, I am installing docker-compose, running compose up on my config yml file and then executing my integration tests from within one of the containers. When I run that final line on my local system, the integration tests run as expected; in the CI/CD environment, however, all the tests throw some variation of ConnectionRefusedError: [Errno 111] Connection refused errors. Running docker-compose ps seems to show all the relevant containers Up and healthy.
I have found that the issues arise whenever one container tries to communicate with another, through lines like self.localClient = InfluxDBClient("influxdb", 8086, database = "replay") or client.connect("mosquitto", 1883, 60). This works fine in my local docker environment, as the address names resolve to the other running containers, but it seems to create problems in this Docker-in-Docker setup. Does anyone have any suggestions? Do containers in this dind environment have different names?
It is also worth mentioning that this could be a problem with my docker-compose.yml file not being configured correctly to start healthy containers. docker-compose ps suggests they are up, but is there a better way to check whether they are running correctly? Here's an excerpt of my docker-compose file:
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    volumes:
      - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web:
  influxnet:
    internal: true
  brokernet:
    driver: bridge
    internal: true
There are a few possibilities as to why this error is occurring:
A bug in Docker 19.03-dind is known to make it unable to create networks when using services without a proper TLS setup. Have you correctly set up your GitLab Runner with TLS certificates? I've noticed you are using "/certs" in your gitlab-ci.yml; did you mount your runner to share the volume where the certificates are stored?
If your GitLab Runner is not running with privileged permissions or is not correctly configured to use the remote machine's network socket, you won't be able to create networks. A simple solution to unify your networks in a CI/CD environment is to configure your machine using this docker-compose followed by this script. (Source) It'll set up a local network where containers can communicate using hostnames, with a bridged network driver.
There's an issue with gitlab-ci.yml as well. When you execute this part of the script:
services:
  - name: docker:19.03-dind
    alias: localhost

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
You're aliasing the dind service's hostname to localhost, but you never use it; instead, you directly use the docker and docker-compose binaries from your image, binding them to a different set of networks than the ones created automatically by GitLab.
Let's try this solution (albeit I couldn't test it right now, so I apologize if it doesn't work right away):
gitlab-ci.yml
image: docker/compose:debian-1.28.5 # You should be running as a privileged Gitlab Runner

services:
  - docker:dind

integration-tests:
  stage: test
  script:
    #- apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
docker-compose.yml
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    # volumes: You're mounting your volume to an ephemeral folder, which is in the CI pipeline and will be wiped afterwards (if you're using Docker-DIND)
    #   - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web: # hostnames are created automatically, you don't need to specify a local setup through localhost
  influxnet:
  brokernet:
    driver: bridge # If you're using a bridge driver, an overlay2 doesn't make sense
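On the side question of whether the containers are really running correctly: docker-compose ps only reports container state, not application readiness. A hedged sketch of a compose healthcheck for the mosquitto service (the test command and timings are assumptions; adjust them to your images):
mosquitto:
  image: eclipse-mosquitto:latest
  healthcheck:
    # $$ escapes the dollar sign so compose doesn't try to interpolate $SYS
    test: ["CMD-SHELL", "mosquitto_sub -t '$$SYS/#' -C 1 -W 3 || exit 1"]
    interval: 10s
    timeout: 5s
    retries: 5
With a healthcheck defined, docker-compose ps shows (healthy) or (unhealthy) next to the state, which is a stronger signal than plain Up.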
Both of these commands together will install a GitLab Runner as a Docker container without the hassle of having to configure it manually to allow for socket binding on your project.
(1):
docker run --detach --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
And then (2):
docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --description "monitoring cluster instance" \
  --url "https://gitlab.com" \
  --registration-token "replacethis" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --locked=true \
  --docker-privileged=true \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
Remember to change your token in command (2).
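If you want to double-check that the registration worked, a hedged verification step (the container name matches command (1)):
docker exec gitlab-runner gitlab-runner verify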

Gitlab docker backup and restore

I am using GitLab via docker on an intranet disconnected from the internet.
I run the GitLab docker image using docker-compose with the following yml file:
web:
  image: 'gitlab/gitlab-ee:latest'
  restart: always
  hostname: 'myowngit.com'
  ports:
    - 8880:80
    - 8443:443
  volumes:
    - /srv/gitlab/config:/etc/gitlab
    - /srv/gitlab/logs:/var/log/gitlab
    - /srv/gitlab/data:/var/opt/gitlab
Then the free space for these volumes was not enough, so I moved the path to /mnt/mydata and modified the docker-compose.yml file:
... ... ...
volumes:
  - /mnt/mydata/gitlab/config:/etc/gitlab
  - /mnt/mydata/gitlab/logs:/var/log/gitlab
  - /mnt/mydata/gitlab/data:/var/opt/gitlab
To start the GitLab service I run sudo docker-compose up -d.
After running the GitLab service I try to explore a project repository, but the repository is not found (HTTP response 404 or 503).
What is the reason?
How to move GitLab docker volume directory?
It should work unless, as shown in docker-gitlab issue 562, the move was done with a different ownership:
It should be okay to move the files from /data1/data to /data2/data, but you should take a little care while copying the files to the new location, i.e. either of these should be fine:
cp -a /data1/data /data2/data
rsync --progress -av /data1/data /data2/data
Simply doing cp -r /data1/data /data2/data will not preserve the ownership of the files, which will cause issues.
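A quick way to confirm the ownership survived the move (a hedged check; the paths are from the question, and the expected numeric ids depend on your GitLab image) is to compare numeric uid/gid on both sides:
# Print owner uid, gid and name for each entry; the id columns should match line for line.
ls -ln /srv/gitlab/data | awk '{print $3, $4, $9}'
ls -ln /mnt/mydata/gitlab/data | awk '{print $3, $4, $9}'
If the id columns differ, re-copy with cp -a or rsync -av as quoted above.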

How to deploy container using docker-compose to google cloud?

I'm quite new to GCP and have been using mostly AWS. I am currently trying to play around with GCP and want to deploy a container using docker-compose.
I set up a very basic docker-compose.yml file as follows:
# docker-compose.yml
version: '3.3'
services:
  git:
    image: alpine/git
    volumes:
      - ${PWD}:/git
    command: "clone https://github.com/PHP-DI/demo.git"
  composer:
    image: composer
    volumes:
      - ${PWD}/demo:/app
    command: "composer install"
    depends_on:
      - git
  web:
    image: php:7.4-apache
    ports:
      - "8080:${PORT:-80}"
      - "8000:${PORT:-8000}"
    volumes:
      - ${PWD}/demo:/var/www/html
    command: php -S 0.0.0.0:8000 -t /var/www/html
    depends_on:
      - composer
So the container will get the code from git, then install the dependencies using composer and finally be available on port 8000.
On my machine, running docker-compose up does everything. But how can I push this docker-compose setup to Google Cloud?
I have tried building a container using the docker/compose image and a Dockerfile as follows:
FROM docker/compose
WORKDIR /opt
COPY docker-compose.yml .
WORKDIR /app
CMD docker-compose -f /opt/docker-compose.yml up web
Then I pushed the container to the registry, and from there I tried deploying to:
Cloud Run - did not work, as I could not find a way to specify a mounted volume for /var/run/docker.sock
Kubernetes - I mounted the docker.sock, but I keep getting an error in the logs that /app from the git service is read-only
Compute Engine - same error as above
I don't want to make a container by copying all the local files into it and then uploading it, as the dependencies could be really big, making a heavy container to push.
I have a working docker-compose and just want to use it on GCP. What's the easiest way?
This can be done by creating a cloudbuild.yaml file in your project root directory.
Add the following step to cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
On Google Cloud Platform > Cloud Build: configure the file type of your build configuration as Cloud Build configuration file (yaml or json) and enter the file location: cloudbuild.yaml.
If the repository event that invokes the trigger is set to "push to a branch", then Cloud Build will run docker-compose up against your docker-compose.yml to build and start your containers on every push.
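If you prefer not to wait for a repository push, the same build can also be submitted by hand from the project root (assuming the gcloud CLI is installed and authenticated against your project):
# Submit the build defined by cloudbuild.yaml to Cloud Build.
gcloud builds submit --config cloudbuild.yaml .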
Take a look at Kompose. It can help you convert the docker-compose instructions into Kubernetes-specific deployment and service definitions. You can then apply those Kubernetes files against your GKE cluster. Note that you will have to build the containers and store them in Container Registry first and update the image tags in the service definitions accordingly.
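A minimal sketch of that flow (assuming kompose and kubectl are installed and kubectl already points at your GKE cluster; the k8s/ output directory is an arbitrary choice):
# Convert the compose file into Kubernetes deployment/service manifests...
kompose convert -f docker-compose.yml -o k8s/
# ...then apply the generated manifests to the cluster.
kubectl apply -f k8s/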
If you are trying to set up the same as an on-premise VM in GCE, you can install these (Docker and Docker Compose) on the instance and run. Ref: https://dev.to/calvinqc/the-easiest-docker-docker-compose-setup-on-compute-engine-1op1

Difference between docker-compose and manual commands

What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up fails either to start the api container at all or, just as often, exits with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database to bring up just the database, and then start up the api container without using compose but instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables set up in the docker-compose.yml file, then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third, different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully, but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container, and I can confirm that the environment variables set in docker-compose.yml are all correct.
Desired behaviour
I would like to get the result I achieve by running the database in the background and then manually setting the environment in the api container's shell, simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the api container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell Docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the api container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the api service in your docker-compose.yml and set it to true:
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting
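Since the error mentions hGetLine on <stdin>, the service may also need stdin kept open; stdin_open is the compose equivalent of docker run's -i flag. A hedged addition, in case tty alone is not enough:
api:
  build: api
  tty: true        # equivalent of docker run -t
  stdin_open: true # equivalent of docker run -i
As for the third behaviour: docker-compose run does not publish the service's ports by default, which would explain why the page is unreachable; docker-compose run --service-ports api applies the "3000:3000" mapping.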
