I have a docker-compose.yml file which specifies two services aaa and bbb as follows,
version: "3.4"
services:
aaa:
platform: linux/amd64
build: .
image: aaa
environment:
- ENV_VAR=1
volumes:
- ./data:/root/data
ports:
- 5900:5900
restart: on-failure
bbb:
image: bbb
build: ./service_directory
platform: linux/amd64
environment:
- PYTHONUNBUFFERED=1
volumes:
- ./data:/root/data
ports:
- 5901:5901
restart: on-failure
depends_on:
- aaa
I'm hoping to run both of the above services simultaneously on a Google Cloud VM via cloudbuild.yaml, which reads:
steps:
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  args: ['up']
tags: ['cloud-builders-community']
My deployment script looks like:
#!/bin/bash
container=mycontainer       # container name
pid=my-nginx-363907         # project ID
zone=us-west4-b
instance=instance-${zone}   # instance name

gcloud builds submit \
  --tag gcr.io/${pid}/${container} \
  --project=${pid}

gcloud compute instances create-with-container ${instance} \
  --zone=${zone} \
  --tags=http-server,https-server \
  --machine-type=e2-micro \
  --container-image gcr.io/${pid}/${container} \
  --project=${pid}

gcloud compute instances list --project=${pid}
Here's my directory structure:
project
|-- cloudbuild.yaml
|-- docker-compose.yml
|-- Dockerfile
|-- service_directory
    |-- Dockerfile
The docker compose up command does kick in, but it appears to build only service aaa, not bbb. Worse, service aaa does not actually appear to run on, or even be installed in, the VM instance. This is despite messages of apparent success:
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
ab4785dd-7c4e-413d-acf6-1fdc64308387 2022-09-29T11:25:28+00:00 6M17S gs://my-nginx-363907_cloudbuild/source/1664450585.644241-04488e692b644a6186d922270dfbe667.tgz gcr.io/my-nginx-363907/aaa (+1 more) SUCCESS
Can someone please explain how to run both the services on Google Cloud Compute Engine VM as specified by the docker-compose.yml file?
You probably don't need Cloud Build at all if you just want to run Docker Compose.
Cloud Build is most often used for (but is not limited to) building container images.
You could (but need not) use Cloud Build to build the two container images that your docker-compose.yml uses (aaa, bbb), but you would need to revise cloudbuild.yaml to perform only the build and push steps, and then revise docker-compose.yml to consume the images produced by Cloud Build.
I think you should create a Compute Engine VM, ensure that Docker, Docker Compose, and your build content (.) are available on it, and then run docker-compose up there as you would on any Linux machine.
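If you do keep Cloud Build in the picture for the builds, a minimal cloudbuild.yaml along those lines might look like this (a sketch only; the image names simply mirror your two compose services):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/aaa', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/bbb', './service_directory']
images:
- 'gcr.io/$PROJECT_ID/aaa'
- 'gcr.io/$PROJECT_ID/bbb'
On the VM, your docker-compose.yml would then point image: at gcr.io/<project>/aaa and gcr.io/<project>/bbb (with the build: keys removed), and a plain docker-compose up -d starts both services.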
Related
I have a few microservices. Jenkins builds these projects, creates docker images and publishes them to the artifactory.
I have another project for automation testing which is using these docker images.
We have a docker-compose file that has all the configuration of all microservice images.
Following is a sample docker-compose:
version: "1.0"
services:
my-service:
image: .../my-service:1.0.0-SNAPSHOT-1
container_name: 'my-service'
restart: always
volumes:
...
ports:
...
...
All these are working fine.
Now, to update the image, I have to manually change the image tag (to 1.0.0-SNAPSHOT-2) in docker-compose.
This is an issue because it involves human intervention. Is there any way to pull the newest Docker image without any change in docker-compose?
NOTE - I cannot create images with the latest tag; I get an error when publishing an image with the same name to the Artifactory (unauthorized: The client does not have permission for manifest: Not enough permissions to delete/overwrite artifact).
What you can actually do is use environment variable substitution in CLI commands (envsubst). Let me explain with an example scenario.
First, in the docker-compose.yaml, define an environment variable as the tag of the container:
version: "3"
services:
my-service:
image: .../my-service:$TAG
container_name: 'my-service'
restart: always
volumes:
...
ports:
...
...
Second, in the CLI (or terminal), define the environment variable with your version. This part is important, because here you decide the version tag for the container (you can execute bash commands to extract some ID, the last git commit, or whatever else you want to use as the tag; here are some ideas):
export TAG=1.0.0-SNAPSHOT-1
export TAG="$(bash /path/to/script/tag.sh)"
export TAG="$(git log --format="%H" -n 1)"
The third and last part is to execute envsubst and pipe its output into docker-compose to deploy your container. Note the pipe (|); it is essential for the execution:
envsubst < docker-compose.yaml | docker-compose up -d
link to envsubst
I use this format to deploy tagged containers in Kubernetes, but the idea is the same with Docker Compose:
envsubst < deployment.yaml | kubectl apply -f -
Also, change the version to 3 in your docker-compose.yaml. Good luck!
I need to deploy a new container each time I run docker-compose up, because the container will run a SQL Server database in a GitLab pipeline for each merge request created in the repository.
Is there a flag that should be passed to do this? I know about --force-recreate, but it recreates the SAME container. I need every invocation of docker-compose up to create another container with the same configuration.
There is --scale SERVICE=NUM, but it is not what I need. Why? Because when I scale, I cannot control which host port Docker will grab and use.
How do I intend to do this? Via an environment variable. Look:
docker-compose file
version: '2'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: ${CI_PIPELINE_ID}
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=${DATABASE_PASSWORD}
    ports:
      - "${CI_PIPELINE_ID}:1433"
my gitlab-ci:
stages:
  - database_deploy
  - build_and_test
  - database_stop

database_deploy:
  image: docker:latest
  stage: database_deploy
  services:
    - name: docker
  script:
    - apk add py-pip
    - pip install docker-compose==1.8.0
    - cd ./docker; docker-compose up -d; docker ps

build_and_test:
  image: maven:latest
  stage: build_and_test
  script:
    - mvn test -Dquarkus.test.profile=homolog
    - mvn checkstyle:check
  artifacts:
    paths:
      - target

database_stop: &database_stop
  image: docker:latest
  stage: database_stop
  services:
    - name: docker
  script:
    - docker stop $CI_PIPELINE_ID
    - docker rm -f $CI_PIPELINE_ID
    - docker ps

cleanup_deployment_failure:
  needs: ["build_and_test"]
  when: on_failure
  <<: *database_stop
Docker Compose groups your services into "projects". By default, the project name is the name of the directory that contains your docker-compose.yml file. When you run docker-compose up, Compose will create any containers in the project that don't already exist.
Since you want docker-compose up to create new containers every time -- with different configurations -- you need to tell docker-compose that it's running in a different project each time. You can do this with the --project-name (-p) flag.
For example, let's say I have this docker-compose.yml:
version: "3"
services:
web:
image: "alpinelinux/darkhttpd"
ports:
- "${HOSTPORT}:8080"
I can bring up multiple instances of this stack by setting HOSTPORT and specifying a project name for each invocation of docker-compose:
$ HOSTPORT=8081 docker-compose -p project1 up -d
$ HOSTPORT=8082 docker-compose -p project2 up -d
After running those two commands, we see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
825ea98cca55 alpinelinux/darkhttpd "darkhttpd /var/www/…" 4 seconds ago Up 3 seconds 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp project2_web_1
776c12d38bbb alpinelinux/darkhttpd "darkhttpd /var/www/…" 9 seconds ago Up 8 seconds 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp project1_web_1
And I think that's exactly what you're looking for.
Note that with this configuration, you will need to specify the project name and a value for HOSTPORT every time you run docker-compose.
You can also set the project name using the COMPOSE_PROJECT_NAME environment variable. This means you can actually organize things using environment files.
We can reproduce the above behavior by creating project1.env with:
COMPOSE_PROJECT_NAME=project1
HOSTPORT=8081
And project2.env with:
COMPOSE_PROJECT_NAME=project2
HOSTPORT=8082
And then running:
$ docker-compose --env-file project1.env up -d
$ docker-compose --env-file project2.env up -d
As before, you'll need to provide --env-file every time you run docker-compose.
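Applied to the GitLab scenario from the question, a sketch of the same idea using GitLab's built-in CI_PIPELINE_ID variable (assuming, as the question already does, that the pipeline ID is usable as a host port):
# deploy one stack per pipeline (CI_PIPELINE_ID is set by GitLab):
docker-compose -p "pipeline-${CI_PIPELINE_ID}" up -d
# tear it down later using the same project name:
docker-compose -p "pipeline-${CI_PIPELINE_ID}" down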
I am trying to set up some integration tests in Gitlab CI/CD - in order to run these tests, I want to reconstruct my system (several linked containers) using the Gitlab runner and docker-compose up. My system is composed of several containers that communicate with each other through mqtt, and an InfluxDB container which is queried by other containers.
I've managed to get to a point where the runner actually executes the docker-compose up and creates all the relevant containers. This is my .gitlab-ci.yml file:
image: docker:19.03

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - name: docker:19.03-dind
    alias: localhost

before_script:
  - docker info

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
As you can see, I am installing docker-compose, running compose up on my config yml file and then executing my integration tests from within one of the containers. When I run that final line on my local system, the integration tests run as expected; in the CI/CD environment, however, all the tests throw some variation of ConnectionRefusedError: [Errno 111] Connection refused errors. Running docker-compose ps seems to show all the relevant containers Up and healthy.
I have found that the issues stem from every time one container tries to communicate with another, through lines like self.localClient = InfluxDBClient("influxdb", 8086, database = "replay") or client.connect("mosquitto", 1883, 60). This works fine on my local docker environment as the address names resolve to the other containers that are running, but seems to be creating problems in this Docker-in-Docker setup. Does anyone have any suggestions? Do containers in this dind environment have different names?
It is also worth mentioning that this could be a problem with my docker-compose.yml file not being configured correctly to start healthy containers. docker-compose ps suggests they are up, but is there a better way to check whether they are running correctly? Here's an excerpt of my docker-compose file:
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    volumes:
      - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web:
  influxnet:
    internal: true
  brokernet:
    driver: bridge
    internal: true
There are a few possibilities as to why this error is occurring:
Docker 19.03-dind has a known bug where it is unable to create networks when used as a service without a proper TLS setup. Have you correctly set up your GitLab Runner with TLS certificates? I noticed you are using "/certs" in your gitlab-ci.yml; did you mount your runner to share the volume where the certificates are stored?
If your GitLab Runner is not running with privileged permissions, or is not correctly configured to use the remote machine's network socket, you won't be able to create networks. A simple solution for unifying your networks in a CI/CD environment is to configure your machine using this docker-compose followed by this script. (Source) It'll set up a local network where you can communicate between containers using hostnames, with a bridged network driver.
There's an issue with gitlab-ci.yml as well, when you execute this part of the script:
services:
  - name: docker:19.03-dind
    alias: localhost

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
You're aliasing the Docker host to localhost, but you never use the alias; instead, you call docker and docker-compose directly from your image, binding them to a different set of networks than the ones GitLab creates automatically.
Let's try this solution (albeit I couldn't test it right now, so I apologize if it doesn't work right away):
gitlab-ci.yml
image: docker/compose:debian-1.28.5 # You should be running as a privileged Gitlab Runner

services:
  - docker:dind

integration-tests:
  stage: test
  script:
    #- apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
docker-compose.yml
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    # volumes: You're mounting your volume to an ephemeral folder, which is in the CI pipeline and will be wiped afterwards (if you're using Docker-DIND)
    #   - ./data-replay:/data-replay
  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web: # hostnames are created automatically; you don't need a local setup through localhost
  influxnet:
  brokernet:
    driver: bridge # If you're using a bridge driver, an overlay2 doesn't make sense
The two commands below will install and register a GitLab Runner as a Docker container, without the hassle of having to configure socket binding manually for your project.
(1):
docker run --detach --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
And then (2):
docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --description "monitoring cluster instance" \
  --url "https://gitlab.com" \
  --registration-token "replacethis" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --locked=true \
  --docker-privileged=true \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
Remember to change your token in command (2).
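As an optional sanity check (a suggestion, not part of the original instructions), you can ask the registered runner to verify itself against GitLab:
docker exec gitlab-runner gitlab-runner verify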
So I need rolling updates with Docker on my single-node server. Until now, I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading the web, Docker Swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Carrying images around is heavy, I don't want my image to be public, and after all, the image is already here, so why should I move it across the internet?)
How can I set up my Docker service using a local image and the config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file, so it would be neither DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve such common features as rolling updates. This whole Swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. The image is only visible to your single node, which is beneficial in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Now you can run your stack from this node; it will use your local app image and benefit from image-based features (updates, rollbacks, etc.).
I do have a side note on your stack file, though: you are using the same env file for both services. Please mind that Swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also, note that this solution is only feasible on a single-node cluster; if you scale your cluster, you will have to use a registry. Registries don't have to be public: you can deploy a private registry on your cluster that only your nodes can access, or make it public; the accessibility of your registry is your choice.
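For reference, a private registry can be as small as the official registry image running on the node itself; a minimal sketch (the port and container name are assumptions):
docker run -d -p 5000:5000 --restart=always --name registry registry:2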
Hope this will help with your issue.
Instead of a Docker image, you can use a Dockerfile directly there. Please check the example below:
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you try a rolling update, Docker Swarm checks whether there is a change in the image used by the service; if so, it schedules the service update based on the update criteria you have set up and carries it out.
Let's say there is no change to the image; what happens then? Docker simply will not apply the rolling update. Technically, you can specify the --force flag to force-update the service, but it will just redeploy the service.
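For completeness, a forced redeploy without an image change would look like this (using the service name from the question):
docker service update --force myapp-staging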
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file for the swarm. You can secure the registry using SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a Docker registry server.
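Putting it together, a sketch of the tag-push-update cycle against such a local registry (the image name, tag, and stack name are made up for illustration):
docker build -t 127.0.0.1:5000/my-app:v2 .
docker push 127.0.0.1:5000/my-app:v2
docker service update --image 127.0.0.1:5000/my-app:v2 mystack_app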
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. As you have a build instruction, an image name is mandatory, because docker-compose otherwise doesn't know what to name the built image. Reference.
A registry server is needed if you are going to deploy the application across multiple servers. Since you mentioned it's a single-node deployment, just having the image pulled/built on the server is enough, but the private-registry approach is the recommended one.
My recommendation is: don't club all the services into a single docker-compose file. The reason is that when you deploy/destroy using a docker-compose file, all the services are taken down together, which is a kind of tight coupling. Of course, I understand that the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also, just an update:
A stack is just a way to group services in Swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ENV a $a
ENV b $b
ENV c $c # here a, b, and c have to be exported in the shell before building the image
ENTRYPOINT ./file-to-run
And you may need to run:
docker-compose build
docker-compose push   # optional; needed to push the image into the registry, in case a registry is used
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned above by @M.Hassan, I've explained the ideal, recommended way.
I am looking for a way to deploy docker-compose images and/or builds to a remote server, specifically (but not limited to) a DigitalOcean VPS.
docker-compose currently runs on the CircleCI continuous integration service, where it automatically verifies that tests pass. But it should also deploy automatically on success.
My docker-compose.yml looks like this:
version: '2'
services:
  web:
    image: name/repo:latest
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo
    command: --smallfiles
    volumes:
      - ./data/mongodb:/data/db
  redis:
    image: redis
    volumes:
      - ./data/redis:/data
docker-compose.override.yml:
version: '2'
services:
  web:
    build: .
circle.yml relevant part:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
Your docker-compose and circle configurations are already looking pretty good.
Your docker-compose.yml is already set up to pull the image from Docker Hub, where it is uploaded after tests pass. We will use this image on the remote server: instead of building it from scratch every time (which takes a long time), we'll use the already-prepared one.
You did well to separate build: . into a docker-compose.override.yml file, as priority issues could arise if it lived in a docker-compose.prod.yml file.
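For reference, the docker-compose.prod.yml used in the deploy step below can stay tiny; hypothetical contents, holding only production-specific overrides such as a restart policy:
version: '2'
services:
  web:
    restart: always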
Let's get started with the deployment:
There are various ways of getting your deployment done. The most popular ones are probably SSH and Webhooks.
We'll use SSH.
Edit your circle.yml config to take an additional step, which loads our .scripts/deploy.sh bash file:
deployment:
  latest:
    branch: master
    commands:
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push name/repo:$CIRCLE_SHA1
      - docker push name/repo:latest
      - .scripts/deploy.sh
deploy.sh will contain a few instructions to connect into our remote server through SSH and update both the repository and Docker images and reload Docker Compose services.
Prior to executing it, you should have a remote server that contains your project folder (i.e. git clone https://github.com/zurfyx/my-project), with both Docker and Docker Compose installed.
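That one-time server preparation could look roughly like this (a sketch; the package names assume a Debian/Ubuntu host):
# on the remote server, interactively, one time:
git clone https://github.com/zurfyx/my-project
sudo apt-get install -y docker.io docker-compose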
deploy.sh
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
(
cd "$DIR/.." # Go to project dir.
ssh $SSH_USERNAME@$SSH_HOSTNAME -o StrictHostKeyChecking=no <<-EOF
    cd $SSH_PROJECT_FOLDER
    git pull
    docker-compose pull
    docker-compose stop
    docker-compose rm -f
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
EOF
)
Notice: last EOF is not indented. That's how bash HEREDOC works.
deploy.sh steps explained:
ssh $SSH_USERNAME@$SSH_HOSTNAME: connects to the remote host through SSH. -o StrictHostKeyChecking=no prevents SSH from asking whether we trust the server.
cd $SSH_PROJECT_FOLDER: browses to the project folder (the one you gathered through git clone ...)
git pull: updates the project folder. That's important to keep docker-compose / Dockerfile up to date, as well as any shared volume that depends on some source code file.
docker-compose stop: our remote dependencies have just been downloaded; stop the docker-compose services that are currently running.
docker-compose rm -f: removes the docker-compose services. This step is really important: otherwise we'll reuse old volumes.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d: executes your docker-compose.prod.yml, which extends docker-compose.yml, in detached mode.
On your CI you will need to fill in the following environment variables (that the deployment script uses):
$SSH_USERNAME: your SSH username (i.e. root)
$SSH_HOSTNAME: your SSH hostname (i.e. stackoverflow.com)
$SSH_PROJECT_FOLDER: the folder where the project is stored, either relative to where $SSH_USERNAME lands on login, or absolute (i.e. my-project/)
What about the SSH password? CircleCI in this case offers a way to store SSH keys, so password is no longer needed when logging in through SSH.
Otherwise, simply edit the deploy.sh SSH connection to something like this:
sshpass -p your_password ssh user@hostname
More about SSH password here.
In conclusion, all we had to do was create a script that connects to our remote server to let it know that the source code has been updated, and then performs the appropriate upgrade steps.
FYI, that's similar to how the alternative Webhooks method works.
WatchTower solves this for you.
https://github.com/v2tec/watchtower
Your CI just needs to build the images and push to the registry. Then WatchTower polls the registry every N seconds and automagically restarts your services using the latest and greatest images. It's as simple as adding this code to your compose yaml:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /root/.docker/config.json:/config.json
  command: --interval 30