Docker-compose run service if another service status is 0 (success)

I am pretty new to Docker and Docker compose.
I want to use docker compose to test my project and publish it if the tests pass. If the tests fail, it should not publish the app at all.
Here is my docker-compose.yml
version: '3'
services:
  mongodb:
    image: mongo
  test:
    build:
      context: .
      dockerfile: Dockerfile.tests
    links:
      - mongodb
  publish:
    build:
      context: .
      dockerfile: Dockerfile.publish
    ?? # I want to say here that the publish step depends on test.
After that, in my testAndPublish.sh file, I would like to say:
docker-compose up
if [ $? = 0 ]; then # If all the services succeed
  ....
else
  ....
fi
So if the test or publish step fails, I don't want to push the app.
How can I build step-like processes in docker-compose?
Thanks.

I think you're trying to do everything with docker-compose, which is the wrong way around.
When it comes to CI (e.g. Travis or CircleCI), I always structure my workflow as follows:
let's say you have a web node and a database node
in travis.yml or circle.yml, at the install step I always put things like docker-compose run web npm install and others
at the test step I would put docker-compose run web npm test or something similar like docker-compose run web my-test-script.sh; that way you'll know that the tests run in the declared docker environment, and if they fail, this step fails and the whole test stage in the CI fails, which is desired
at the deploy step I would run some deploy.sh script which builds the image from the Dockerfile (the one that web uses) and pushes it, for example, to Docker Hub.
This way your CI test routine still depends on the specific Docker environment, but the deploy push (which doesn't need Docker) is kept separate from the application, which makes it more convenient imho.
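As a rough sketch, the CI config for that workflow might look like this (a .travis.yml-style example; the service names, scripts, and deploy.sh are assumptions, not taken from the question):

services:
  - docker
install:
  - docker-compose run web npm install   # install inside the declared docker environment
script:
  - docker-compose run web npm test      # CI fails here if the tests fail
deploy:
  provider: script
  script: ./deploy.sh                    # hypothetical script that builds the web image and pushes it
  on:
    branch: master

The deploy stage only runs when the script stage succeeded, which gives you exactly the "publish only if tests pass" gating from the question.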

Related

docker-compose wait on other service before build

There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
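For runtime ordering (starting containers, as opposed to building images), depends_on can be combined with a healthcheck; a minimal sketch, assuming a Compose file version that supports the long depends_on form (2.1 here):

version: "2.1"
services:
  web:
    build: .
    depends_on:
      postgres:
        condition: service_healthy # only start web once postgres reports healthy
  postgres:
    image: postgres:10
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10

Note this gates container startup only; as discussed below, it does not gate image builds.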
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first, basically ignoring depends_on (or interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: I solved the underlying problem by moving all the required (database) setup into the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the whole system becomes fully testable after that script.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
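The wait script itself isn't shown; a minimal sketch of what it could look like, assuming the image has the postgres client tools and the database service is named postgres:

#!/bin/sh
# wait: block until postgres accepts connections
until pg_isready -h postgres -p 5432 -q; do
  echo "waiting for postgres..."
  sleep 1
done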
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, even if you could access the database in a Dockerfile, if you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. The up --build option will cause the image to be rebuilt, but the build sequence will skip all of the steps (they're already cached) and produce the same image as originally, and whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
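The usual workaround for the shared-base-image case is to build the base image outside of Compose first, e.g. (assuming the base image's Dockerfile is named Dockerfile.base):

# build the shared base image first, outside of Compose
docker build -t undertest-base:latest -f Dockerfile.base .
# now the Compose builds that say FROM undertest-base:latest will find it
docker-compose build
docker-compose up -d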

docker-compose build --parallel - run command before build specific image

my docker-compose.yml file is below.
I want to build those images in parallel; I am running the command docker-compose build --parallel.
BUT I want to run a command before the images of service2 & service3 are built, while service1 builds in parallel.
When that command finishes, service2 & service3 should join the parallel build process.
version: '3.4'
services:
  service1:
    image: "company/service1:${TAG}"
    build:
      context: ./folder/service1/
      dockerfile: Dockerfile
  service2:
    image: "company/service2:${TAG}"
    build:
      context: ./folder/service2/
      dockerfile: Dockerfile
  service3:
    image: "company/service3:${TAG}"
    build:
      context: ./folder/service3
      dockerfile: Dockerfile
Compose doesn't really have any sort of workflow handling like this, especially around building images. It's assumed that building an image only depends on the local source tree and nothing else. Compose also doesn't have any ability to run non-Docker commands or launch temporary containers as part of the up workflow.
The good news is that re-running a build is very quick if nothing has changed. So with the workflow you've described, you might separately build the first image, run the command, and then rebuild everything; rebuilding the first image will take almost no time and you won't get a new image.
#!/bin/sh
# Build the one image that needs special handling
docker-compose build service1
# Run the command
the_command
# Rebuild everything in parallel (service1 will be a no-op)
docker-compose build --parallel
If you can run the preparatory step in a Dockerfile RUN command that might be easier to manage. If that needs software that isn't ordinarily part of your image, you could use a multi-stage build to do it in effectively a temporary image.
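A sketch of that multi-stage variant, with made-up image names (the_command stands in for the preparatory command from the script above):

# Stage 1: a throwaway image that has the extra tooling and runs the prep step
FROM build-tools:latest AS prep
WORKDIR /out
RUN the_command > prepared-artifact

# Stage 2: the real image; only the prep output is copied in
FROM company/service2-base:latest
COPY --from=prep /out/prepared-artifact /app/prepared-artifact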

Docker Compose as a CI pipeline

So we use Gitlab CI. The issue was the pain of having to commit each time we want to test whether our build pipeline was configured correctly. Unfortunately, there is no way to easily test Gitlab CI locally when our containers/pipeline ain't workin right.
Our solution: use docker-compose.yml as a CI pipeline runner for local testing of containerized build steps, why not ya know . . . ? Basically Gitlab CI, and most others, have each section spawn a container to run a command, and they won't continue until the preceding steps complete, i.e. the first step must fully complete before the next step happens.
Here is a simple .gitlab-ci.yml file we use:
stages:
  - install
  - test
cache:
  untracked: true
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
install:
  image: node:10.15.3
  stage: install
  script: npm install
test:
  image: node:10.15.3
  stage: test
  script:
    - npm run test
  dependencies:
    - install
Here is the docker-compose.yml file we converted it to:
version: "3.7"
services:
install:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- install
volumes:
- .:/home/node:Z
test:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- run
- test
volumes:
- .:/home/node:Z
depends_on:
- install
OK, now for the real issue here. The depends_on part of the compose file doesn't wait for the install container to finish; it just waits for the npm command to be running. So once the npm command is loaded up and running, the test container starts and complains that there are no node_modules yet. "npm is running" does not mean the npm command has actually finished.
Does anyone know any tricks to better control what docker considers to be done? All the solutions I looked into were using some kind of wrapper script which watched some port on the internal docker network to wait for a service, like a db, to be fully turned on and ready.
When using k8s I can set up a readiness probe, which is super dope; that doesn't seem to be a feature of Docker Compose though. Am I wrong here? It would be nice to just write a command which docker uses to determine what done means.
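For what it's worth, newer Compose releases (the Compose specification implemented by the docker compose v2 CLI) added roughly this: the long form of depends_on can wait for another service to exit successfully. A sketch of the test service using it, assuming a recent enough Compose:

  test:
    image: node:10.15.3
    depends_on:
      install:
        condition: service_completed_successfully # wait for install to exit with status 0

At the time the question was asked, though, that option didn't exist.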
For now we must run each step manually and then run the next when the preceding step is complete like so:
docker-compose up install
wait ....
docker-compose up test
We really just want to say:
docker-compose up
and have all the steps complete in correct order by waiting for preceding steps.
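One way to approximate that today is a wrapper script built on docker-compose run, which (unlike up) blocks until the one-off container exits and propagates its exit status:

#!/bin/sh
set -e # abort as soon as any step fails
# run blocks until the one-off container exits and returns its exit status
docker-compose run --rm install
# --no-deps stops run from also starting install again in the background
docker-compose run --rm --no-deps test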
I went through the same issue; this is a permission-related thing when you are mapping a volume from your local machine into docker.
volumes:
  - .:/home/node:Z
Create a file inside the container, then check the permissions of that same file on your local machine; if you see root (or anything other than your current user) as the owner, you first have to run
export DOCKER_USER="$(id -u):$(id -g)"
and change
user: node
to
user: $DOCKER_USER
PS: I'm assuming you can run docker without having to use sudo, just mentioning this bc this is the scenario I have.
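Putting it together, after exporting DOCKER_USER the service definition would look something like:

  test:
    image: node:10.15.3
    working_dir: /home/node
    user: ${DOCKER_USER} # resolves to e.g. 1000:1000, matching your host user
    volumes:
      - .:/home/node:Z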
This question was asked many years ago. I now use this project: https://github.com/firecow/gitlab-ci-local
It runs your Gitlab Pipeline locally using docker just as you would expect it to run.

How to run protractor on docker?

I'm a newb with docker & protractor so please bear with me.
I have an app that uses python and django for its backend API, and angular.js for its frontend, with e2e tests in protractor. So this is how I think I should proceed:
I must set up a docker container for my backend, which is in Python-Django, then expose this API through some PORT.
Create another container (or a layer, not sure which) for the angular.js frontend.
Download an image for protractor and build the container.
Connect all of these containers through a docker network?
Alternative
Run backend on local machine.
Create docker container for protractor and somehow point the e2e test to the container?
Please help me review the steps to achieve this. This video gives some insight, but I'm not sure where to start.
Your initial idea is just about right. When setting this up, I typically use a docker-compose file like so...
#docker-compose.yml
version: '2'
services:
  backend:
    build: ./backend
    command: <your django startup command>
  db:
    image: <postgres or whatever>
  frontend:
    build: ./frontend
    command: <npm start or equivalent>
    ports:
      - "80:80"
Then, I would run my tests with
docker-compose run --rm frontend <MY TESTING COMMAND HERE>
Docker-compose handles the docker networking stuff for you; in that case your frontend would be able to access your backend at http://backend:<port>. Protractor and npm and all that fun stuff is installed in your frontend container.
The one major pain point that you haven't thought of yet is that protractor requires a display to work, which your docker containers will usually not provide, and it won't work with a headless browser like phantomjs. This repo is an example of how to install a real browser and give it a fake display so that it will work in a container: https://github.com/mark-adams/docker-chromium-xvfb. Basically, it replaces the chrome startup script with a shell script that starts an xvfb interface and attaches the browser to it.
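The gist of that repo's approach is a wrapper script along these lines (a sketch, not the repo's exact code):

#!/bin/sh
# start a virtual framebuffer so the browser has a display to attach to
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
# then run the browser / test command that was passed in
exec "$@"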

Pull image with docker-compose run

Here's a typical docker-compose file. I use it both for building the image (docker-compose build) and to run my tests (docker-compose run test).
version: '2'
services:
  test:
    links:
      - web
    command: "mvn clean verify"
  web:
    image: my_repo/my_image:tag
    build: .
When I use the run command, docker-compose tries to build the image before running the test.
Is there any way to force it to pull the existing image instead of trying to build a new one?
You can use "pull" command before run. There is pull all new images from registry
docker-compose pull
docker-compose run
Both of your solutions work fine.
I was just expecting to have something like
'docker run test --pull' or 'docker run test --build' to force the pull/build.
Thanks !
It's normal that it builds the web image before creating the test container, because there's a link between them (test depends on web). If you don't want to run the build each time you do docker-compose up, start by creating your web image:
docker build -t web .
then update your docker-compose file to use the new image:
version: '2'
services:
  test:
    links:
      - web
    command: "mvn clean verify"
  web:
    image: web
