docker wait in gitlab ci doesn't fail job

I'm using Docker-in-Docker in my GitLab CI pipeline to set up my test environment and run tests in it.
To wait until all tests have finished, I use docker wait.
tests:
  image: docker:19-git
  stage: tests
  script:
    - docker-compose -f docker-compose/my_test_env.yml up -d # setting up env
    - docker-compose -f docker-compose/tests.yml up -d # running tests
    - docker wait docker-compose-services_tests_1
I need to fail the job if the tests have problems, but docker wait docker-compose-services_tests_1 only prints the container's exit code; the command itself returns 0, which is treated as success, so the job is considered passed. docker wait doesn't have an option to not print the exit code.
So I need some sh (not bash) script to run docker wait and exit with a non-zero exit code if the container returns a non-zero exit code (to fail the job).
What is the correct way to do this?

You have two options here:
Remove the -d from the second command (docker-compose -f docker-compose/tests.yml up); doing this, you do not need the third step.
If you really want to run the compose stack detached, add a script that captures the output of docker-compose logs (or of docker wait) and, according to the result, exits 0 or 1.
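In fact, docker wait already gives you what you need for the second option: it prints the container's exit code, which an sh script can capture and re-use as its own exit status. A minimal POSIX sh sketch (container name taken from the question):
# Capture the exit code that docker wait prints on stdout
EXIT_CODE=$(docker wait docker-compose-services_tests_1)
echo "tests container exited with code $EXIT_CODE"
# Propagate it, so GitLab CI fails the job when it is non-zero
exit "$EXIT_CODE"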

Related

Why does AWS Copilot Task Run always return exit code 0 without --follow option?

I have a Docker image set up to run a simple script, which I am running via copilot task run.
FROM node:12-alpine
RUN apk update
RUN apk add curl
RUN apk add jq
RUN apk add --no-cache aws-cli
COPY deploy-permissions.sh /usr/local/bin/deploy-permissions.sh
RUN chmod +x /usr/local/bin/deploy-permissions.sh
ENTRYPOINT ["/usr/local/bin/deploy-permissions.sh"]
When I run it via copilot task run with the --follow flag, it shows me all the log output and returns the exit code correctly.
So if I run a scenario when I know it will fail, I get
copilot task run --image %URLTOImage% --follow
echo $? (reports 1 correctly)
However, if I don't pass in --follow the command seems to complete much quicker and the exit status code is 0 regardless of whether the docker container's entrypoint script succeeds or not.
copilot task run --image %URLToImage%
echo $? (always reports 0)
The documentation says that --follow should just stream the logs; it says nothing about the command not waiting for completion.
Am I missing something here? Why would this happen? It's causing me problems because our CI/CD pipeline doesn't like the --follow option. If I could run the task without it, it'd save me some grief; however, I need the command to wait for task completion and correctly report the exit code. The pipeline currently always reports success, which is a non-starter. If I do use --follow, the CodeBuild project says the task never reaches a ready state.
Thanks!
The exit code being returned when you use the --follow flag is that of the task. I believe that without the --follow flag, the exit code is that of the overarching process.
See the request here: https://github.com/aws/copilot-cli/issues/2525 and the implementation here: https://github.com/aws/copilot-cli/pull/3620. There are some interesting ideas in the issue discussion that may help you use the command in your CI/CD pipeline.
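In practice that means a CI step has to keep --follow so the command blocks until the task finishes and surfaces the task's exit code. A minimal sh sketch (the image URL variable is a placeholder):
# --follow blocks until the task completes and returns the task's exit code
copilot task run --image "$IMAGE_URL" --follow
status=$?
echo "copilot task exited with $status"
exit "$status"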

Docker build: returned a non-zero code: 1 when a test fails

When I run docker build for my project (Docker + Selenium + Pytest) in Jenkins CI and the tests finish with SUCCESS status, the build is pushed and the results are published to reports; if at least one test fails, the build fails and the results are not published.
Build error: The command 'pytest test_page.py -s -v --alluredir=reports/allure-results' returned a non-zero code: 1
Maybe my instructions for Docker are incorrectly configured.
My Dockerfile:
FROM python:latest as python3
FROM selenium/standalone-chrome
USER root
WORKDIR /my-project
ADD . /my-project
RUN pip3 install --no-cache-dir --user -r requirements.txt
RUN sudo pip3 install pytest
RUN ["pytest", "test_page.py", "-s", "-v", "--alluredir=reports/allure-results"]
and my shell commands:
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
docker run -d --name $CONTAINER_NAME $IMAGE_NAME
echo "Copy allure-results into Jenkins container"
rm -rf reports; mkdir reports;
docker cp $CONTAINER_NAME:my-project/reports/allure-results reports
It may be that your tests are failing on an assertion, and that failed assertion is what produces the non-zero exit code.
This link outlines the expected pytest exit codes for each scenario:
Exit code 0: all tests were collected and passed successfully
Exit code 1: tests were collected and run, but some of the tests failed
Exit code 2: test execution was interrupted by the user
Exit code 3: an internal error happened while executing tests
Exit code 4: pytest command line usage error
Exit code 5: no tests were collected
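If you run pytest outside the image build, you can act on those codes in a shell step; a minimal sketch that tolerates failing tests (exit code 1) but still aborts on harder errors (pytest flags copied from the question):
pytest test_page.py -s -v --alluredir=reports/allure-results
rc=$?
# 0 = all passed, 1 = some tests failed; anything above 1 is an error
if [ "$rc" -gt 1 ]; then
  exit "$rc"
fi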
The problem is that when test cases fail, docker build exits with a non-zero code.
One way around this, to generate the report even when test cases fail:
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo "Copy allure-results into Jenkins container"
rm -rf reports
docker create -it --name $CONTAINER_NAME $IMAGE_NAME /bin/bash
docker cp $CONTAINER_NAME:my-project/reports/allure-results ./reports
docker rm -f $CONTAINER_NAME
You can put the report-copying part of the Jenkins pipeline in a post stage under an always block, so that whether the build passes or fails, you always get the reports; see the sketch below.
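A minimal declarative-pipeline sketch of that idea (the stage, image, and container names are assumptions, not taken from the question):
pipeline {
    agent any
    environment {
        IMAGE_NAME = 'my-image'         // assumption: your image tag
        CONTAINER_NAME = 'my-container' // assumption
    }
    stages {
        stage('Build and test') {
            steps {
                sh 'docker build -t "$IMAGE_NAME" .'
            }
        }
    }
    post {
        always {
            // runs whether the build passed or failed
            sh '''
                rm -rf reports && mkdir reports
                docker create --name "$CONTAINER_NAME" "$IMAGE_NAME"
                docker cp "$CONTAINER_NAME":my-project/reports/allure-results reports
                docker rm -f "$CONTAINER_NAME"
            '''
        }
    }
}
Note that if docker build fails before the image is tagged, there is nothing to copy, which is why this is usually combined with the exit 0 trick described next.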
I found a solution to this issue:
I added exit 0 at the end of the RUN command, so the image still builds when tests fail. This requires the shell form of RUN, since the exec form (RUN ["pytest", ...]) cannot chain a second command.
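A sketch of that change against the Dockerfile from the question, using the shell form of RUN:
# shell form, so a second command can override the exit status;
# the allure results are still written and can be copied out later
RUN pytest test_page.py -s -v --alluredir=reports/allure-results; exit 0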

Is there a way to automatically backup your database when issuing docker down command

I tried searching the Docker documentation; however, I cannot find anything that directly relates to running a backup on the down command. Additionally, I see you can add your own command script in the YAML on up, so I was hoping that there might be something similar for down?
You need to make your own entrypoint script that creates an exit hook. You can see more details on the steps for building a custom image with a custom entrypoint in this SO answer.
In your case, the entrypoint will look like this:
#!/bin/bash
set -e

execute_on_finish() {
    echo "Execute on finish"  # put your backup commands here
}

# docker stop sends SIGTERM; turn it into a normal exit so the EXIT trap fires
trap 'exit 143' TERM
trap execute_on_finish EXIT

echo "CALLING ENTRYPOINT WITH CMD: $@"
# Run the original entrypoint in the background and wait on it,
# so this shell stays alive to receive the stop signal
/old_entrypoint.sh "$@" &
daemon_pid=$!
wait "$daemon_pid"
Note
Since the backup may be a long operation, and Docker kills the container if it hasn't shut down within 10 seconds, you will need to pass the -t option to docker stop so it doesn't kill the container too early. See more details here.
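For example, a sketch that gives the container up to two minutes to run the hook (the timeout value is an arbitrary assumption):
# allow 120s for the exit hook before Docker sends SIGKILL
docker stop -t 120 my_container
# the docker-compose equivalent
docker-compose down -t 120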

return from docker-compose up in jenkins

I have a base image with JBoss copied onto it. JBoss is started with a script and takes around two minutes.
In my Dockerfile I have created this command:
CMD start_deploy.sh && tail -F server.log
I do a tail to keep the container alive; otherwise docker-compose up exits when the script finishes and the container stops.
The problem is that when I do docker-compose up through Jenkins, the build never finishes because of the tail, and I cannot start the next build.
If I do docker-compose up -d, then the next build starts too early and runs tests against a container that hasn't fully started yet.
Is there a way to return from docker-compose up when the server has started completely?
Whenever you have chained (&&) or piped (|) commands, it is easier to either:
wrap them in a script, and use that script in your CMD directive:
CMD myscript
or wrap them in an sh -c command:
sh -c 'start_deploy.sh && tail -F server.log'
(But that last one depends on the ENTRYPOINT of the image; a default ENTRYPOINT should allow this CMD to work.)
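A minimal sketch of such a wrapper script (the file name is an assumption):
#!/bin/sh
# start_and_tail.sh: start JBoss, then follow the log to keep the container alive
start_deploy.sh && tail -F server.log
and in the Dockerfile:
COPY start_and_tail.sh /usr/local/bin/start_and_tail.sh
RUN chmod +x /usr/local/bin/start_and_tail.sh
CMD ["start_and_tail.sh"]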

Does 'docker start' execute the CMD command?

Let's say a docker container has been run with 'docker run' and then stopped with 'docker stop'. Will the 'CMD' command be executed after a 'docker start'?
I believe @jripoll is incorrect; it appears to run the command that was first run with docker run on docker start too.
Here's a simple example to test:
First create a shell script to run called tmp.sh:
echo "hello yo!"
Then run:
docker run --name yo -v "$(pwd)":/usr/src/myapp -w /usr/src/myapp ubuntu sh tmp.sh
That will print hello yo!.
Now start it again:
docker start -ia yo
It will print it again every time you run that.
The same thing happens with a Dockerfile.
Save this as Dockerfile:
FROM alpine
CMD ["echo", "hello yo!"]
Then build it and run it:
docker build -t hi .
docker run -i --name hi hi
You'll see "hello yo!" output. Start it again:
docker start -i hi
And you'll see the same output.
When you do a docker start, you call api/client/start.go, which calls:
cli.client.ContainerStart(containerID)
That calls engine-api/client/container_start.go:
cli.post("/containers/"+containerID+"/start", nil, nil, nil)
The docker daemon processes that API call in daemon/start.go:
container.StartMonitor(daemon, container.HostConfig.RestartPolicy)
The container monitor does run the container in container/monitor.go:
m.supervisor.Run(m.container, pipes, m.callback)
By default, the docker daemon is the supervisor here, in daemon/daemon.go:
daemon.execDriver.Run(c.Command, pipes, hooks)
And the execDriver creates the command line in daemon/execdriver/windows/exec.go:
createProcessParms.CommandLine, err = createCommandLine(processConfig, false)
That uses the processConfig.Entrypoint and processConfig.Arguments in daemon/execdriver/windows/commandlinebuilder.go:
// Build the command line of the process
commandLine = processConfig.Entrypoint
logrus.Debugf("Entrypoint: %s", processConfig.Entrypoint)
for _, arg := range processConfig.Arguments {
    logrus.Debugf("appending %s", arg)
    if !alreadyEscaped {
        arg = syscall.EscapeArg(arg)
    }
    commandLine += " " + arg
}
Those ProcessConfig.Arguments are populated in daemon/container_operations_windows.go:
processConfig := execdriver.ProcessConfig{
    CommonProcessConfig: execdriver.CommonProcessConfig{
        Entrypoint: c.Path,
        Arguments:  c.Args,
        Tty:        c.Config.Tty,
    },
}
with c.Args being the arguments of the Container (runtime parameters or CMD).
So yes, the 'CMD' commands are executed after a 'docker start'.
If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD.
Note: don’t confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
https://docs.docker.com/engine/reference/builder/
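A minimal illustration of combining the two (the image contents are placeholders, not from the docs):
FROM alpine
# ENTRYPOINT fixes the executable; CMD supplies default arguments
# that can be overridden at docker run time
ENTRYPOINT ["echo"]
CMD ["hello yo!"]
docker run <image> prints "hello yo!", docker run <image> bye prints "bye", and docker start re-runs whichever combination the container was created with.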
No, the CMD command is only executed when you run 'docker run' to create a container based on an image.
In the documentation:
When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
https://docs.docker.com/reference/builder/#cmd
