Docker Compose: Detect whether image needs to be re-built

I'm trying to create a Docker setup (using docker-compose) to test one of my Python applications during development. The docker-compose.yml starts a Postgres server, a Redis server and a PhantomJS server, and then runs the tests using pytest.
This is what my test.sh looks like:
#!/bin/bash
UP=$(docker-compose up -d redis postgres phantomjs 2>&1)
echo "$UP"
if [[ $UP == *"Starting radar_postgres"* ]]; then
    echo "Sleeping 10 seconds to wait for PostgreSQL server..."
    sleep 10
fi
docker-compose build tests && \
docker-compose run \
    --rm \
    -e GOOGLE_OAUTH2_CLIENT_ID="$GOOGLE_OAUTH2_CLIENT_ID" \
    -e GOOGLE_OAUTH2_CLIENT_SECRET="$GOOGLE_OAUTH2_CLIENT_SECRET" \
    -e GOOGLE_DEVELOPER_TOKEN="$GOOGLE_DEVELOPER_TOKEN" \
    tests "$@"
First the dependencies are started. Due to the way docker-compose up works, they're automatically rebuilt when necessary.
Then I run a one-off job in my tests container. I use a one-off job instead of using docker-compose up because this way I can pass in arguments to the test framework.
The problem is that the image is always rebuilt, even if the Dockerfile didn't change and no rebuild is necessary. (Each step hits the cache, of course, but the build still takes 4-5 seconds.) In contrast, if I leave out the docker-compose build line, the image is not rebuilt even when I change the Dockerfile.
Is there a way to rebuild an image only when necessary?

Note that there is a discussion (issue 1455) to remove build from docker-compose.
Compose's primary job is orchestration and not building, and the Docker image is the natural place to draw that line.
So it would be best to use docker build commands (which should only build images when necessary) instead of docker-compose build (which might build a bit too aggressively).
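A minimal sketch of that approach for the setup above (the radar_tests image name and the ./tests build context are assumptions, not from the question, and the compose service would need to declare image: radar_tests for the run to pick this image up):
# build the image only if it doesn't exist yet; otherwise reuse it as-is
if ! docker image inspect radar_tests >/dev/null 2>&1; then
    docker build -t radar_tests ./tests
fi
# the run itself stays in docker-compose so the linked services are available
docker-compose run --rm tests "$@"
When the Dockerfile does change, delete or re-tag the image (or fall back to a plain docker build, which reuses the cache) so the check picks up the new version.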

Related

docker container exits out immediately with a script attached

I'm trying to add a script to a docker run command. The command I'm using is:
docker run -dit --name 1.4 ubuntu sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'
The plan (per the course I'm studying) is to then install curl, enter a website as input, and have the container reply, but running this exact command makes the container exit immediately.
Any guidance on why that would be?
Also, how should I send the input to the container so it can use it afterwards? Do I just attach to it after installing curl in the terminal?
I'm going to recommend an extremely different workflow from what you suggest. Rather than manually installing software and trying to type arguments into the stdin of a shell script, you can build this into a reusable Docker image and provide its options as environment variables.
In comments you describe a workflow where you first start a container, then get a debugging shell inside of it, and then install curl. Unless you're really truly debugging, this is a pretty unusual workflow: anything you install this way will get lost as soon as the container exits, and you'll have to repeat this step every time you re-run the container. Instead, create a new empty directory, and inside that create a file named Dockerfile (exactly that name, no extension, capital D) containing
# Start our new image from this base
FROM ubuntu

# Install any OS-level packages we need.
# DEBIAN_FRONTEND=noninteractive avoids post-installation questions;
# --no-install-recommends skips unneeded extra packages;
# --assume-yes (-y) skips an "are you sure" prompt.
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install \
      --no-install-recommends \
      --assume-yes \
      curl
Rather than try to read from the container's input, you can take the URL as an environment variable. In most cases the best way to give the main command to a container is by specifying it in the Dockerfile. You can imagine running a larger script or program here as well, and it would use the same environment-variable setting (using Python's os.environ, Node's process.env, Ruby's ENV, etc.).
In our case, let's make the main container command be the single curl command that you're trying to run. We haven't specified the value of the environment variable yet, and that's okay: this shell command isn't evaluated until the container actually runs.
# at the end of the Dockerfile
CMD curl "$website"
Now let's build and run it. When we do launch the container, we need to provide that $website environment variable value, which we can do with a docker run -e option.
# Build the image, giving it a name (-t my/curl),
# using the content in the current directory (.):
docker build -t my/curl .

# Run it: --rm deletes the container when done,
# -e provides the environment variable,
# and my/curl is the same image name as above:
docker run --rm -e website=https://stackoverflow.com my/curl
So note that we're starting the container in the foreground (no -d option) since we want to see its output and we expect it to exit promptly; we're cleaning up the container when it's done; we're not trying to pass a full shell script as a command-line argument; and we are providing our options on the command line, so we don't need to make the container's stdin work (no -i or -t option).
A Docker container is a wrapper around a single process. When that process exits, the container exits too. In this example, the thing you want the container to do is run a curl command; that's not a long-running process, hence docker run --rm but not -d. There's not an "afterwards" here, if you need to query a different Web site then launch a new container. It's very normal to destroy and recreate containers, especially since there are many options that can only be specified when you first start a container.
With the image and container we've built here, in fact, it's useful to think about them as analogous to the /usr/bin/curl binary on your host. You build it once into a reusable artifact (here the Docker image), and you run multiple instances of it (curl commands or new Docker containers) giving options on the command line at startup time. You do not typically "get a shell" inside a curl command-line invocation, and I'd similarly avoid docker exec outside of debugging tasks.
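To make the analogy concrete, repeated runs might look like this (my/curl is the image built above; the URLs are only examples):
# each invocation is a fresh, short-lived container, like running curl twice
docker run --rm -e website=https://stackoverflow.com my/curl
docker run --rm -e website=https://example.com my/curl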
You can also use the alpine/curl image to run curl without needing to install anything.
First start the container in detached mode with the -d flag.
Then run your script with the exec subcommand.
docker run -d --name 1.4 alpine/curl sleep 600
docker exec -it 1.4 sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'
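If you keep this exec-based approach, one way to avoid typing into the container's stdin (a sketch; the container name 1.4 comes from the commands above) is to pass the site to docker exec as an environment variable:
# -e sets an environment variable for just this one exec'd command
docker exec -e website=stackoverflow.com 1.4 sh -c 'curl "http://$website"'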

How to configure docker run as a build step in TeamCity?

I'm a beginner with Docker as well as TeamCity. I set up a pipeline that builds a Docker image and I want it to run the container after a successful build. I tried using a Docker build step, but the advice I found was to use a command-line step with an executable parameter, and something involving the Docker socket. I've searched the Internet and YouTube and haven't found clear examples of starting a container after a build; the examples with agents that I did find I didn't understand. Please give an example of running a container as a step in a pipeline on Linux.
I solved a similar requirement on Jenkins with the following approach.
Add a shell file (e.g. run.sh) to your project. In it, put the docker run command that you would use from the command line, adding > /dev/null 2>&1 & at the end so that the process runs in the background and its output streams go to /dev/null.
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag > /dev/null 2>&1 &
Then in your Jenkins (TeamCity) script add a sh step to run this file:
steps {
    dir (whatever-dir-run.sh-is-in) {
        sh "JENKINS_NODE_COOKIE=dontKillMe sh run.sh"
    }
}
Note: If JENKINS_NODE_COOKIE has an equivalent in TeamCity, use that.
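For reference, the helper file itself can stay very small; a sketch of run.sh, reusing the example image, name and password from above:
#!/bin/sh
# run.sh -- start the container in the background and discard its output
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag > /dev/null 2>&1 &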

Can I use docker-compose without a bash script to inspect a container then copy files to host from the container?

This is the workflow I'm trying to parallelize:
docker build
docker run
docker inspect container -> returns false
docker cp container:file /host/
I want to use docker-compose to do this on a single host, then transition to Kubernetes later so that I can orchestrate this on multiple hosts.
Should I create a bash script and have it RUN in the Dockerfile?
I'm looking for a solution that the community accepts as the best practice.
In single-host Docker land, you can try to arrange things so that docker run does everything you need it to. Avoid docker inspect (which dumps out low-level diagnostics that usually aren't interesting) and docker cp. Depending on how you build things, you could build the artifact in your Dockerfile and copy it out:
docker build -t the-image .
# use "cat" to get it out?
docker run --rm the-image \
cat /app/file \ # from the container
> file # via the container's stdout, to the host
# use volumes to get it out?
docker run --rm -v $PWD:/host the-image \
cp /app/file /host
Depending on what you're building, you might extend this further to pass both the inputs and outputs in volumes, so the image is just a toolchain. For a minimal Go application, using the Docker Hub golang image, for example:
# don't docker build anything, but run a container with the toolchain
docker run --rm \
-v $PWD:/app \
-w /app \
golang:1.15 \
go build -o the_app ./cmd/the_app
In this last setup the -w working directory is the bind-mounted /app directory, so go build -o the_app writes the binary out to the host.
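After that docker run finishes, the compiled binary is sitting in the current directory on the host; for example (on a Linux host the file will typically be owned by root unless you also pass -u "$(id -u):$(id -g)" to docker run):
# the binary is now a normal file on the host
ls -l ./the_app
./the_app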
Since this setup is a little more oriented towards single-use containers (note the docker run --rm option) it's not a good match for Compose, which generally expects long-running server-type containers.
This setup also will not translate well to Kubernetes. Kubernetes isn't really good at sharing files between containers or with systems outside the cluster. If you happen to have an NFS server you can use that, but there aren't native options; most of the volume types that it's straightforward to get are ReadWriteOnce volumes that can't be reused between multiple Kubernetes Pods (containers).
You could in principle write a Kubernetes Job that did a single compilation. It can't run docker build, so the "run" step would have to do the actual building. You can't kubectl cp out of a completed pod (see e.g. kubernetes/kubectl#454), so it needs to send its content somewhere specific when it's done.
A better high-level approach here would be to find or install some sort of network-accessible storage, especially to hold the results (an Artifactory server; object storage like AWS S3). Rewrite your build sequence as a "task" that takes the locations of the inputs and outputs and runs the build, ignoring the local filesystem. Set up a job queue like RabbitMQ, and inject the tasks into the queue. Finally, run the builder worker as a Kubernetes Deployment; it will build as many things in parallel as the replicas: count in the Deployment specifies.

Compiling cpp files in docker container failing when run directly but OK if using interactive container

I've created a docker image with all the modules required for our build environment. If I start a container in interactive mode, I can build fine.
docker run -v <host:container> -w my_working_dir -it my_image
$make -j16
But if I try to do this from a command line I get compile failures (well into the process)
docker run -v <host:container> -w my_working_dir my_image bash -c "make -j16"
If I run the container detached and use docker exec, I also get compile failures (at the same point)
docker run -v <host:container> -t --detach --name star_trek my_image
docker exec star_trek bash -c "cd my_working_dir; make -j16"
Entering an interactive session with the detached container also seems to pass, though I think I have seen this fail as well.
docker exec -it star_trek bash
$make -j16
This will be part of an automated build system so I need to be able run this without user intervention.
I'm not sure why these behave differently. I've tried multiple combinations, and the only way I've been able to get a successful build is the interactive method above. Other than the interactive session having more of a logged-in user configuration, what is the difference between running interactively and passing the command on the command line?
My preferred method would be to run the container detached so I can send several sequential commands, as we have a complex build and test process, but if I have to spin the container up each time I'm OK with that at this point, because I really need to get this running like last week.
*Commands are pseudo-code and simplified to aid visibility. I'm using bash -c because I need to run a script for our tests, i.e. something like bash -c "my_script.sh; run_test".
UPDATE - We need custom paths for our build tools, and I believe this is what isn't working except in the interactive session. Our /etc/bashrc file builds the correct path and exports it. With docker run I've tried running a script that does a source /etc/bashrc, among other initialization things we need, before doing the make, but this doesn't seem to work. Note that I have to pipe in the password, as this needs to be run using sudo. The other commands seem to work fine.
bash -c 'echo su_password | sudo -S /tmp/startup.sh; make -j16'
I've also tried to just set it in one command, without success:
bash -c 'export <path>; make -j16'
What is the best way to set the path in the container so installed applications can be found? I don't want to hard code them in the dockerfile but will at this point if I must.
I have this working now. As our path is very long, I had set it to a variable and was passing that in on the command line; it turns out this was hiding a typo, a missing : separator:
export PATH=$PATH/...
vs
export PATH=$PATH:/...
Now I just specify the whole path each time and everything works:
bash -c 'export PATH=$PATH:/<dir>/<program>/bin:/<dir>/<program>/bin:...; make -j16'
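For completeness, another way to keep non-interactive bash -c builds working, without editing the Dockerfile, is to hand the PATH to the container via docker run -e, since non-interactive shells don't read /etc/bashrc. This is only a sketch; /opt/toolchain/bin stands in for wherever the build tools actually live:
# pass the toolchain PATH straight into the non-interactive shell
docker run -v <host:container> -w my_working_dir \
    -e PATH="/opt/toolchain/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" \
    my_image bash -c "make -j16"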

Bitbucket pipelines/Docker : Connection refused

I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a Makefile which looks as follows to run some form of integration tests:
test-e2e:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
godog
docker-compose -f ${DOCKER_COMPOSE_FILE} down
Docker compose is a single webserver with ports exposed.
Pipeline looks as follows:
- step: &integration-testing
    name: Run integration tests
    script:
      # do this to make go module work with private repo
      - apk add libc-dev py-pip python-dev libffi-dev openssl-dev gcc libc-dev make bash
      - pip install docker-compose
      - git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
      - go get github.com/onsi/ginkgo/ginkgo
      - go get github.com/onsi/gomega/...
      - go get github.com/DATA-DOG/godog/cmd/godog
      - make build-only && make test-e2e
I am facing two separate issues, and I have not been able to find a solution to either.
I keep getting connection refused when the tests are run.
To elaborate on the above: docker-compose brings up a server with a proper host:port mapping ("127.0.0.1:10077:10077"). The godog command is intended to run the tests by querying the server, but this always ends in connection refused. This link has a possible solution, so I am exploring that.
The pipeline almost always runs the commands before the container is up. I've tried fixing this by changing the invocation to:
test-e2e:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && sleep 10 && docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
However the container is always brought up after the sleep (almost instantaneously).
Example:
Creating oracle-go ...
Sleep 10
docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
docker exec -i oracle-go godog
Creating oracle-go ... done
Error response from daemon: Container 7bab5322203756b972e7f0a3c6e5827413279914a68c705221b8af7daadc1149 is not running
Please let me know if there is a way around it.
If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
test-e2e:
docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && bash wait-for-it.sh <HOST>:<PORT> -- docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
Change <HOST> and <PORT> to your service's host name and port respectively. Alternatively, you could use wait-for-it.sh in your Docker Compose command or the like.
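If you'd rather not vendor wait-for-it.sh into the repository, a small polling loop does the same job; this is only a sketch (HOST, PORT and the 30-second budget are placeholders) that could run between the docker-compose up and the godog steps:
# poll until the port accepts TCP connections, then run the tests
for i in $(seq 1 30); do
    nc -z "$HOST" "$PORT" && break
    sleep 1
done
docker exec -i oracle-go godog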
