In Travis CI is it possible to run the build process from inside a docker container?
In GitLab CI this is the default: we simply define the image in .gitlab-ci.yml and then all of the build/test/deploy steps run inside that container. Travis, however, seems to take a totally different view of Docker usage. How can I achieve similar behavior in Travis?
It turns out this is easier to do with Travis-CI than it first appears. All you have to do is write your normal build script using docker exec calls. Doing some of the trickier third-party service integrations may require dedicated shell scripts, as in the codecov.io example below.
Example:
sudo: required
language: cpp
services:
- docker
before_install:
- docker pull user/build:latest
- docker run -it -d --name build user/build bash
- docker exec build git clone https://github.com/user/product.git
script:
- docker exec build cmake -H/product -B/_build
- docker exec build cmake --build /_build
- docker exec build cmake --build /_build --target documentation
- docker exec build cmake --build /_build --target run-tests
after_success:
- docker exec build bash /project/codecov.sh
codecov.sh:
#!/usr/bin/env bash
cd /project && \
bash <(curl -s https://codecov.io/bash) \
-f /_build/app.coverage.txt \
-t uuid-project-token \
-X gcov \
-X coveragepy \
-X search \
-X xcode \
-R /project \
-F unittests \
-Z
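If you would rather reuse the checkout Travis has already made instead of cloning inside the container, a variant of the before_install section could look like this (a sketch, not part of the original setup; the image name and the /product mount point are carried over as assumptions):
before_install:
- docker pull user/build:latest
- docker run -d --name build -v "$TRAVIS_BUILD_DIR":/product user/build tail -f /dev/null
Here tail -f /dev/null only keeps the container alive so the later docker exec calls have something to attach to, $TRAVIS_BUILD_DIR is the directory Travis clones the repository into, and the git clone step is no longer needed.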
A real-life project using this technique can be found here: https://github.com/qbradq/tales-of-sosaria/tree/e28eb9877fd7071adae9ab03f40a82ea8317a7df
And I wrote an article about the whole process here: https://normanblancaster.wordpress.com/2017/01/31/leading-edge-c-build-environments-with-docker-and-travis-ci/
Related
We are trying to store the container name in a Makefile, but I see the error below when executing the build. Could someone please advise? Thanks.
.PHONY: metadata
metadata: .env1
docker pull IMAGE_NAME
docker run $IMAGE_NAME;
ID:= $(shell docker ps --format '{{.Names}}')
@echo ${ID}
docker cp ${ID}:/app/.env .env2
The container name is not shown in the "ID" variable below when executing the Makefile from Jenkins:
ID:=
/bin/sh: ID:=: command not found
There are a couple of things you can do in terms of pure Docker mechanics to simplify this.
You can specify an alternate command when you docker run an image: anything after the image name is taken as the command to run. For instance, you can cat the file as the main container command, and replace everything you have above with:
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --rm \
-e "ARTIFACTORY_USER=${ARTIFACTORY_CREDENTIALS_USR}" \
-e "ARTIFACTORY_PASSWORD=${ARTIFACTORY_CREDENTIALS_PSW}" \
--env-file .env1 \
"${ARTIFACTDATA_IMAGE_NAME}" \
cat /app/.env \
> $@
(It is usually better to avoid docker cp, docker exec, and other imperative-type commands; it is fairly inexpensive and better practice to run a new container when you need to.)
If you can't do this, you can docker run --name with a name of your choice, and then use that container name in the docker cp command.
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --name getmetadata ...
docker cp getmetadata:/app/.env $@
docker stop getmetadata
docker rm getmetadata
If you really can't avoid this at all, each line of the Makefile runs in a separate shell. On the one hand this means you need to join together lines if you want variables from one line to be visible in a later line; on the other, it means you have normal shell functionality available and don't need to use the GNU Make $(shell ...) extension (which evaluates when the Makefile is loaded and not when you're running the command).
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
# Note here:
# $$ escapes $ for the shell
# Multiple shell commands joined together with && \
# Beyond that, pure Bourne shell syntax
ID=$$(docker run -d ...) && \
echo "$$ID" && \
docker cp "$$ID:/app/.env" "$#"
I have created a docker container that runs a command line tool. The container is supposed to be interactive. Am I somehow able to specify in the Dockerfile that the container is always started in interactive mode?
For reference this is the dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install curl
RUN mkdir adr-tools && \
cd adr-tools && \
curl -L https://github.com/npryce/adr-tools/archive/2.2.0.tar.gz --output adr-tools.tar.gz && \
tar -xvzf adr-tools.tar.gz && \
cp */src/* /usr/bin && \
rm -rf adr-tools
CMD ["/bin/bash"]
EDIT:
I know of the -it options for the run command. I'm explicitly asking for a way to do this in the Dockerfile.
EDIT2:
This is not a duplicate of Interactive command in Dockerfile, since my question is about how arguments normally given to docker run could instead be specified in the Dockerfile, whereas the supposed duplicate is about interactive input during the build of the image by Docker itself.
Many of the docker run options can only be specified at the command line or via higher-level wrappers (shell scripts, Docker Compose, Kubernetes, &c.). Along with port mappings and network settings, the “interactive” and “tty” options can only be set at run time, and you can’t force these in the Dockerfile.
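For example, if you drive the container through Docker Compose instead of a raw docker run, the equivalents of -i and -t can be baked into the Compose file. A sketch; the service and image names are made up for illustration:
version: "3"
services:
  adr-tools:
    image: my-adr-tools-image   # hypothetical name for the image built from the Dockerfile above
    stdin_open: true            # the Compose equivalent of -i
    tty: true                   # the Compose equivalent of -t
Starting the service with docker-compose run adr-tools then drops you into the interactive bash defined by the image's CMD.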
You can use the docker run command.
docker build -t curly .
docker run -it curly curl https://stackoverflow.com
The convention is:
docker run -it IMAGE_NAME [COMMAND] [ARG...]
Where [COMMAND] is curl and [ARG...] are the curl arguments, which is https://stackoverflow.com in my example.
-i enables interactive process mode. You can't specify this in the Dockerfile.
-t allocates a pseudo-TTY for the container.
Are you looking for the -it option?
From the Docker documentation:
For interactive processes (like a shell), you must use -i -t together
in order to allocate a tty for the container process.
So, for example you can run it like:
docker run -it IMAGE_NAME [COMMAND] [ARG...]
Actually, in Ubuntu I am running the Apache server in the background.
But for your case, try the command below and you should be able to get inside the Docker container:
docker exec -i -t your_container_name bash
While building a Docker image through a Dockerfile, I have to clone a GitHub repo. I added my public SSH key to my GitHub account and I am able to clone the repo from my Docker host. I see that I can use the Docker host's SSH key by mapping the $SSH_AUTH_SOCK env variable at the time of docker run, like:
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
How can I do the same during a docker build?
For Docker 18.09 and newer
You can use new features of Docker to forward your existing SSH agent connection or a key to the builder. This enables, for example, cloning your private repositories during the build.
Steps:
First, set an environment variable to use the new BuildKit:
export DOCKER_BUILDKIT=1
Then create a Dockerfile using the new (experimental) syntax:
# syntax=docker/dockerfile:experimental
FROM alpine
# install ssh client and git
RUN apk add --no-cache openssh-client git
# download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# clone our private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
And build the image with:
docker build --ssh default .
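If you are not running an ssh-agent, BuildKit also accepts the path of a specific private key instead of the agent socket (as far as I know, the key must not have a passphrase; passphrase-protected keys still require an agent):
docker build --ssh default=$HOME/.ssh/id_rsa .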
Read more about it here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
Unfortunately, you cannot forward your ssh socket to the build container since build time volume mounts are currently not supported in Docker.
This has been a topic of discussion for quite a while now, see the following issues on GitHub for reference:
https://github.com/moby/moby/issues/6396
https://github.com/moby/moby/issues/14080
As you can see this feature has been requested multiple times for different use cases. So far the maintainers have been hesitant to address this issue because they feel that volume mounts during build would break portability:
the result of a build should be independent of the underlying host
As outlined in this discussion.
This may be solved using an alternative build script. For example, you may create a bash script and put it in /usr/local/bin/docker-compose or your favourite location on your PATH:
#!/bin/bash
trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
/usr/bin/docker-compose $@
Then in your Dockerfile you would use your existing ssh socket:
...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
&& apk add --no-cache socat openssh \
&& /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
&& bundle install \
...
Any other ssh commands will work as well.
Now you can call your custom docker-compose build. It will call the actual docker-compose binary with a shared ssh socket.
This one is also interesting:
https://github.com/docker/for-mac/issues/483#issuecomment-344901087
It looks like:
On the host
mkfifo myfifo
nc -lk 12345 <myfifo | nc -U $SSH_AUTH_SOCK >myfifo
In the dockerfile
RUN mkfifo myfifo
RUN while true; do \
nc 172.17.0.1 12345 <myfifo | nc -Ul /tmp/ssh-agent.sock >myfifo; \
done &
RUN export SSH_AUTH_SOCK=/tmp/ssh-agent.sock
RUN ssh ...
Travis CI .yml file:
sudo: true
language: cpp
compiler:
- g++
services:
- docker
before_install:
- docker run -it ubuntu bash
- apt-get install graphicsmagick
install:
- apt-get install qt5-default
- exit
script: "bash -c ./build.sh"
build.sh is just a simple make file.
Can someone explain the difference between running:
docker run -it ubuntu bash
docker run -it ubuntu /bin/bash
To answer your question:
docker run -it ubuntu bash
executes the first binary called bash in the container's $PATH
docker run -it ubuntu /bin/bash
executes the bash binary in the /bin/ directory specifically.
For the ubuntu container both forms are very likely functionally equivalent.
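You can check this yourself in the image:
docker run --rm ubuntu which bash
# prints /bin/bash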
To answer what I think could be your actual problem:
You're not using Docker as intended. Your script section, for example, does not execute in the container. You need to run all of the commands, probably as a script, with a docker run without the interactive flag.
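For example, a minimal sketch of a corrected .travis.yml, assuming build.sh sits in the repository root and reusing the package names from your install steps (whether qt5-default is still available depends on the Ubuntu release the ubuntu image resolves to):
sudo: required
language: cpp
services:
- docker
script:
- docker run --rm -v "$TRAVIS_BUILD_DIR":/src -w /src ubuntu bash -c "apt-get update && apt-get install -y g++ make graphicsmagick qt5-default && bash ./build.sh"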
I saw some blog posts where people talk about JMeter and Docker. I understand that Docker is helpful for setting up a container with all the dependencies. But they all run/create the containers on the same host, so all the containers share the host's resources. It is like running multiple instances of JMeter on the same host, which will not help generate more load.
When a host has 12 GB of RAM, I think one JMeter instance with a 10 GB heap can generate more load than 10 containers each running one JMeter instance.
What is the point of running docker here?
I made an automated solution that can be easily integrated with Jenkins.
The Dockerfile should extend java:8 and add the JMeter build. I will call this Docker image jmeter-base:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz \
&& tar -xvzf apache-jmeter-3.3.tgz \
&& rm apache-jmeter-3.3.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-3.3/
# Add Jmeter to the Path
ENV PATH $JMETER_HOME/bin:$PATH
If you want to use a master-slave solution, this is the jmeter master Dockerfile:
FROM jmeter-base
WORKDIR $JMETER_HOME
# Ports to be exposed from the container for JMeter Master
RUN mkdir scripts
EXPOSE 60000
And this is the jmeter slave Dockerfile:
FROM jmeter-base
# Ports to be exposed from the container for JMeter Slaves/Server
EXPOSE 1099 50000
# Application to run on starting the container
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
-Dserver.rmi.localport=50000 \
-Dserver_port=1099
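The script below assumes a docker-compose.yml that defines the master and slave services. It is not shown in the original answer, but it could look roughly like this (the build paths, the fixed container name, and the tty setting are assumptions):
version: '2'
services:
  master:
    build: ./jmeter-master
    container_name: master   # lets the script address the container as "master" in docker exec/cp
    tty: true                # keeps the master container running between exec calls
  slave:
    build: ./jmeter-slave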
Now, with both images built, you need a script that discovers all the slave IPs and runs the tests. This script does all the work:
#!/bin/bash
COUNT=${1-1}
docker build -t jmeter-base jmeter-base
docker-compose build && docker-compose up -d && docker-compose scale master=1 slave=$COUNT
SLAVE_IP=$(docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) | grep slave | awk -F' ' '{print $2}' | tr '\n' ',' | sed 's/.$//')
WDIR=`docker exec -it master /bin/pwd | tr -d '\r'`
mkdir -p results
for filename in scripts/*.jmx; do
NAME=$(basename $filename)
NAME="${NAME%.*}"
eval "docker cp $filename master:$WDIR/scripts/"
eval "docker exec -it master /bin/bash -c 'mkdir $NAME && cd $NAME && ../bin/jmeter -n -t ../$filename -R$SLAVE_IP'"
eval "docker cp master:$WDIR/$NAME results/"
done
docker-compose stop && docker-compose rm -f
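Usage would then be something like (the script name is assumed):
./run-jmeter-tests.sh 3
which brings up one master and three slaves, runs every .jmx file under scripts/, and collects the output under results/ before tearing the containers down.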
I came to understand from this post by a friend of mine that we should not run multiple Docker containers on the same host to generate more load.
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/
Instead, the point of Docker here is to quickly set up the JMeter environment.