TravisCI/Docker: parameterized start of docker containers with matrix feature

I have software that should be tested against a series of WebDAV backends that are available as Docker containers. The lame approach is to start all containers within the before_install section like:
before_install:
- docker run image1
- docker run image2
- ...
This does not make much sense and wastes system resources, since I only need one particular docker container running as part of a test run.
My test configuration uses a matrix... is it possible to configure the docker image to be run using an environment variable as part of the matrix specs?
This boils down to two questions:
1. Can I use environment variables inside steps of the before_install section?
2. Is the 'matrix' evaluated before the before_install section, so that environment variables defined inside the matrix can be used there?

The answer to both of your questions is yes.
I have been able to build independent Dockerfiles using the matrix configuration. A sample .travis.yml might look like:
sudo: required
services:
- docker
env:
- DOCKERFILE=dockerfile-1
- DOCKERFILE=dockerfile-2
before_install:
- docker build -f $DOCKERFILE .
In this case there would be two independent runs, each building a separate image. You could also use a docker pull command if your images are on Docker Hub.
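For the original use case (starting one WebDAV backend container per build), a minimal sketch along the same lines might look like this; the image names image1/image2 and the container name webdav-backend are placeholders, not anything Travis or Docker provides:
sudo: required
services:
- docker
env:
- DOCKER_IMAGE=image1
- DOCKER_IMAGE=image2
before_install:
# pull and start only the backend needed for this matrix entry
- docker pull $DOCKER_IMAGE
- docker run -d --name webdav-backend $DOCKER_IMAGE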

Related

How to setup Docker in Docker (DinD) on CloudBuild?

I am trying to run a script (a unit test) that uses docker behind the scenes on a CI. The script works as expected on droneci, but switching to CloudBuild it is not clear how to set up DinD.
For droneci I basically use DinD as shown here. My question is: how do I translate that setup to Google CloudBuild? Is it even possible?
I searched the internet for CloudBuild syntax for DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the builds submit subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then, on one of the Cloud Build VMs, it fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline by defining custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step. By doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example. This tutorial uses the demonstration repository as part of its instructions.
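As a rough illustration of running tests in a custom step (the image name and the test command are placeholders, not taken from the demonstration repository), a cloudbuild.yaml could contain something like:
steps:
# build the image under test
- id: build
  name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
# run the unit tests inside the freshly built image
- id: unit-tests
  name: gcr.io/cloud-builders/docker
  args: ['run', '--rm', 'gcr.io/$PROJECT_ID/my-image', 'pytest', 'tests/']
images:
- gcr.io/$PROJECT_ID/my-image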
I would also recommend you to have a look at these informative links with similar use case:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in CloudBuild. To do that we need to launch a service in the background with docker-compose. Your docker-compose.yml file should look something like this.
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case, I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way the dind container will be on the same network as every container spawned during your build steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
Now if it works you can run your script as normal with dind in the background. What is important is to pass the DOCKER_HOST env variable so that the docker client can locate the docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note: any container spawned by your script runs inside dind-service, so if you need to make requests to it, address them to http://dind-service rather than http://localhost.
Moreover, if you use private images you will require some type of authentication before running your script. For that, you should run gcloud auth configure-docker --quiet before running your script. Make sure your docker image has gcloud installed. This creates the required authentication credentials to run your app. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it. You might have some problems if you use tox, for example.
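Purely as an illustration of where that auth call could sit (the step id, my-image, and myscript are the placeholders from the example above, and this assumes the image ships both gcloud and the docker client), the last step might become:
- id: my-script
  name: my-image
  entrypoint: bash
  args:
    - -c
    - |
      gcloud auth configure-docker --quiet
      myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'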

Kubernetes container args behave incorrectly

I want to use Kubernetes and the postman/newman Docker image to execute my API tests.
Locally, I can execute the image with
docker run postman/newman run <url-to-collection> --env-var baseUrl=<local-hostname>
I include the image in a Kubernetes manifest file:
spec:
  containers:
    - name: newman
      image: postman/newman:latest
      args:
        - run
        - '<url-to-collection>'
        - --env-var baseUrl=<kubernetes-hostname>
When I apply the manifest and look at the logs of the container, I get the following error:
error: unknown option '--global-var baseUrl=<kubernetes-hostname>'
I tried out many things with quotes and using the command section instead of the args section, but always with the same result.
I figure that Kubernetes somehow builds the command in a way that the newman executable cannot understand.
However I could not find any info about that.
(I also created an issue in the GitHub repo of Newman here)
Could anybody explain to me where this problem comes from and how I might solve this?
Thanks anyways!
Linux commands are made up of a sequence of words. If you type a command at a shell prompt, it takes responsibility for splitting it into words for you, but in the Kubernetes args: list, you need to split out the words yourself.
args:
  - run
  - '<url-to-collection>'
  # You want these as two separate arguments, so they need to be
  # two separate list items
  - --env-var
  - baseUrl=<kubernetes-hostname>
If the two arguments are in the same list item, they are a single "word":
# /bin/ls "/app/some directory/some file"
command:
  - /bin/ls
  # a single argument, including embedded spaces
  - /app/some directory/some file
The same basic rules apply for Docker Compose entrypoint: and command: and the JSON-syntax Dockerfile ENTRYPOINT and CMD directives, except that the Docker-native forms all also accept a plain string that they will split on spaces (using a shell in the Dockerfile case, but not in the Compose case).
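For example, in Compose the following two forms would be equivalent (the service name and the placeholders are only illustrative):
services:
  newman:
    image: postman/newman:latest
    # plain string: Compose splits it into words itself, no shell involved
    command: run <url-to-collection> --env-var baseUrl=<hostname>
    # list form: you split the words yourself
    # command: ["run", "<url-to-collection>", "--env-var", "baseUrl=<hostname>"]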
In the docker run command you provide, the shell on your host system processes it first, so the --env-var option and baseUrl=... argument get split into separate words before they're passed into the container.

Why use label in docker-compose.yml, can't environment do the job?

I am learning docker now. I am trying to figure out what kind of problem Docker label can solve.
I can understand why use label in Dockerfile, e.g add build-related metadata, but I still don't get why using it in docker-compose.yml? What is the difference between using labels vs environment? I assume there will be different use cases but I just can't figure it out.
Can someone give me some practical example?
Thanks
docker-compose.yml is used by the docker-compose utility to build and run the services you have defined in it.
While working with docker-compose we mainly use two commands:
docker-compose build builds the services defined in docker-compose.yml. To run a service there has to be an image known to the docker engine; if you run docker image ls you will find the images built by docker-compose, and if you inspect one of them you will find the labels that describe the metadata of that particular image.
docker-compose up runs the services that were built. The running container also needs some runtime metadata, such as environment variables, and that is what environment in docker-compose.yml sets.
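To make that concrete, a small sketch of the workflow (the image id is whatever docker image ls prints for your service):
docker-compose build                 # builds the images for the services
docker image ls                      # lists the images that were just built
docker image inspect <image-id> --format '{{json .Config.Labels}}'   # labels baked into the image
docker-compose up -d                 # starts the containers; environment entries become env vars at runtime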
P.S.: This is my first answer on Stack Overflow. If something isn't clear, just leave a comment and I will try my best to explain.
Another reason to use labels in docker-compose is to flag your containers as part of this docker-compose suite of containers, as opposed to other purposes each docker image might get used for.
Here's an example docker-compose.yml that shares labels across two services:
x-common-labels: &common-labels
  my.project.environment: "my project"
  my.project.maintainer: "me@example.com"
services:
  s1:
    image: somebodyelse/someimage
    labels:
      <<: *common-labels
    # ...
  s2:
    build:
      context: .
    image: my/s2
    labels:
      <<: *common-labels
    # ...
Then you can do things like this to just kill this project's containers.
docker rm -f $(docker container ls --format "{{.ID}}" --filter "label=my.project.environment")
re: labels vs. environment variables
Labels are only available to the docker and docker-compose commands on your host.
Environment variables are also available at run-time inside the docker container.
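For instance (using <container> as a placeholder for whatever name docker ps shows for one of these services):
# labels: readable from the host without entering the container
docker inspect --format '{{ index .Config.Labels "my.project.environment" }}' <container>
# environment variables: visible to the process inside the running container
docker exec <container> env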
LABEL can be used to embed as much metadata as possible about the Docker image, so as to make it easier to work with.
Some main purposes of adding LABEL to a Docker image are:
As documentation. You can provide the author, a description, a link to usage instructions, etc.
For versioning. You can ensure that some new features, even under the same latest tag, apply only to certain versions, so they don't break old existing features.
Any other metadata for programmatic access.
This page provides a guideline and the most common usages of Docker LABEL.
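A sketch of what such labels might look like in a Dockerfile (the keys follow the common OCI annotation naming; the values are made up):
FROM alpine:3.19
LABEL org.opencontainers.image.authors="me@example.com" \
      org.opencontainers.image.description="Example service" \
      org.opencontainers.image.version="1.2.3" \
      org.opencontainers.image.documentation="https://example.com/docs"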

Docker: Where to store version of multiple images for docker-compose?

In my CI environment there are several build versions and I need to get a specific version for docker-compose:
registry.example.com/foo/core 0.1.3
registry.example.com/foo/core 0.2.2
... # multiple packages in several versions like this
The images are built like this:
build:
  stage: build
  script:
    ...
    - docker build -t $CI_REGISTRY_IMAGE:$VERSION .
    - docker push $CI_REGISTRY_IMAGE:$VERSION
And they are pulled in the deploy pipeline like this:
production:
  stage: deploy
  script:
    - docker pull $CI_REGISTRY_IMAGE:$VERSION
I'm also using docker-compose to start all microservices from that images:
$ docker-compose up -d
But now that is my problem. Where can I store which version of each image should be used? Hardcoded it looks like this - but this will always start 0.2.2, although the deploy pipeline could pull a different version, like 0.1.3:
core:
  container_name: core
  image: 'registry.example.com/foo/core:0.2.2'
  restart: always
  links:
    - 'mongo_live'
  environment:
    - ROOT_URL=https://example.com
    - MONGO_URL=mongodb://mongo_live/example
But the version would be better set as a variable. So I would think that on deploy I have to store the current $VERSION value somewhere, and when running docker-compose that value should be read to get the correct version, since the latest version is not always the selected one.
If you pass it as a variable, you'd store it wherever you define your environment variables in your CI environment. You can also store the values in a .env file, which is described here.
Using the variable defined in the environment (or .env file) would have a line in the docker-compose.yml like:
image: registry.example.com/foo/core:${VERSION}
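A small sketch of how the deploy job could wire this up (the value is just an example; docker-compose reads .env from the directory it runs in):
echo "VERSION=$VERSION" > .env   # e.g. VERSION=0.1.3
docker-compose pull
docker-compose up -d             # ${VERSION} in docker-compose.yml resolves from .env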
Personally, I'd take a different approach and let the registry server maintain the version of the image with tags. If you have 3 versions for dev, stage, and prod, you can build your registry.example.com/foo/core:0.2.2 and then tag that same image as registry.example.com/foo/core:dev. The backend image checksum can be referenced by multiple tags and not take up additional disk space on the registry server or the docker hosts. Then in your dev environment, you'd just do a docker-compose pull && docker-compose up -d to grab the dev image and spin it up. The only downside of this approach is that the tag masks which version of the image is currently being used as dev, so you'll need to track that some other way.
Tagging an image in docker uses the docker tag command. You would run:
docker tag registry.example.com/foo/core:${VERSION} registry.example.com/foo/core:dev
docker push registry.example.com/foo/core:${VERSION}
docker push registry.example.com/foo/core:dev
If the registry already has a tag for registry.example.com/foo/core:dev it gets replaced with the new tag pointing to the new image id.

Docker port binding using gitlab-ci with gitlab-runner

I've noticed a problem when configuring my gitlab-ci and gitlab-runner.
I want to have a few separate application environments on one server, running on different external ports but using the same docker image.
What I want to achieve
deploy-dev running Apache at port 80 in the container, but at external port 81
deploy-rc running Apache at port 80 in the container, but at external port 82
I've seen that docker run has a --publish argument that allows port binding, like 80:81, but unfortunately I can't find any option in gitlab-ci.yml or gitlab-runner's config.toml to set that argument.
Is there any way to achieve port binding in Docker ran by gitlab-runner?
My gitlab-ci.yml:
before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null

deploy:
  image: webdevops/php-apache:centos-7-php56
  stage: deploy
  only:
    - dockertest
  script:
    - composer self-update
    - export SYMFONY_ENV=dev
    - composer install
    - app/console doc:sch:up --force
    - app/console doc:fix:load -e=dev -n
    - app/console ass:install
    - app/console ass:dump -e=dev
  tags:
    - php
You're confusing two concepts: continuous integration tasks and docker deployment.
What you have configured is a continuous integration task. The idea is that these perform build steps and complete. Gitlab-ci will record the results of each build step and present it back to you. These can be docker jobs themselves, though they don't have to be.
What you want to do is deploy to docker. That is to say you want to start a docker job that contains your program. Going through this is probably beyond the scope of a stack overflow answer, but I'll try my best to outline what you need to do.
First, take the script you already have and turn it into a Dockerfile. Your Dockerfile will need to add all the code in your repo, and then perform the composer / console steps you list. Use docker build to turn this Dockerfile into a docker image.
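A rough sketch of such a Dockerfile, based on the commands in your gitlab-ci.yml (the base image, paths, and which commands belong at build time are assumptions, not a drop-in solution):
FROM webdevops/php-apache:centos-7-php56
ENV SYMFONY_ENV=dev
WORKDIR /app
COPY . .
# build-time steps; assumes composer is available in the base image
RUN composer install \
 && app/console ass:install \
 && app/console ass:dump -e=dev
# database-dependent commands (doc:sch:up, doc:fix:load) are better run
# when the container starts and the database is reachable
EXPOSE 80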
Next (optionally) you can upload the docker image to a repository.
The final step is to perform a docker run command that loads up your image and runs it.
This sounds complicated, but it's really not. I have a ci pipeline that does this. One step runs docker build ... followed by docker push ..., and the next step runs docker run ... to spawn the new container.
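A hedged sketch of what such deploy jobs could look like, with the external-port difference handled by --publish (the image name, registry, and job names are placeholders):
deploy-dev:
  stage: deploy
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA
    - docker rm -f myapp-dev || true
    - docker run -d --name myapp-dev --publish 81:80 registry.example.com/myapp:$CI_COMMIT_SHA

deploy-rc:
  stage: deploy
  script:
    - docker rm -f myapp-rc || true
    - docker run -d --name myapp-rc --publish 82:80 registry.example.com/myapp:$CI_COMMIT_SHA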
