Docker port binding using gitlab-ci with gitlab-runner - docker

I've noticed a problem, when configuring my gitlab-ci and gitlab-runner.
I want to have a few separate application environments on one server, running on different external ports, but using the same docker image.
What I want to achieve
deploy-dev running Apache at port 80 in the container, but exposed on external port 81
deploy-rc running Apache at port 80 in the container, but exposed on external port 82
I've seen that docker run has the --publish argument, which allows port binding like 81:80, but unfortunately I can't find any option in gitlab-ci.yml or gitlab-runner's config.toml to set that argument.
Is there any way to achieve port binding in Docker containers run by gitlab-runner?
My gitlab-ci.yml:
before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null

deploy:
  image: webdevops/php-apache:centos-7-php56
  stage: deploy
  only:
    - dockertest
  script:
    - composer self-update
    - export SYMFONY_ENV=dev
    - composer install
    - app/console doc:sch:up --force
    - app/console doc:fix:load -e=dev -n
    - app/console ass:install
    - app/console ass:dump -e=dev
  tags:
    - php

You're confusing two concepts: continuous integration tasks, and docker deployment.
What you have configured is a continuous integration task. The idea is that these perform build steps and complete. Gitlab-ci will record the results of each build step and present them back to you. These can be docker jobs themselves, though they don't have to be.
What you want to do is deploy to docker. That is to say, you want to start a docker container that contains your program. Going through this fully is probably beyond the scope of a Stack Overflow answer, but I'll do my best to outline what you need to do.
First, take the script you already have and turn it into a Dockerfile. Your Dockerfile will need to add all the code in your repo and then perform the composer / console steps you list. Use docker build to turn this Dockerfile into a docker image.
Next (optionally) you can upload the docker image to a registry.
The final step is to perform a docker run command that loads up your image and runs it; that is where the --publish port mapping from your question comes in.
This sounds complicated, but it's really not. I have a CI pipeline that does this: one step runs docker build ... followed by docker push ..., and the next step runs docker run ... to spawn the new container.
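As a rough sketch of such a deploy job (the registry URL, image name and job name are placeholders, and it assumes a runner that can reach the docker daemon on the target server, e.g. a shell runner registered there):
deploy-dev:
  stage: deploy
  only:
    - dockertest
  script:
    - docker build -t registry.example.com/myapp:dev .
    - docker push registry.example.com/myapp:dev
    - docker rm -f myapp-dev || true
    # container port 80 published on external port 81; a deploy-rc job would use 82:80
    - docker run -d --name myapp-dev --publish 81:80 registry.example.com/myapp:dev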

Related

How to setup Docker in Docker (DinD) on CloudBuild?

I am trying to run a script (unit tests) that uses docker behind the scenes on a CI system. The script works as expected on drone CI, but after switching to Cloud Build it is not clear how to set up DinD.
For drone CI I basically use DinD as shown here; my question is, how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the Cloud Build syntax for DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the builds subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then, on one of the Cloud Build VMs, it fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step. By doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example. This tutorial uses the demonstration repository as part of its instructions.
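For instance, a minimal cloudbuild.yaml sketch along those lines (the test image, test command and image name are placeholders, not taken from the question) could be:
steps:
  # run the unit tests in an arbitrary image as a custom build step
  - id: run-tests
    name: python:3.9
    entrypoint: python
    args: ['-m', 'pytest', 'tests/']
  # then build the image as usual
  - id: build-image
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-image'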
I would also recommend having a look at these informative links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that we need to launch a service in the background with docker-compose. Your docker-compose.yml file should look something like this:
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case, I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way the dind container will be on the same network as every container spawned during your steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
Now, if that works, you can run your script as normal with dind in the background. What is important is to pass the DOCKER_HOST env variable so that the docker client can locate the docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note: any container spawned by your script will live inside dind-service, so if you need to make requests to it, address them to http://dind-service instead of http://localhost. Moreover, if you use private images you will need some form of authentication before running your script. For that, run gcloud auth configure-docker --quiet before your script, and make sure your docker image has gcloud installed. This creates the authentication credentials required to pull your images. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it; you might run into problems if you use tox, for example.
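As a sketch of that last point (it assumes my-image has both gcloud and docker installed, and reuses the placeholder names from the step above), the authentication can be folded into the step itself:
- id: my-script
  name: my-image
  entrypoint: sh
  args:
    - -c
    - |
      # create docker credentials for private images, then run the actual script
      gcloud auth configure-docker --quiet
      myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'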

Cannot connect to the Docker daemon at unix:///var/run/docker.sock in gitlab CI

I have looked at other questions but can't find a solution for my case! I am setting up CI in GitLab and using GitLab's shared runners. In the build stage I use a docker image as the base image, but when I use a docker command it says:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I looked at this topic but still don't understand what I should do.
.gitlab-ci.yml :
stages:
  - test
  - build
  - deploy

job_1:
  image: python:3.6
  stage: test
  script:
    - sh ./sh_script/install.sh
    - python manage.py test -k

job_2:
  image: docker:stable
  stage: build
  before_script:
    - docker info
  script:
    - docker build -t my-docker-image .
I know that the gitlab runner must be registered to use docker and share /var/run/docker.sock! But how do I do this when using GitLab's own shared runners?
Ah, that's my favourite topic - using docker for gitlab ci. The problem you are experiencing is better known as docker-in-docker.
Before configuring it, you may want to read this brilliant post: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
That will give you a bit of understanding of what the problem is and which solution fits you best. Generally there are two major approaches: actually installing a docker daemon inside docker, and sharing the host's daemon with containers. Which approach to choose depends on your needs.
In gitlab you can go in several ways, I will just share our experience.
Way 1 - using docker:dind as a service.
It is pretty simple to set up. Just add docker:dind as a shared service in your gitlab-ci.yml file and use the docker:latest image for your jobs.
image: docker:latest # this sets the default image for jobs
services:
  - docker:dind
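A fuller sketch of a job built on this setup (the job name and image tag are assumptions; depending on your runner configuration you may or may not need to point the docker client at the dind service explicitly):
image: docker:latest
services:
  - docker:dind

variables:
  # may be required so the docker cli finds the dind daemon; adjust to your runner
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

build_image:
  stage: build
  script:
    - docker build -t my-image .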
Pros:
simple to set up
simple to run - your sources are available to your job in the current working directory by default, because they are pulled directly into your docker runner
Cons: you have to configure a docker registry for that service, otherwise your Dockerfiles will be built from scratch each time your pipeline starts. For me that is unacceptable, because it can take more than an hour depending on the number of containers you have.
Way 2 - sharing /var/run/docker.sock of the host docker daemon
We set up our own docker executor with a docker daemon and shared the socket by adding it to the /etc/gitlab-runner/config.toml file. This makes the machine's docker daemon available to the docker cli inside containers. Note - you DON'T need privileged mode for the executor in this case.
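For reference, the relevant part of such a config.toml might look roughly like this (the image name and other settings are illustrative, not the only valid values):
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = false
    # share the host docker daemon socket with job containers
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]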
After that we can use both docker and docker-compose in our custom docker images. Moreover, we don't need a special docker registry, because in this case all containers share the executor's local images.
Cons: you need to somehow pass the sources to your containers in this case, because they are mounted only in the docker executor, not in the containers launched from it. We settled on cloning them with a command like git clone $CI_REPOSITORY_URL --branch $CI_COMMIT_REF_NAME --single-branch /project

Gitlab CI docker-in-docker deployment not running commands inside of the container

I am trying to set up a new build pipeline for one of our projects. In a first step I am building a new docker image for successive testing. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of inside the container.
Here's the contents of my gitlab-ci.yml:
stages:
  - build
  - analytics

variables:
  TEST_IMAGE_NAME: 'registry.server.de/testimage'

build_testing_container:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  except:
    - production
  allow_failure: true
What do I need to change to make gitlab runner execute the script commands inside the container it's successfully pulling?
UPDATE:
It's getting even more interesting:
I just changed the script to sleep for a while so I can attach to the container. When I run a pwd from the ci script, it says /builds/namespace/project.
However, when I run pwd on the server with docker exec in the exact same container, it returns /app as it is supposed to.
UPDATE2:
After some more research, I learned that gitlab executes four sub-steps for each build step:
Prepare : Create and start the services.
Pre-build : Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
Build : User build. This is run on the user-provided docker image.
Post-build : Create cache, upload artifacts to GitLab. This is run on a special Docker Image.
It seems like in my case, step 3 isn't executed properly and the command is still running inside the gitlab runner docker image.
UPDATE3:
In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.
This is the usual behavior: the image keyword is the name of the Docker image the Docker executor will run to perform the CI tasks.
You can use the services keyword, which defines another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
Access can be done through a script or an entrypoint, for example:
In the Dockerfile of the image you are going to build, add a script that you want to execute, like this:
ADD exemple.sh /
RUN chmod +x exemple.sh
Then you can add the image as a service in gitlab-ci, and the script would change to:
docker exec <container_name> /exemple.sh
This will run a script inside the container. Alternatively, specify an entrypoint for the docker image, and then the script would be:
docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
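As a sketch of that suggestion (the alias is hypothetical and the names reuse the snippets above), declaring the built image as a service could look like:
mess_detection:
  stage: analytics
  image: docker:stable
  services:
    - name: $TEST_IMAGE_NAME
      alias: testing                # hypothetical alias for the linked container
      entrypoint: ["/exemple.sh"]   # run the script as the service's entrypoint
  script:
    - echo "the testing service runs /exemple.sh alongside this job"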
Here's a reference:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html

gitlab-runner - deploy docker image to a server

I can set up a gitlab-runner with a docker image as below:
stages:
  - build
  - test
  - deploy

image: laravel/laravel:v1

build:
  stage: build
  script:
    - npm install
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan storage:link

test:
  stage: test
  script: echo "Running tests"

deploy_staging:
  stage: deploy
  script:
    - echo "What shall I do?"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master
It passes the build and test stages, and I believe a docker image/container is then ready for deployment. From Google, I see that the next step may use "docker push", for example to push to AWS ECS or to somewhere in GitLab. What I actually want to understand is whether I can push it directly to another remote server (e.g. by scp)?
A docker image is a combination of different layers that are built when you use the docker build command. Docker reuses existing layers and gives the combination of layers a name, which is your image name. They are usually stored somewhere under /var/lib/docker.
In general all the necessary data is stored on your system, yes. But it is not advisable to copy these layers directly to a different machine, and I am not sure this would work properly. Docker advises you to use a "docker registry" instead. Installing your own registry on your remote server is very simple, because the registry can itself be run as a container (see the docs).
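For illustration, the stock registry image can be started with a single command (port 5000 and the container name are just the defaults used in the registry docs):
docker run -d -p 5000:5000 --name registry registry:2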
I'd advise you to stick to the solutions proposed by the docker team and use the public Docker Hub registry, or your own registry if you have sensitive data.
You are using GitLab, and GitLab provides its own registry. You can push your images to your own GitLab registry and pull them from your remote server. Your remote server only needs to authenticate against your registry and you're done. GitLab CI can build and push your images to your own registry directly on each push to the master branch, for example. You can find many examples in the docs.
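A minimal sketch of such a build-and-push job, using GitLab's predefined registry variables (the job name and tag are assumptions):
build_and_push:
  stage: deploy
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - master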

TravisCI/Docker: parameterized start of docker containers with matrix feature

I have software that should be tested against a series of WebDAV backends that are available as Docker containers. The lame approach is to start all containers within the before_install section, like:
before_install:
  - docker run image1
  - docker run image2
  - ...
This does not make much sense and wastes system resources, since I only need one particular docker container running as part of a test run.
My test configuration uses a matrix... is it possible to configure the docker image to be run using an environment variable as part of the matrix specs?
This boils down to two questions:
can I use environment variables inside steps of the before_install section?
is the 'matrix' evaluated before the before_install section, in order to make use of environment variables defined inside the matrix?
The answer to both of your questions is yes.
I have been able to build independent dockerfiles using the matrix configuration. A sample .travis.yml might look like:
sudo: required

services:
  - docker

env:
  - DOCKERFILE=dockerfile-1
  - DOCKERFILE=dockerfile-2

before_install:
  - docker build -f $DOCKERFILE .
In this case there would be two independent runs, each building a separate image. You could also use a docker pull command if your images are on Docker Hub.
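Applied to the original question (the image names and the variable name are placeholders), the same mechanism can pick which backend container to start per matrix entry:
sudo: required

services:
  - docker

env:
  - WEBDAV_IMAGE=image1
  - WEBDAV_IMAGE=image2

before_install:
  # each matrix entry starts only the backend it needs
  - docker run -d $WEBDAV_IMAGE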
