upgrade rancher service from .gitlab-ci.yml - docker

I have created a stack which contains one container (service) on rancher.
This container has been created from an image which is hosted on a gitlab-ci project registry.
I want to force Rancher to download a new version of this image and upgrade the container.
I want to do this from a .gitlab-ci.yml script.
Here is an extract of my .gitlab-ci.yml:
(Please note I have set the RANCHER_ACCESS_KEY, RANCHER_SECRET_KEY and RANCHER_URL secret variables in the GitLab web interface.)
deploiement:
  stage: deploiement
  tags: [dockerrunnertag]
  image: tagip/rancher-cli
  script:
    - rancher --debug up -d --stack "mystack"
    - rancher --debug up -d --force-upgrade --pull --stack "mystack" --confirm-upgrade app
My problem is that GitLab automatically copies my source code into this tagip/rancher-cli container.
This container is temporary; I just want to run it in order to trigger an action on the Rancher server.
How can I disable this source code fetching?
Thanks

Yes, it is possible.
Simply add the GIT_STRATEGY variable to your deployment job:
variables:
  GIT_STRATEGY: none
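For context, the full deployment job would then look roughly like this (a sketch reusing the job name, tags and commands shown in the question above):

deploiement:
  stage: deploiement
  tags: [dockerrunnertag]
  image: tagip/rancher-cli
  variables:
    GIT_STRATEGY: none
  script:
    - rancher --debug up -d --stack "mystack"
    - rancher --debug up -d --force-upgrade --pull --stack "mystack" --confirm-upgrade app

With GIT_STRATEGY set to none, the runner skips cloning or fetching the repository and just starts the container to run the rancher commands.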
Reference: https://gitlab.com/gitlab-org/gitlab-ce/issues/21337

Related

How to setup Docker in Docker (DinD) on CloudBuild?

I am trying to run a script (unit tests) that uses Docker behind the scenes on a CI system. The script works as expected on Drone CI, but after switching to Cloud Build it is not clear how to set up DinD.
For Drone CI I basically use DinD as shown here. My question is: how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the Cloud Build syntax for DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the gcloud builds submit command for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then, on one of the Cloud Build VMs, it fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since a build step can use any arbitrary Docker image and the source code is available, you can run unit tests as a build step. By doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example. This tutorial uses the demonstration repository as part of its instructions.
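As a rough illustration only (the test image, test command and image name below are placeholders, not taken from the demonstration repository), a cloudbuild.yaml that runs unit tests as a custom build step before building the image could look like this:

steps:
  # Run the unit tests inside a fixed container image
  - id: unit-tests
    name: python:3.9
    entrypoint: bash
    args: ['-c', 'pip install -r requirements.txt && python -m unittest discover']
  # Then build the application image with the standard docker builder
  - id: build-image
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-image'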
I would also recommend having a look at these informative links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that we need to launch a service in the background with docker-compose. Your docker-compose.yml file should look something like this.
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case, versions 18.03 and 18.09 worked without problems; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way the dind container will be on the same network as every container spawned during your build steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
If that works, you can run your script as normal with dind in the background. What is important is to pass the DOCKER_HOST environment variable so that the Docker client can locate the Docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note: any container spawned by your script lives inside dind-service, so any request to it should go to http://dind-service rather than http://localhost. Moreover, if you use private images you will need some form of authentication before running your script. For that, run gcloud auth configure-docker --quiet before your script, and make sure your Docker image has gcloud installed. This creates the authentication credentials required to run your app. The credentials are saved in a path relative to the $HOME variable, so make sure your app can access them. You might have problems if you use tox, for example.
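Putting that together, the step running your script might look roughly like the following (the image name and test command are placeholders; this is just a sketch of the idea):

- id: my-script
  name: gcr.io/my-project/my-test-image   # assumed to have gcloud and the docker client installed
  entrypoint: bash
  args:
    - -c
    - |
      gcloud auth configure-docker --quiet   # set up credentials for private registries
      ./run_tests.sh                          # placeholder for your actual script
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'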

Cannot connect to the Docker daemon at unix:///var/run/docker.sock in gitlab CI

I have looked at other questions but can't find a solution for my case. I am setting up CI in GitLab and using GitLab's shared runners. In the build stage I use the docker image as the base image, but when I run a docker command it says:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I looked at this topic but still don't understand what I should do.
.gitlab-ci.yml:
stages:
  - test
  - build
  - deploy

job_1:
  image: python:3.6
  stage: test
  script:
    - sh ./sh_script/install.sh
    - python manage.py test -k

job_2:
  image: docker:stable
  stage: build
  before_script:
    - docker info
  script:
    - docker build -t my-docker-image .
I know that the GitLab runner must be registered in a way that allows using Docker and sharing /var/run/docker.sock, but how do I do this when using GitLab's own shared runners?
Ahh, that's a topic I love - using Docker for GitLab CI. The problem you are experiencing is better known as docker-in-docker.
Before configuring it, you may want to read this brilliant post: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
That will give you some understanding of the problem and of which solution fits you best. Generally there are two major approaches: actually installing a Docker daemon inside Docker, or sharing the host's daemon with the containers. Which approach to choose depends on your needs.
In GitLab you can go several ways; I will just share our experience.
Way 1 - using docker:dind as a service.
It is pretty simple to set up. Just add docker:dind as a service in your gitlab-ci.yml file and use the docker:latest image for your jobs.
image: docker:latest # this sets default image for jobs
services:
  - docker:dind
Pros:
simple to set up.
simple to run - your source code is available to your job in the working directory by default, because it is pulled directly onto your Docker runner.
Cons: you have to configure a Docker registry for that service, otherwise your Dockerfiles are built from scratch each time your pipeline starts. For me that was unacceptable, because it can take more than an hour depending on the number of containers you have.
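To mitigate that con, you can log in to the GitLab registry, pull the previously pushed image and reuse its layers with --cache-from; a minimal sketch (image and job names are generic, using GitLab's predefined CI variables):

image: docker:latest
services:
  - docker:dind

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true                      # ignore failure on the very first run
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"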
Way 2 - sharing the host docker daemon's /var/run/docker.sock
We set up our own Docker executor with a Docker daemon and shared the socket by adding it to the /etc/gitlab-runner/config.toml file. Thus we made our machine's Docker daemon available to the Docker CLI inside containers. Note - you DON'T need privileged mode for the executor in this case.
After that we can use both docker and docker-compose in our custom Docker images. Moreover, we don't need a special Docker registry, because in this case the executor's local image store is shared among all containers.
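For reference, the relevant part of /etc/gitlab-runner/config.toml in this setup is essentially just the volumes entry mounting the socket; a sketch (name, URL and token are placeholders):

[[runners]]
  name = "socket-sharing runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = false                 # not needed when sharing the host socket
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]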
Cons:
You need to somehow get the sources into your containers in this case, because they are mounted only on the Docker executor, not in the containers launched from it. We settled on cloning them with a command like git clone $CI_REPOSITORY_URL --branch $CI_COMMIT_REF_NAME --single-branch /project

gitlab-ci and rancher residual images

I am working with a GitLab CI server (with the registry enabled).
Each time something is merged into the master branch, I build a Docker image which contains all my source code.
Next, this image is deployed on a Rancher server.
Here is an extract of my .gitlab-ci.yml file:
variables:
  CONTAINER_TEST_IMAGE: mypersonnalgitlabserver.com:5005/myname/myproject:$CI_BUILD_REF_NAME

...

build:
  ...
  script:
    - docker build --cache-from=$CONTAINER_TEST_IMAGE --file=Dockerfile --tag=$CONTAINER_TEST_IMAGE .
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" $CI_REGISTRY
    - docker push $CONTAINER_TEST_IMAGE
  ...

deploy:
  image: cdrx/rancher-gitlab-deploy
  variables:
    GIT_STRATEGY: none
  script:
    - upgrade --stack mystack --service myservice --no-start-before-stopping --no-wait-for-upgrade-to-finish
Everything works: when I change something in my source code and push it to the master branch, I can see the update on my production Rancher server.
But I can see a lot of images and containers with these commands:
docker images -a
docker ps -a
These images and containers keep growing each time I push something to master. I have the same problem on both the GitLab CI server and the Rancher server.
So my question is: how can I automatically delete those images and containers?
Thanks
If this is Rancher 1.x, look in the catalog and see if the item called "Janitor" will help. We use it to remove images left over from build agents.
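If Janitor is not an option, a generic Docker-level cleanup (not a Rancher or GitLab feature) scheduled on each host does something similar; on Docker 1.13+, for example:

# e.g. run daily from cron on the GitLab CI and Rancher hosts
docker image prune -af --filter "until=168h"       # remove unused images older than 7 days
docker container prune -f --filter "until=168h"    # remove stopped containers older than 7 days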

How to build, push and pull multiple docker containers with Gitlab CI?

I have a docker-compose file which builds two containers, a Node app and an nginx server. Now I would like to automate the build and run process on the server with the help of GitLab runners. I am pretty new to CI-related stuff, so please excuse my approach:
I want to create multiple repositories on gitlab.com and have a Dockerfile for each of them. Do I now have to associate a gitlab-runner instance with each of these projects in order to build the image, push it to a Docker registry and let the server pull it from there? And then I would have to somehow push the docker-compose file to the server and compose everything from there.
So my questions are:
Am I able to run multiple (2 or 3) gitlab-runners for all of my repos on one server?
Do I need a specific or a shared runner, and what exactly is the difference?
Why do all tutorials use self-hosted GitLab instances instead of just using gitlab.com repos (is it not possible to use gitlab-runner with gitlab.com repos)?
Is it possible to use docker-compose in a gitlab-runner pipeline and just build everything at once?
First of all, you can obviously use GitLab CI/CD features on https://gitlab.com as well as on self-hosted GitLab instances. It doesn't change anything except the host on which you register your runner:
https://gitlab.com/ in case you use GitLab without hosting it yourself
https://your-custom-domain/ in case you host your own instance of GitLab
You can add as many runners as you want (at least I have 5-6 runners per project without problems). You just need to register each of those runners for your project. See Registering Runners for that.
As for shared runners versus specific runners, I think you should stick to shared runners if you wish to try out GitLab CI/CD.
Shared Runners on GitLab.com run in autoscale mode and are powered by DigitalOcean. Autoscaling means reduced wait times to spin up builds, and isolated VMs for each project, thus maximizing security.
They're free to use for public open source projects and limited to 2000 CI minutes per month per group for private projects. Read about all GitLab.com plans.
You can install your own runners on literally any machine though, for example your laptop. You can deploy one with Docker for a quick start.
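For example, the Docker quick start boils down to running the runner container and registering it (paths follow the GitLab documentation; adjust them to your machine):

docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
docker exec -it gitlab-runner gitlab-runner register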
Finally, yes, you can use docker-compose in a gitlab-ci.yml file if you use the ssh executor and have docker-compose installed on your server.
But I recommend using the docker executor with the docker:dind (Docker in Docker) image.
What is Docker in Docker?
Although running Docker inside Docker is generally not recommended, there are some legitimate use cases, such as the development of Docker itself.
Here is an example usage, without docker-compose though:
image: docker:latest

services:
  - name: docker:dind
    command: ["--experimental"]

before_script:
  - apk add --no-cache py-pip # <-- add python package install pip
  - pip install docker-compose # <--- add docker-compose
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin # <---- login to your registry

build-master:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest
  only:
    - master

build-dev:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG"
  except:
    - master
As you can see, I build the Docker image, tag it, then push it to my Docker registry, but you could push to any registry. And of course you could use docker-compose at any point in a script declaration.
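For instance, a deploy job using docker-compose could look like this (a sketch; the stage, file and service layout are hypothetical):

deploy:
  stage: deploy
  script:
    - docker-compose -f docker-compose.yml pull
    - docker-compose -f docker-compose.yml up -d
  only:
    - master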
My Git repository looks like:
/my_repo
|---- .gitignore
|---- .gitlab-ci.yml
|---- Dockerfile
|---- README.md
And the config.toml of my runner looks like:
[[runners]]
  name = "4Gb digital ocean vps"
  url = "https://gitlab.com"
  token = "efnrong44d77a5d40f74fc2ba84d8"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:dind"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
You can take a look at https://docs.gitlab.com/runner/configuration/advanced-configuration.html for more information about Runner configuration.
Note: all the variables used here are secret variables. See https://docs.gitlab.com/ee/ci/variables/ for explanations.
I hope this answers your questions.

How do I deploy from GitLab CI to a Google Container Engine instance using Docker?

I am trying to set up automated deployment using a GitLab CI runner to deploy our 4-container app via docker-compose. I can pull the container images down using docker pull commands, but I'm stuck on how to connect to the Google Compute Engine instance in order to run the full docker-compose script.
Typically, from my local machine, I run something like:
eval $(docker-machine env <machine-instance>)
docker-compose up -d
But my .gitlab-ci.yml script doesn't have docker-machine available.
Do I have to install docker-machine via the script section in my .gitlab-ci.yml file?
How do I provision the instance without creating a new one every time? Normally, from my local host, I would run docker-machine create ... once then just use the eval command above to reconnect to the instance. But how would this work with CI?
Here's a sample of my .gitlab-ci.yml:
deploy staging:
  image: docker:latest
  services:
    - docker:dind
  environment: staging
  stage: deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN my-registry.githost.io
  script:
    - docker pull my-registry.githost.io/group/project1:develop
    - docker pull my-registry.githost.io/group/project2:develop
    - docker pull my-registry.githost.io/group/project3:develop
    - docker pull my-registry.githost.io/group/project4:develop
    - docker-machine ls
Not sure what you need docker-machine for in this case. You might want to get rid of it.
But to go back to your question, the docker image you're using comes with neither docker-machine nor docker-compose:
https://github.com/docker-library/docker/blob/36e2107fb879d5d5c3dbb5d8d93aeef0a2d45ac8/1.12/Dockerfile
So you will need to create a new image (or find an existing one) that comes with those two installed.
So in the .gitlab-ci.yml, instead of image: docker:latest, it will be something like image: mydocker.
You may also have to install docker-machine in the GitLab CI runner image to use it with GCE:
https://docs.docker.com/machine/install-machine/
https://docs.docker.com/machine/drivers/gce/
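As a sketch of how this could fit together in a job (the custom image, driver flags, project and machine names are assumptions; check the GCE driver documentation for the exact options and required credentials):

deploy staging:
  image: mydocker                      # hypothetical image with docker-machine and docker-compose installed
  stage: deploy
  script:
    - docker-machine create --driver google --google-project my-project --google-zone us-central1-a my-gce-host
    - eval $(docker-machine env my-gce-host)
    - docker-compose up -d

Keep in mind that docker-machine stores its certificates and machine definitions under ~/.docker/machine, so to reuse an existing instance instead of creating a new one on every pipeline you would need to persist that directory between jobs (for example via a cache or a mounted volume).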
