I have a docker-compose file which builds two containers, a Node app and an nginx server. Now I would like to automate the build and run process on the server with the help of GitLab runners. I am pretty new to CI-related stuff, so please excuse my approach:
I would want to create multiple repositories on gitlab.com and have a Dockerfile for each one of them. Do I now have to associate a gitlab-runner instance with each of these projects in order to build the image, push it to a Docker repo and let the server pull it from there? And then I would have to somehow push the docker-compose file to the server and compose everything from there.
So my questions are:
Am I able to run multiple (2 or 3) gitlab-runner instances for all of my repos on one server?
Do I need a specific or shared runner and what exactly is the difference?
Why do all tutorials use self-hosted GitLab instances instead of just gitlab.com repos (is it not possible to use gitlab-runner with gitlab.com repos)?
Is it possible to use docker-compose in a gitlab-runner pipeline and just build everything at once?
First of all, you can obviously use GitLab CI/CD features on https://gitlab.com as well as on self-hosted GitLab instances. It doesn't change anything except the host on which you register your runner:
https://gitlab.com/ in case you use GitLab without hosting it
https://your-custom-domain/ in case you host your own instance of GitLab
You can add as many runners as you want (at least, I run 5-6 runners per project without any problem). You just need to register each of those runners for your project. See Registering Runners for that.
As for shared runners versus specific runners, I think you should stick to shared runners if you just wish to try out GitLab CI/CD.
Shared Runners on GitLab.com run in autoscale mode and are powered by DigitalOcean. Autoscaling means reduced wait times to spin up builds, and isolated VMs for each project, thus maximizing security.
They're free to use for public open source projects and limited to 2000 CI minutes per month per group for private projects. Read about all GitLab.com plans.
You can install your own runners on literally any machine though, for example your laptop. You can deploy one with Docker for a quick start.
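For example, a quick start with Docker could look roughly like this (a sketch based on the official gitlab/gitlab-runner image; the /srv/gitlab-runner/config path is just a common choice):

# start the runner container, keeping its configuration on the host
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# register it interactively against gitlab.com (or your own instance)
docker run --rm -it \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner register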
Finally, yes, you can use docker-compose in a .gitlab-ci.yml file if you use the ssh executor and have docker-compose installed on your server.
But I recommend using the docker executor with the docker:dind (Docker in Docker) image.
What is Docker in Docker?
Although running Docker inside Docker is generally not recommended, there are some legitimate use cases, such as development of Docker itself.
Here is an example usage, without docker-compose though:
image: docker:latest

services:
  - name: docker:dind
    command: ["--experimental"]

before_script:
  - apk add --no-cache py-pip        # <-- install pip (Python package manager)
  - pip install docker-compose       # <-- install docker-compose
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin   # <-- log in to your registry

build-master:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest
  only:
    - master

build-dev:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG"
  except:
    - master
As you can see, I build the Docker image, tag it, then push it to my Docker registry, but you could push to any registry. And of course you could use docker-compose at any point in a script declaration.
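For instance, a job that builds and starts everything with docker-compose could look roughly like this (a sketch; it assumes a docker-compose.yml at the repository root and the before_script above having installed docker-compose):

compose:
  stage: build
  script:
    - docker-compose build          # build all services defined in docker-compose.yml
    - docker-compose up -d          # start the stack, e.g. to run integration tests against it
    - docker-compose down           # the containers only live for the duration of the job anyway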
My Git repository looks like this:
/my_repo
|---- .gitignore
|---- .gitlab-ci.yml
|---- Dockerfile
|---- README.md
And the config.toml of my runner looks like:
[[runners]]
  name = "4Gb digital ocean vps"
  url = "https://gitlab.com"
  token = "efnrong44d77a5d40f74fc2ba84d8"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:dind"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
You can take a look at https://docs.gitlab.com/runner/configuration/advanced-configuration.html for more information about Runner configuration.
Note: all the variables used here are secret CI/CD variables. See https://docs.gitlab.com/ee/ci/variables/ for details.
I hope this answers your questions.
We have a prototype-oriented development environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI/CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all the documentation we find uses a cloud service or a Kubernetes cluster as the target environment. However, we want to configure our GitLab runner in a way that deploys docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from the build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on the system of the runner, without a privileged user
TL;DR: How can we configure our local GitLab Runner to locally deploy Docker containers from images defined in GitLab CI/CD, without the use of privileges?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid having to use a privileged user you can use the kaniko executor image in GitLab.
Specifically you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage you simply need to reference the created image.
You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method, you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage. Usually, you would just use the image you created in the container registry to deploy the container locally. The last method explained would only deploy the image in the GitLab runner.
I looked at other questions but can't find a solution! I am setting up CI in GitLab and using GitLab's shared runners. In the build stage I use a docker image as the base image, but when I run a docker command it says:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I looked at this topic but still don't understand what I should do.
.gitlab-ci.yml:
stages:
  - test
  - build
  - deploy

job_1:
  image: python:3.6
  stage: test
  script:
    - sh ./sh_script/install.sh
    - python manage.py test -k

job_2:
  image: docker:stable
  stage: build
  before_script:
    - docker info
  script:
    - docker build -t my-docker-image .
I know that the gitlab-runner must be registered to use docker and share /var/run/docker.sock! But how do I do this when using GitLab's own shared runners?
Ahh, that's my lovely topic: using docker for GitLab CI. The problem you are experiencing is better known as docker-in-docker.
Before configuring it, you may want to read this brilliant post: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
That will give you some understanding of what the problem is and which solution fits you best. Generally there are two major approaches: actually installing a docker daemon inside docker, and sharing the host's daemon with the containers. Which approach to choose depends on your needs.
In GitLab you can go several ways; I will just share our experience.
Way 1 - using docker:dind as a service.
It is pretty simple to set up. Just add docker:dind as a shared service to your .gitlab-ci.yml file and use the docker:latest image for your jobs.
image: docker:latest   # this sets the default image for jobs

services:
  - docker:dind
Pros:
- simple to set up
- simple to run: your sources are available to your job by default in the cwd, because they are pulled directly into your docker runner
Cons: you have to configure a docker registry for that service, otherwise your Dockerfiles will be built from scratch each time your pipeline starts. For me that was unacceptable, because it can take more than an hour depending on the number of containers you have.
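One common mitigation (a sketch, assuming your image is pushed to a registry the job has already logged in to, e.g. in before_script) is to pull the previously built image and reuse its layers with --cache-from:

build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true                                   # tolerate a missing image on the very first run
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest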
Way 2 - sharing /var/run/docker.sock of host docker daemon
We set up our own docker executor with a docker daemon and shared the socket by adding it to the /etc/gitlab-runner/config.toml file. Thus we made our machine's docker daemon available to the docker CLI inside the containers. Note: you DON'T need privileged mode for the executor in this case.
After that we can use both docker and docker-compose in our custom docker images. Moreover, we don't need a special docker registry, because in this case the executor host's local image cache is shared among all containers.
Cons:
- You need to somehow pass the sources to your containers in this case, because they are mounted only into the docker executor, not into the containers launched from it. We settled on cloning them with a command like git clone $CI_REPOSITORY_URL --branch $CI_COMMIT_REF_NAME --single-branch /project
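For reference, the relevant part of /etc/gitlab-runner/config.toml for this socket-sharing approach might look like this (a minimal sketch; the runner name and token are placeholders):

[[runners]]
  name = "docker executor with shared host socket"
  url = "https://gitlab.com"
  token = "YOUR_RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]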
I have created a stack which contains one container (service) on Rancher.
This container has been created from an image which is hosted in a GitLab CI project registry.
I want to force Rancher to download a new version of this image and upgrade the container.
I want to do this from a .gitlab-ci.yml script.
Here is an extract of my .gitlab-ci.yml:
(Please note I have set RANCHER_ACCESS_KEY, RANCHER_SECRET_KEY and RANCHER_URL as secret variables in the GitLab web interface.)
deploiement:
  stage: deploiement
  tags: [dockerrunnertag]
  image: tagip/rancher-cli
  script:
    - rancher --debug up -d --stack "mystack"
    - rancher --debug up -d --force-upgrade --pull --stack "mystack" --confirm-upgrade app
My problem is that GitLab automatically copies my source code into this tagip/rancher-cli container.
This container is temporary. I just want to run it in order to fire an action on the Rancher server.
How can I disable this source code fetching?
Thanks
Yes it is possible.
Simply add the GIT_STRATEGY variable to your deployment job.
variables:
  GIT_STRATEGY: none
Reference: https://gitlab.com/gitlab-org/gitlab-ce/issues/21337
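Applied to the job from the question, this might look like the following (a sketch):

deploiement:
  stage: deploiement
  tags: [dockerrunnertag]
  image: tagip/rancher-cli
  variables:
    GIT_STRATEGY: none               # do not fetch the repository for this job
  script:
    - rancher --debug up -d --stack "mystack"
    - rancher --debug up -d --force-upgrade --pull --stack "mystack" --confirm-upgrade app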
I can set up a gitlab-runner with a docker image as below:
stages:
  - build
  - test
  - deploy

image: laravel/laravel:v1

build:
  stage: build
  script:
    - npm install
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan storage:link

test:
  stage: test
  script: echo "Running tests"

deploy_staging:
  stage: deploy
  script:
    - echo "What shall I do?"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master
It passes the build and test stages, and I believe a docker image/container is ready for deployment. From Google, I gather I may use "docker push" for the next step, such as pushing to AWS ECS or to somewhere in GitLab. What I actually want to understand is: can I push it directly to another remote server (e.g. by scp)?
A docker image is a combination of different layers which are built when you use the docker build command. Docker reuses existing layers and gives the combination of layers a name, which is your image name. They are usually stored somewhere under /var/lib/docker.
In general, all the necessary data is stored on your system, yes. But it is not advisable to directly copy these layers to a different machine, and I am not quite sure it would work properly. Docker advises you to use a "docker registry". Installing your own registry on your remote server is very simple, because the registry can itself be run as a container (see the docs).
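For example, the official registry image can be started on the remote server with something like this (a sketch; 5000 is the registry's default port):

docker run -d -p 5000:5000 --restart=always --name registry registry:2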
I'd advise you to stick to the solutions proposed by the Docker team and use the public Docker Hub registry, or your own registry if you have sensitive data.
You are using GitLab, and GitLab provides its own registry. You can push your images to your own GitLab registry and pull them from your remote server. Your remote server only needs to authenticate against your registry and you're done. GitLab CI can directly build and push your images to your own registry on each push to the master branch, for example. You can find many examples in the docs.
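The overall flow could look roughly like this (a sketch; the deploy-token username and token are hypothetical placeholders, and $CI_JOB_TOKEN is GitLab's per-job registry token):

# in the CI job: build and push to the GitLab Container Registry
docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
docker build -t $CI_REGISTRY_IMAGE:latest .
docker push $CI_REGISTRY_IMAGE:latest

# on the remote server: authenticate (e.g. with a deploy token) and pull
docker login registry.gitlab.com -u <deploy-token-username> -p <deploy-token>
docker pull registry.gitlab.com/<group>/<project>:latest
docker run -d registry.gitlab.com/<group>/<project>:latest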
I've been trying to set up GitLab CI so that it can build a docker image, and found that DinD was initially enabled only for specific runners, while a blog post suggested it would soon be enabled for shared runners.
Running DinD requires enabling privileged mode on runners, which is set as a flag while registering the runner, but I couldn't find an equivalent mechanism for shared runners.
The shared runners are now capable of building Docker images. Here is the job that you can use:
stages:
  - build
  - test
  - deploy

# ...
# other jobs here
# ...

docker:image:
  stage: deploy
  image: docker:1.11
  services:
    - docker:dind
  script:
    - docker version
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    # push only for tags
    - "[[ -z $CI_BUILD_TAG ]] && exit 0"
    - docker tag $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
This job assumes that you are using the Container Registry provided by GitLab. It pushes the images only when the build commit is tagged with a version number.
Documentation for Predefined variables.
Note that you will need to cache, or generate as temporary artifacts, any dependencies for your service which are not committed in the repository. This is supposed to be done in other jobs. For example, node_modules is not generally contained in the repository and must be cached from the build/test stage.
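For node_modules that could be a cache entry like this (a sketch using GitLab's cache keyword, keyed per branch):

cache:
  key: "$CI_COMMIT_REF_SLUG"        # one cache per branch
  paths:
    - node_modules/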