gitlab-runner - deploy docker image to a server

I can set up a GitLab runner with a Docker image as below:
stages:
  - build
  - test
  - deploy

image: laravel/laravel:v1

build:
  stage: build
  script:
    - npm install
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan storage:link

test:
  stage: test
  script: echo "Running tests"

deploy_staging:
  stage: deploy
  script:
    - echo "What shall I do?"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master
It passes the build and test stages, and I believe a Docker image/container is then ready for deployment. From what I found on Google, the next step seems to be a "docker push" to some registry, such as AWS ECS or somewhere on GitLab. What I actually want to understand is: can I instead push it directly to another remote server (e.g. by scp)?

A Docker image is a combination of different layers that are built when you run the docker build command. Docker reuses existing layers and gives the combination of layers a name, which is your image name. They are usually stored somewhere under /var/lib/docker.
In general, yes, all the necessary data is stored on your system. But copying these layers directly to a different machine is not advisable, and I am not sure it would work properly. Docker advises you to use a "Docker registry" instead. Installing your own registry on your remote server is very simple, because the registry itself can be run as a container (see the docs).
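As a minimal sketch following the registry image's own docs (port 5000 and the container name are the documented defaults):

# start a private registry as an ordinary container on the remote server
docker run -d -p 5000:5000 --name registry registry:2
# then tag and push an image to it from any machine that can reach it
docker tag my-image your-remote-server:5000/my-image
docker push your-remote-server:5000/my-image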
I'd advise you to stick to the solutions proposed by the Docker team and use the public Docker Hub registry, or your own registry if you have sensitive data.
You are using GitLab, and GitLab provides its own registry. You can push your images to your own GitLab registry and pull them from your remote server. Your remote server only needs to authenticate against your registry and you're done. GitLab CI can directly build and push your images to your registry on each push to the master branch, for example. You can find many examples in the docs.
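As a hedged sketch of what such a job could look like, using GitLab's predefined CI variables (the docker:dind service assumes your runner supports it):

deploy_staging:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
    # on the remote server you would then do a docker login against your
    # GitLab registry, followed by: docker pull $CI_REGISTRY_IMAGE:latest
  only:
    - master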

Related

Use Paketo.io / CloudNativeBuildpacks (CNB) in GitLab CI with Kubernetes executor & unprivileged Runners (without pack CLI & docker)

We want to use Paketo.io / CloudNativeBuildpacks (CNB) in GitLab CI in the most simple way. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI runners leveraging the Kubernetes executor. We also don't want to introduce security risks by using Docker in our builds, so we neither expose the host's /var/run/docker.sock nor want to use docker:dind.
We found some guides on how to use Paketo with GitLab CI, like this one: https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/ . But as described beneath the headline "Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template", the approach relies on Docker and the pack CLI. We tried to reproduce this in our .gitlab-ci.yml, which looks like this:
image: docker:20.10.9

stages:
  - build

before_script:
  - |
    echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
    apk add --no-cache curl
    (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)

build-image:
  stage: build
  script:
    - pack --version
    - >
      pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
      --builder paketobuildpacks/builder:base
      --path .
But as outlined, our setup does not support Docker, and we end up with the following error in our logs:
...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Any idea how to use Paketo buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be kind of a best practice)? We also don't want our setup to become too complex - e.g. by adding kpack.
TLDR;
Use the buildpack's lifecycle directly inside your .gitlab-ci.yml (here's a fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
The details: "using the lifecycle directly"
There are ongoing discussions about this topic; have a look especially at https://github.com/buildpacks/pack/issues/564 and https://github.com/buildpacks/pack/issues/413#issuecomment-565165832. As stated there:
If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:
The link to the example is broken, but it refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. Here we get a first clue about what Stephen Levine referred to as "using the lifecycle directly": the crucial point is the usage of command: ["/cnb/lifecycle/creator"]. So this is the lifecycle everyone is talking about! And there's good documentation about this command in this CNB RFC.
Choosing a good image: paketobuildpacks/builder:base
So how do we develop a working .gitlab-ci.yml? Let's start simple. Digging into the Tekton implementation, you'll see that the lifecycle command is executed inside an environment defined by BUILDER_IMAGE, which itself is documented as "The image on which builds will run (must include lifecycle and compatible buildpacks)". That sounds familiar! Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (if you like, you can clone my example Spring Boot app at gitlab.com/jonashackt/microservice-api-spring-boot) and run:
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
Now, inside the container powered by the paketobuildpacks/builder image, try to run the Paketo lifecycle directly with:
/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
I only used the -app parameter of the creator command's many possible parameters, since most of them have quite good defaults. But as the app directory defaults to /workspace rather than the current directory, I configured it explicitly. We also need to define an <image-name> at the end, which will simply be used as the resulting container image name.
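For illustration, a hedged sketch of a creator call setting a couple of the other parameters (flag names as documented for the lifecycle in the CNB platform spec; the cache image name is just an assumption):

# -cache-image stores the build cache in a registry, -log-level makes the output verbose
/cnb/lifecycle/creator -app=. -cache-image=$CI_REGISTRY_IMAGE/cache -log-level=debug $CI_REGISTRY_IMAGE:latest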
The first .gitlab-ci.yml
Both commands worked on my local workstation, so let's finally create a .gitlab-ci.yml using this approach (here's a fully working example .gitlab-ci.yml):
image: paketobuildpacks/builder

stages:
  - build

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
docker login without docker
As we don't have Docker available inside our Kubernetes runners, we can't log into the GitLab Container Registry as described in the docs. So the following error occurred with this first approach:
===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Using the approach described in this SO answer fixed the problem. We need to create a ~/.docker/config.json containing the GitLab Container Registry login information - the Paketo build will then pick it up, as stated in the docs:
If CNB_REGISTRY_AUTH is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.
Inside our .gitlab-ci.yml this could look like:
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
Our final .gitlab-ci.yml
As we're using image: paketobuildpacks/builder at the top of our .gitlab-ci.yml, we can now leverage the lifecycle directly, which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables to describe your <image-name>, like this:
/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Otherwise the buildpack's analyze step will break, and the image won't get pushed to the GitLab Container Registry. So finally our .gitlab-ci.yml looks like this (here's the fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Our builds should now run successfully using Paketo/buildpacks, without the pack CLI and Docker:
See the full log of the example project here.

Automate local deployment of docker containers with gitlab runner and gitlab-ci without privileged user

We have a prototype-oriented development environment in which many small services are developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI/CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all the documentation we find uses a cloud service or a Kubernetes cluster as the target environment. However, we want to configure our GitLab runner to deploy Docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on system of runner without privileged user
TL;DR: How can we configure our local GitLab runner to locally deploy Docker containers from images defined in GitLab CI/CD, without the use of privileges?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid the privileged user, you can use the kaniko executor image in GitLab.
Specifically, you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage you simply need to reference the created image.
You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage: that last snippet only runs the image inside the GitLab runner, whereas usually you would use the image from the container registry to deploy the container locally.
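For an actual local deployment, one common pattern is a sketch like the following, assuming a runner on the target host using the shell executor whose user may talk to the local Docker daemon (e.g. via docker group membership - note this effectively grants Docker-level privileges on that host, so weigh it against your security requirements):

production:
  stage: deploy
  tags:
    - local-shell-runner   # hypothetical tag of the shell-executor runner on the target host
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
    - docker rm -f myservice || true   # replace any previous container (name assumed)
    - docker run -d --name myservice "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  rules:
    - if: $CI_COMMIT_TAG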

How to use .gitlab-ci.yml on gitlab.com to build from Dockerfile and copy artifact from Dockerfile to GitLab public directory?

While I can successfully write a .gitlab-ci.yml on gitlab.com that passes my artifact to public/,
image: ruby:2.3

pages:
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
I would like .gitlab-ci.yml (a container (#1) based on a Docker image (#2)) to instead use a Dockerfile to build a different image (#3) that creates the artifact, and then, again, pass the artifact from #3 to container #1's public/ directory so it is publicly accessible on the web.
# Dockerfile
FROM ruby:2.3
...
The rationale for this is that I might share the repo with a colleague who understands Dockerfiles but is not interested in using GitLab or .gitlab-ci.yml. So I want .gitlab-ci.yml to use the Dockerfile for my purposes, while my colleague can use the Dockerfile in whatever way is most comfortable for them: the Dockerfile is the shared, reproducible way of making the artifact.
It seems I can build Docker images in GitLab CI/CD, but I'm not sure whether I can do so on gitlab.com rather than on self-hosted GitLab. That would let me push the image to a registry and use the image from the registry in a later stage (including the artifact), although I'm not sure how I would extract it.
docker build -t myimagename . does not work in GitLab CI/CD on gitlab.com (unless I add some services to the .gitlab-ci.yml?). And even if it did, how would I extract the artifact from the myimagename container into the GitLab CI/CD "container"?
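For what it's worth, a sketch of the usual shape of such a job on gitlab.com shared runners (the docker:dind service is what makes docker build work there; the /srv/jekyll/public path inside the image is purely an assumption about the Dockerfile):

image: docker:latest
services:
  - docker:dind

pages:
  script:
    - docker build -t myimagename .
    # extract the artifact via a temporary container instead of running the image
    - docker create --name extract myimagename
    - docker cp extract:/srv/jekyll/public ./public   # path inside the image is assumed
    - docker rm extract
  artifacts:
    paths:
      - public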

Keeping docker builds in Gitlab CI with docker-compose

I have a repository that includes three parts: frontend, admin and server. Each contains its own Dockerfile.
After building the images I wanted to add a test for admin. My tests pass but take a lot of time, because each stage pulls the base image and builds everything from scratch (around 8 minutes per stage). This is my .gitlab-ci.yml:
image: tmaier/docker-compose

services:
  - docker:dind

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin
I am not quite sure whether I need to push/pull images between stages, or whether I should do that with artifacts/cache/whatever. As I understood it, I only need to push/pull if I want to deploy my images to another server. I also added a docker-compose push, which runs through, but GitLab doesn't show me any images in my registry.
I have been researching this a lot, but most example code I found covered only a single Docker container and didn't make use of docker-compose.
Any ideas? :)
GitLab currently has no way to share Docker images between stages as artifacts; there has been an outstanding feature request for this for three years.
You'll need to push the Docker image to the Docker registry and pull it in the later stages that need it (or do everything related to the image in one stage).
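Applied to the .gitlab-ci.yml from the question, that means pulling the pushed images before starting the test. A sketch reusing the question's own variables (it assumes the compose file tags its services with registry image names, so that push/pull address the same images):

test:admin:
  stage: test
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose pull admin
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin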
Mark, could you show the files docker-compose.yml and docker-compose.test.yml? Maybe you are pushing and pulling different images. By the way, try placing the docker login in a before_script section so that it applies to all jobs.
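A minimal sketch of that suggestion, reusing the variables from the question:

before_script:
  - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY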

How to build, push and pull multiple docker containers with Gitlab CI?

I have a docker-compose file which builds two containers, a Node app and an nginx server. Now I would like to automate the build and run process on the server with the help of GitLab runners. I am pretty new to CI-related stuff, so please excuse my approach:
I would want to create multiple repositories on gitlab.com and have a Dockerfile for each of them. Do I now have to associate a gitlab-runner instance with each of these projects in order to build the image, push it to a Docker repo and let the server pull it from there? And then I would have to somehow push the docker-compose file to the server and compose everything from there.
So my questions are:
Am I able to run multiple (2 or 3) gitlab-runners for all of my repos on one server?
Do I need a specific or shared runner and what exactly is the difference?
Why do all tutorials use self-hosted GitLab instances instead of just using gitlab.com repos (is it not possible to use gitlab-runner with gitlab.com repos)?
Is it possible to use docker-compose in a gitlab-runner pipeline and just build everything at once?
First of all, you can obviously use GitLab CI/CD features on https://gitlab.com as well as on self-hosted GitLab instances. Nothing changes, except the host with which you register your runner:
https://gitlab.com/ in case you use GitLab without hosting it
https://your-custom-domain/ in case you host your own instance of GitLab
You can add as many runners as you want (at least I have 5-6 runners per project without problems). You just need to register each of those runners for your project. See Registering Runners for that.
As for shared runners versus specific runners, I think you should stick to shared runners if you want to try out GitLab CI/CD.
Shared Runners on GitLab.com run in autoscale mode and are powered by DigitalOcean. Autoscaling means reduced wait times to spin up builds, and isolated VMs for each project, thus maximizing security.
They're free to use for public open source projects and limited to 2000 CI minutes per month per group for private projects. Read about all GitLab.com plans.
You can install your own runners on literally any machine though, for example your laptop. You can deploy one with Docker for a quick start.
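A quick sketch of that, following the commands from GitLab's runner-in-Docker docs:

# start the runner as a container, persisting its config on the host
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# then register it interactively against https://gitlab.com/
docker run --rm -it -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register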
Finally, yes, you can use docker-compose in a .gitlab-ci.yml file if you use the SSH executor and have docker-compose installed on your server.
But I recommend using the docker executor with the docker:dind (Docker-in-Docker) image:
What is Docker in Docker?
Although running Docker inside Docker is generally not recommended, there are some legitimate use cases, such as development of Docker itself.
Here is an example usage, without docker-compose though:
image: docker:latest

services:
  - name: docker:dind
    command: ["--experimental"]

before_script:
  - apk add --no-cache py-pip   # <-- install the python package manager pip
  - pip install docker-compose  # <-- install docker-compose
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin  # <-- log in to your registry

build-master:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":latest
  only:
    - master

build-dev:
  stage: build
  script:
    - docker build --squash --pull -t "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_USER"/"$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_SLUG"
  except:
    - master
As you can see, I build the Docker image, tag it, then push it to my Docker registry, but you could push to any registry. And of course you could use docker-compose at any point in a script declaration.
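For instance, a build job based on the before_script above could call docker-compose directly (a sketch, assuming a docker-compose.yml at the repository root whose services declare registry image names):

build-compose:
  stage: build
  script:
    - docker-compose build
    - docker-compose push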
My Git repository looks like :
/my_repo
|---- .gitignore
|---- .gitlab-ci.yml
|---- Dockerfile
|---- README.md
And the config.toml of my runner looks like:
[[runners]]
  name = "4Gb digital ocean vps"
  url = "https://gitlab.com"
  token = "efnrong44d77a5d40f74fc2ba84d8"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:dind"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
You can take a look at https://docs.gitlab.com/runner/configuration/advanced-configuration.html for more information about Runner configuration.
Note: all the variables used here are secret variables. See https://docs.gitlab.com/ee/ci/variables/ for explanations.
I hope this answers your questions.
