Docker and GitLab - how to modify the docker run command

I'm very new to continuous integration with Docker and GitLab.
I have a situation where my script in .gitlab-ci.yml needs to encode files with ionCube, but that's not fully possible due to some security restrictions that Docker places on containers. Therefore, I need to modify the docker run command that GitLab uses when I start a job for my GitLab project.
According to this page...
In addition, a change to the Docker security options on the container will be required to allow for the licensing process to function by using the --security-opt seccomp:unconfined option to the docker run command.
I need to add that extra parameter to the docker run call, but since GitLab does that somewhere internally, I have no idea how to proceed.
Is there a way I can get GitLab to include --security-opt seccomp:unconfined when I run a job?
EDIT: I host Gitlab on my own server.

The GitLab CI process executes its pipeline stages/builds via a GitLab Runner (https://docs.gitlab.com/runner/).
The GitLab Runner is registered to a GitLab instance or a specific GitLab Project. The configuration that you specify in the gitlab-ci.yml file is what gets executed by the Runner. In your case, you're specifying the GitLab Runner to execute a Docker container.
There is some advanced configuration that you can do with the GitLab Runners (https://docs.gitlab.com/runner/configuration/advanced-configuration.html). The setting that you are looking for is in this section: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-docker-section.
On the server that is hosting your GitLab Runner (or in the Docker instance that is hosting your GitLab Runner), modify the config.toml file (probably at /etc/gitlab-runner/config.toml). You should see a [runners.docker] section if you've registered this Runner to execute Docker containers. It is in this section that you want to add (note the TOML assignment syntax):
security_opt = ["seccomp:unconfined"]

Related

How can I auto-deploy with gitlab-ci.yml on a remote server without a GitLab Runner?

I want to auto-deploy a simple docker-compose.yml file with a database and an API from gitlab-ci.yml. I have an Ubuntu server running on a specific IP where I can pull the GitLab project and run it manually with docker-compose up -d, but how can I achieve this automatically through gitlab-ci.yml, without using a GitLab Runner?
Gitlab CI ("Continuous Integration") inherently involves Gitlab Runners.
You can't use one without the other.
Whether you use a Gitlab Runner or not, you can achieve what you ask in a number of different ways, such as:
Use subprocess to invoke system commands like scp to copy files over and ssh <host> <command> to run remote commands.
Use paramiko to do the same thing in a more Pythonic way.
Use a provisioning tool such as Ansible.
If you're not using a Gitlab Runner, your code would invoke these directly. For someone using a Gitlab Runner, their .gitlab-ci.yml would contain these scripts / script calls instead.
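For instance, a minimal sketch of a deploy job using plain scp/ssh from a pipeline (the host, user, and path are placeholders, and it assumes an SSH private key has been made available to the job, e.g. through a CI/CD variable):

deploy:
  stage: deploy
  only:
    - master
  script:
    # copy the compose file to the server (203.0.113.10 is a placeholder IP)
    - scp docker-compose.yml user@203.0.113.10:/srv/app/
    # pull the new images and restart the stack remotely
    - ssh user@203.0.113.10 "cd /srv/app && docker-compose pull && docker-compose up -d"

Without a runner, the same two scp/ssh commands can just as well be run from any script or cron job that reacts to a push.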

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have lots of experience with DevOps yet, I am struggling with finding an answer for the following question:
I'm setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before stages: Install all dependencies (install python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is necessary to launch the application for testing (unit tests).
The build stage builds a new docker image and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on an AWS EC2 instance; the runner executor is "docker" with an "ubuntu:20.04" image. So, the question:
How do I run docker-compose/docker build inside the docker executor, and can it be done at all without negative consequences?
I thought about several options:
Switch from docker executor to something else (maybe to shell or docker+ssh)
Use Docker-in-Docker, but I see warnings that it can be dangerous, and I'm not sure exactly why in my case.
What I've tried:
Using Redis as a "service" in the GitLab job instead of the docker-compose file, but I couldn't find a way to bind my application (host and port) to the Redis server that runs as a service inside the docker executor.
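For reference, the Docker-in-Docker route mentioned above usually looks something like this for the build stage; a sketch, assuming the runner's [runners.docker] section has privileged = true (which is exactly what the warnings are about), and where $DOCKERHUB_USER, $DOCKERHUB_PASS, and the image name myapp are variables and names you would define yourself:

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  rules:
    - if: $CI_COMMIT_TAG   # only build and push on new tags
  script:
    - docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASS"
    - docker build -t "$DOCKERHUB_USER/myapp:$CI_COMMIT_TAG" .
    - docker push "$DOCKERHUB_USER/myapp:$CI_COMMIT_TAG"

The danger usually cited is that privileged = true effectively gives every job root-equivalent access to the host, so it is best reserved for runners that only run trusted projects.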

How does gitlab-ci work internally with gitlab runner?

I have some specific questions regarding gitlab-ci and runner:
If my specific runner is configured in a Kubernetes cluster, how does the code get mirrored into the runner from the GitLab repository?
How does the build happen in the runner when it is configured within a Kubernetes cluster?
When using a docker image in my .gitlab-ci.yml, how are those images pulled by the runner, and how are the commands in the "script" section executed inside those docker containers? Does the runner create pods within the Kubernetes cluster (where the runner is configured) with the image mentioned in .gitlab-ci.yml, and execute the commands within those containers?
Any additional explanations or references to learning material on how the GitLab Runner works internally are highly appreciated.
I'm assuming when you say your GitLab Runner is configured in Kubernetes you mean you're using the Kubernetes executor. I marked the sections relevant to your questions.
(1) GitLab CI pulls the code from the repository (if it's public that's not an issue, but private repositories work too). Basically, a helper image is used to clone the repository and download any artifacts into a container.
The Kubernetes executor lets you use an existing Kubernetes cluster to execute your pipeline/build step by calling the Kubernetes cluster API and creating a new Pod, with both build and services containers for each job. (3)
A more detailed view of the steps a Runner takes:
Prepare: Create the Pod against the Kubernetes Cluster. This creates the containers required for the build and services to run.
Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special container as part of the Pod. (2)
Build: User build.
Post-build: Create cache, upload artifacts to GitLab. This also uses the special container as part of the Pod.
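To make this concrete, a runner using the Kubernetes executor has a config.toml along these lines (a sketch; the name, url, namespace, and default image are placeholders):

[[runners]]
  name = "k8s-runner"
  url = "https://gitlab.example.com/"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-runner"   # placeholder namespace for job Pods
    image = "ubuntu:20.04"        # placeholder default job image

The image here is only a default: when a job in .gitlab-ci.yml specifies its own image:, that is what the build container in the Pod runs, and any services: entries become additional containers in the same Pod.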
The GitLab repository for the runners might also be interesting for you.

CD with GitLab, docker and docker private registry

We need to automate the deployment process. Let me outline the stack we use.
We have our own GitLab CE instance and a private docker registry. On the production server, the application runs in a container. After every commit to master, GitLab CI builds an image with the code in it and pushes it to the docker registry, and this is where the automation ends.
Deployment on the production server comes down to a few steps: stopping the current application container, pulling the newer one, and running it.
What is the best way to automate this process?
I read about a couple of solutions (but I believe there are many more):
the private docker registry notifies the production server, which performs all the above steps itself (a script on the production machine, managed by e.g. supervisor or something similar)
using docker-machine to remotely manage running containers
What is the preferred way? Or can you recommend something else?
No need to use tools like swarm, kubernetes, etc. It's quite simple application. Thanks in advance.
How about installing a GitLab CI runner on your production machine, and adding a job after the push to the registry on master, called deploy, pinned to that machine using GitLab CI tags?
The job simply pulls the image from the registry and restarts your service or whatever you have in place.
Something like:
deploy-job:
  stage: deploy
  tags:
    - production
  script:
    - docker login myprivateregistry.com -u $SECRET_USER -p $SECRET_PASS
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker-compose down
    - docker-compose up -d
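For this to work, the runner on the production machine would typically use the shell executor (so docker and docker-compose act on the host's daemon) and must be registered with the production tag; $SECRET_USER and $SECRET_PASS would be CI/CD variables defined in the project settings.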
I can think of four solutions:
use watchtower on the production server: https://github.com/v2tec/watchtower
run a webhook server that your CI calls after pushing the image to the registry (see the sketch below): https://github.com/adnanh/webhook
as already mentioned, run the CI on production too, which finally triggers your update commands
enable the Docker API and update the container by calling it from the CI
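As an illustration of the webhook option, the CI side can be a single extra job that calls the webhook endpoint once the image has been pushed; a sketch, where the URL, the hook id redeploy, and $DEPLOY_TOKEN are placeholders, and the webhook server on the production machine is assumed to map that hook to a script doing the docker pull and restart:

notify-deploy:
  stage: deploy
  only:
    - master
  script:
    # production.example.com and the token are placeholders
    - curl -fsS -X POST "https://production.example.com:9000/hooks/redeploy?token=$DEPLOY_TOKEN"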

Gitlab Continuous Integration on Docker

I have a GitLab server running in a Docker container: gitlab docker
On GitLab there is a project with a simple Makefile that runs pdflatex to build a PDF file.
In the Docker container I installed texlive and make; I also installed the docker runner with the command:
curl -sSL https://get.docker.com/ | sh
the .gitlab-ci.yml looks like follow:
.build:
  script: &build_script
    - make

build:
  stage: test
  tags:
    - Documentation Build
  script: *build_script
The job is stuck running and a message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
Any ideas?
The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at GitLab's own advice, you should not be using this container on Windows, ever.
If you want to use GitLab CI from a GitLab server, you should actually be installing a proper GitLab server instance on a supported Linux VM with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: a real, production way to run GitLab.
Gitlab-omnibus contains:
a persistent (not stateless!) data tier powered by postgres.
a chat server whose entire point is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you GitLab server functionality and a web admin/management frontend, in a design that does not seem to me ideal to run in production inside docker.
an integrated CI build manager that is itself a Docker container manager. Your docker instance is going to contain a cache of other docker instances.
That this container was built by GitLab itself is no indication that you should use it for anything other than a test/toy, or for what GitLab themselves actually use it for, which is probably letting people spin up GitLab nightly builds, probably via Kubernetes.
I think you're slightly confused here. Judging by this comment:
In the Docker container I installed texlive and make; I also installed the docker runner with the command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners? This won't work if that's the case. The steps to get this running are:
Deploy a new gitlab runner. The quickest way to do this will be to deploy another docker container with the gitlab runner docker image. You can't run a runner inside the docker container you've deployed gitlab in. You'll need to make sure you select an executor (I suggest using the shell executor to get you started) and then you need to register the runner. There is more information about how to do this here. What isn't detailed here is that if you're using docker for gitlab and docker for gitlab-runner, you'll need to link the containers or set up a docker network so they can communicate with each other
Once you've deployed and registered the runner with GitLab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from here you'll need to assign it to a project. You can also mark it as a "Shared" runner, which will execute jobs from all projects.
Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
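A sketch of the deploy-and-register steps with the docker approach (the network name and mounted config path are placeholders; the registration token comes from your GitLab admin area or project settings):

# create a network so the runner can reach the GitLab container
# (assuming your GitLab container is named "gitlab")
docker network create gitlab-net
docker network connect gitlab-net gitlab

# deploy the runner container
docker run -d --name gitlab-runner --restart always \
  --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

# register it against your GitLab instance
docker run --rm -it \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest register

The register command asks interactively for the GitLab URL, the registration token, and the executor, and writes the result to the mounted config.toml.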
Maybe you've set the wrong tags, like me. Make sure the tag name matches an available runner.
tags:
  - Documentation Build  # tags is used to select specific Runners from the list of all Runners that are allowed to run this project
see: https://docs.gitlab.com/ee/ci/yaml/#tags
