GitLab Continuous Integration on Docker

I have a GitLab server running in a Docker container: gitlab docker
On GitLab there is a project with a simple Makefile that runs pdflatex to build a PDF file.
On the Docker container I installed texlive and make; I also installed the docker runner with this command:
curl -sSL https://get.docker.com/ | sh
The .gitlab-ci.yml looks as follows:

.build:
  script: &build_script
    - make

build:
  stage: test
  tags:
    - Documentation Build
  script: *build_script
The job is stuck and the following message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
Any ideas?

The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at GitLab's own advice, you should not be using this container on Windows, ever.
If you want to use GitLab CI from a GitLab server, you should actually be installing a proper GitLab server instance on a proper supported Linux VM, with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: running GitLab in real production.
GitLab Omnibus contains:
a persistent (not stateless!) data tier powered by Postgres.
a chat server whose entire point of existing is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you GitLab server functionality and a web admin/management frontend, in a design that does not seem ideal to me to run in production inside Docker.
an integrated CI build manager that is itself a Docker container manager. Your Docker instance is going to contain a cache of other Docker instances.
That this container was built by GitLab itself is no indication you should actually use it for anything other than as a test/toy, or for what GitLab themselves actually use it for, which is probably to let people spin up GitLab nightly builds, probably via Kubernetes.
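For reference, the standard Omnibus install on a supported Ubuntu VM looks roughly like this (the external URL is a placeholder; check GitLab's install page for the current repository script):

# Add GitLab's package repository, then install GitLab CE (Ubuntu/Debian)
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install gitlab-ce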

I think you're slightly confused here. Judging by this comment:
On the Docker container I installed texlive and make; I also installed
the docker runner with this command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners? This won't work if that's the case. The steps to get this running are:
Deploy a new GitLab runner. The quickest way to do this is to deploy another Docker container with the gitlab-runner Docker image. You can't run a runner inside the Docker container where you've deployed GitLab. You'll need to make sure you select an executor (I suggest using the shell executor to get you started), and then you need to register the runner. There is more information about how to do this here. What isn't detailed there is that if you're using Docker for GitLab and Docker for gitlab-runner, you'll need to link the containers or set up a Docker network so they can communicate with each other (a sketch follows below).
Once you've deployed and registered the runner with GitLab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from here you'll need to assign it to a project. You can also mark it as a "Shared" runner, which will execute jobs from all projects.
Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
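For illustration, a minimal sketch of deploying and registering such a runner with the official gitlab/gitlab-runner image (the network name, volume path, and container names are placeholders; the registration token comes from your GitLab admin/runners page):

# Create a shared network so the runner can reach the GitLab container by name
docker network create gitlab-net
docker network connect gitlab-net gitlab    # assuming your GitLab container is named "gitlab"

# Deploy the runner with persistent configuration
docker run -d --name gitlab-runner --restart always \
  --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest

# Register it interactively; when prompted, point it at the GitLab container,
# choose an executor (e.g. shell), and set the tag "Documentation Build"
docker exec -it gitlab-runner gitlab-runner register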

Maybe you've set the wrong tags, like me. Make sure the tag name matches one of your available runners.
tags:
  - Documentation Build # tags is used to select specific runners from the list of all runners that are allowed to run this project
see: https://docs.gitlab.com/ee/ci/yaml/#tags

Related

How can I auto-deploy with gitlab-ci.yml on a remote server without GitLab Runner?

I want to auto-deploy a simple docker-compose.yml file with a database and an API from gitlab-ci.yml. I have an Ubuntu server running on a specific IP where I can pull the GitLab project and run it manually with docker-compose up -d, but how can I achieve this automatically through gitlab-ci.yml, without using GitLab Runner?
Gitlab CI ("Continuous Integration") inherently involves Gitlab Runners.
You can't use one without the other.
Whether you use a Gitlab Runner or not, you can achieve what you ask in a number of different ways, such as:
Use subprocess to invoke system commands like scp to copy files over and ssh <host> <command> to run remote commands.
Use paramiko to do the same thing in a more Pythonic way.
Use a provisioning tool such as Ansible.
If you're not using a GitLab Runner, your code would invoke these directly. For someone using a GitLab Runner, their .gitlab-ci.yml would contain these scripts / script calls instead; a sketch of the scp/ssh variant follows.
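As a rough illustration of the scp/ssh option (user, host, and paths are placeholders, and key-based SSH authentication between the machines is assumed):

# Copy the compose file to the server and restart the stack remotely
scp docker-compose.yml deploy@203.0.113.10:/srv/myapp/
ssh deploy@203.0.113.10 "cd /srv/myapp && docker-compose pull && docker-compose up -d"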

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have much experience with DevOps yet, I am struggling to find an answer to the following question:
I'm setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before stages: install all dependencies (install Python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is necessary to launch the application for testing (unit tests).
The build stage creates a new Docker container and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on an AWS EC2 instance, and the runner executor is "docker" with an "ubuntu:20.04" image. So, the question:
How to run "docker-compose"/"docker build" inside the docker executor and whether it can be done at all without any negative consequences?
I thought about several options:
Switch from the docker executor to something else (maybe shell or docker+ssh)
Use Docker-in-Docker, but I have seen cautions that it can be dangerous, and I'm not sure exactly why in my case.
What I've tried:
Using Redis as a "services" entry in the GitLab job instead of the docker-compose file, but I can't find a way to bind my application (host and port) to a server that runs inside the docker executor as a service.
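For reference, GitLab's documented Docker-in-Docker pattern for building images inside the docker executor looks roughly like this (image tags and the image name are illustrative; see GitLab's "Use Docker to build Docker images" docs for current recommendations):

build:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - docker build -t myuser/myapp:$CI_COMMIT_TAG .
    - docker push myuser/myapp:$CI_COMMIT_TAG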

Best practice using docker inside Jenkins?

Hi, I'm learning how to use Jenkins integrated with Docker, and I don't understand what I should do to make them communicate.
I'm running Jenkins inside a Docker container and I want to build an image in a pipeline. So I need to execute some docker commands inside the Jenkins container.
So the question here is where docker comes from. I understand that we need to bind mount the Docker host daemon socket into the Jenkins container, but this container still needs the binaries to execute Docker.
I have seen some approaches to achieve this and I'm confused about what I should do. I have seen:
bind mount the docker binary (/usr/local/bin/docker:/usr/bin/docker)
installing docker in the image
if I'm not wrong, the Blue Ocean image comes with Docker pre-installed (I have not found any documentation on this)
Also I don't understand what Docker plugins for Jenkins can do for me.
Thanks!
Docker has a client-server architecture. The server is the Docker daemon and the client is basically the command line interface that allows you to execute docker ... from the command line.
Thus, when running Jenkins inside Docker, you will need access to connect to the daemon. This is achieved by binding /var/run/docker.sock into the container.
At this point you need something to communicate with the daemon, which is the server. You can do that by providing access to the docker binaries, achieved by either mounting the docker binaries or installing the client binaries inside the Jenkins container.
Alternatively, you can communicate with the daemon using the Docker REST API without having the docker client binaries inside the Jenkins container. You can, for instance, build an image using the API.
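As a quick sketch of the socket-binding approach (the volume name and ports are the usual defaults, not requirements):

# Run Jenkins with the host's Docker socket mounted, so a docker CLI
# inside the container talks to the host's Docker daemon
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts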
Also I don't understand what Docker plugins for Jenkins can do for me
The Docker plugin for Jenkins isn't useful for the use case that you described. This plugin allows you to provision Jenkins slaves using Docker. You can, for instance, run a compilation inside a Docker container that gets automatically provisioned by Jenkins.
It is not best practice to use Docker with Jenkins. It is also not a bad practice. The relationship between Jenkins and Docker is not so fixed that having Docker is inherently good or bad.
Jenkins is a Continuous Integration Server, which is a fancy way of saying "a service that builds stuff at various times, according to predefined rules"
If your end result is a docker image to be distributed, you have Jenkins call your docker build command, collect the output, and report on the success / failure of the docker build command.
If your end result is not a docker image, you have Jenkins call your non-docker build command, collect the output, and report on the success / failure of the non-docker build.
How you have the build launched depends on how you would build the product. Makefiles are launched with make, Apache Ant with ant, Apache Maven with mvn package, Docker with docker build, and so on. From Jenkins's perspective, it doesn't matter, provided you provide a complete set of rules to launch the build, collect the output, and report the success or failure.
Now, for the 'Docker plugin for Jenkins'. As @yamenk stated, Jenkins uses build slaves to perform the build. That plugin will launch the build slave within a Docker container. The thing built within that container may or may not be a Docker image.
Finally, running Jenkins inside a Docker container just means you need to bind your Docker-ized Jenkins to the external world, as @yamenk indicates, or you'll have trouble launching builds.
Bind mounting the docker binary into the Jenkins image only works if the Jenkins image is "close enough" - it has to contain the required shared libraries!
So when using a standard jenkins/jenkins:2.150.1 within an Ubuntu 18.04 host, this unfortunately does not work. (It looked so nice and slim ;)
So the requirement is to build or find a Docker image which contains a docker client compatible with the host's Docker service.
Many people seem to install docker in their Jenkins image; a sketch of that approach follows.
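For illustration, a minimal Dockerfile along those lines, installing only Docker's static client binary (the version number is a placeholder; pick one compatible with your host daemon):

FROM jenkins/jenkins:lts
USER root
# Install the static docker client binary only, not the daemon
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker
USER jenkins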

CD with GitLab, docker and docker private registry

We need to automate the deployment process. Let me point out the stack we use.
We have our own GitLab CE instance and a private Docker registry. On the production server, the application runs in a container. After every master commit, GitLab CI builds the image with the code in it and sends it to the Docker registry, and this is where the automation ends.
Deployment on the production server could be performed in a few steps: stopping the current application container, pulling the newer one, and running it.
What is the best way to automate this process?
I read about a couple of solutions (but I believe there are many more):
the Docker private registry notifies a production server, which performs all of the above steps itself (a script on the production machine managed by e.g. supervisor or something similar)
using docker-machine to remotely manage the running containers
What is the preferred way? Or can you recommend something else?
No need for tools like Swarm, Kubernetes, etc. It's quite a simple application. Thanks in advance.
How about installing a GitLab CI runner on your production machine? Then perform a job after the push to the registry on master, called deploy, and pin it to that machine using GitLab CI tags.
The job simply pulls the image from the registry and restarts your service or whatever you have in place.
Something like:
deploy-job:
  stage: deploy
  tags:
    - production
  script:
    - docker login myprivateregistry.com -u $SECRET_USER -p $SECRET_PASS
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker-compose down
    - docker-compose up -d
I can think of four solutions:
use watchtower on the production server: https://github.com/v2tec/watchtower
run a webhook server which is called by your CI after pushing the image to the registry (see the sketch after this list): https://github.com/adnanh/webhook
as already mentioned, run the CI on production too, which finally triggers your update commands
enable the Docker API and update the container by requesting it from the CI
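As a rough sketch of the webhook option, assuming adnanh/webhook runs on the production host with a hook named redeploy-myapp mapped to your pull-and-restart script (host, port, and hook name are placeholders; 9000 is webhook's default port):

notify-production:
  stage: deploy
  script:
    - curl -fsS -X POST "http://production.example.com:9000/hooks/redeploy-myapp"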

Jenkins and Docker

Is there a way to do automation with Jenkins to deploy and run containers? I heard we can use the Docker plugins for it. But there aren't any tutorials or info that explain how we can use Jenkins and Docker together. Anyone who uses them both care to share?
First off, in my implementation of things, Jenkins is actually a container in Docker.
Here's where it may seem things get bizarre: I actually install docker-ce inside of that container, not because I want to run Docker-in-Docker. I disable the Docker daemon from running (systemctl), but I want the command line.
I install docker-compose and docker-machine on the Jenkins host and add the "jenkins" user ID to the docker group.
There are a bunch of other steps that I do, but basically they are the same steps that a user would go through (except it's all in my Dockerfile), and I add the results of "docker-machine env" to the global variables in the Jenkins configuration.
head spinning yet?
The applications I have Jenkins deploying all have a "jenkins" subdirectory with a Jenkinsfile in it to perform the dirty work as a pipeline (build/test/deploy).
Deployments for Java apps, for instance, involve copying the WAR file for the application to the correct directory, and when the container (or containers) start, the application engine (Tomcat, JBoss, whatever) picks it up and the application runs.
Have a look at
https://registry.hub.docker.com/search?q=jenkins&searchfield=
and at some Dockerfiles such as
https://registry.hub.docker.com/u/niaquinto/jenkins/dockerfile/
or
https://registry.hub.docker.com/u/aespinosa/jenkins/dockerfile/
