Gitlab-runner installation - docker

I installed GitLab and Docker on Ubuntu. Now I need to install GitLab Runner using the Docker executor. Is it necessary for GitLab itself to be running in Docker, or is it enough if both run on the same machine?

GitLab Runner is the open-source project that runs your jobs and sends the results back to GitLab. It only needs network connectivity to GitLab, which is established by registering the runner (an example registration command follows below).
Registering a Runner is the process that binds the Runner with a GitLab instance.
If you want to use Docker, GitLab Runner requires a minimum of Docker v1.13.0.
It allows you to:
Run multiple jobs concurrently.
Use multiple tokens with multiple servers (even per-project).
Limit the number of concurrent jobs per token.
Jobs can be run:
Locally.
Using Docker containers.
Using Docker containers and executing job over SSH.
Using Docker containers with autoscaling on different clouds and virtualization hypervisors.
Connecting to remote SSH server.
The GitLab Runner version should be kept in sync with the GitLab version; features may be unavailable or may not work properly if there is a version difference.
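Since the question is about the Docker executor specifically, here is a minimal registration sketch, assuming GitLab Runner is already installed on the same Ubuntu host; the URL, registration token, default image, and description are placeholders you would take from your own GitLab instance (for example from the project's Settings > CI/CD > Runners page):
sudo gitlab-runner register \
  --non-interactive \
  --url "https://your-gitlab.example.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "docker-executor-runner"
Once registered, the runner talks to GitLab over HTTP(S), so GitLab itself does not need to run inside Docker; running both on the same machine is fine.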

Related

GitHub Container Actions in Containerised Runner

I have deployed a pool of self-hosted GitHub runners as pods to my Kubernetes cluster. Some of our pipelines contain jobs which run container actions. Is it possible to run those jobs in this type of runner?
Docker in Docker is configured in the deployment, and I can build docker images and push them to the container registry.
I note that the GitHub docs state:
If you want to run workflows that use Docker container actions or service containers, you must use a Linux machine and Docker must be installed.
I've struggled to find any definitive answers to this online.

Proper way to deploy Docker services via GitLab CI/CD to my own server

My application is built using 3 Docker services:
backend (Node.js)
frontend (React)
nginx (routing traffic)
Up until now I was manually logging into my own Digital Ocean server, cloning the repository, and launching the services with docker-compose build && docker-compose up.
I want to automate the process from now on.
Given GitLab CI/CD pipelines and runners, what would be the best approach to automatically deploy the code to the Digital Ocean server?
[WHAT I WAS THINKING OF, might seem very "beginner"]
Idea 1: Once a commit is pushed to master, the GitLab runner builds the services and then copies them over to the DO server via scp. Problem: how do you launch the services? Do you connect to the DO server via ssh from the runner and then run the start script there?
Idea 2: Register a runner on the DO server itself, so that when it pulls the code from GitLab the code is already on the DO server; it then just has to build and run the services. But this approach is not scalable and seems hacky.
I am looking for some thinking guidelines or a step-by-step approach.
One of the benefits of using Docker in a production-deployment scenario is that you don't separately scp your application code; everything you need is built into the image.
If you're using an automation system like Ansible that can directly run containers on remote hosts then this is straightforward. Your CI system builds Docker images, tags them with some unique version stamp, and pushes them to a repository (Docker Hub, something provided by your cloud provider, one you run yourself). It then triggers the automation system to tell it to start containers with the image you built. (In the case of Ansible, it runs over ssh, so this is more or less equivalent to the other ssh-based options; tools like Chef or Salt Stack require a dedicated agent on the target system.)
If you don't have an automation system like that but you do have ssh and Docker Compose installed on the target system, then you can copy only the docker-compose.yml file to the target host, and then launch it.
TAG=...                                            # unique version stamp for this build
docker push myname/myimage:$TAG                    # image built and tagged earlier in CI
scp docker-compose.yml root@remote:                # ship only the compose file
ssh root@remote env TAG=$TAG docker-compose up -d  # launch the services on the target host
A further option is to use a dedicated cluster manager like Kubernetes, and talk to its API; then the cluster will pull the updated containers itself, and you don't have to ssh anything. At the scale you're discussing this is probably much heavier weight than you need.
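For reference, here is a hedged sketch of how the compose-based approach above could run from a GitLab CI deploy job's script. The registry variables are GitLab's predefined CI variables; the deploy user, host, and SSH key setup are assumptions you would configure yourself as CI/CD variables, and only one of the three services is shown:
TAG=$CI_COMMIT_SHORT_SHA
echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
docker build -t "$CI_REGISTRY_IMAGE/backend:$TAG" ./backend   # repeat for frontend and nginx
docker push "$CI_REGISTRY_IMAGE/backend:$TAG"
scp docker-compose.yml deploy@your-droplet:
ssh deploy@your-droplet "env TAG=$TAG docker-compose up -d"
The docker-compose.yml on the server would then reference the pushed images by ${TAG} instead of building them locally.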

How to use a remote docker server from jenkins?

I have 2 servers: one Amazon Linux 2 AMI with Jenkins running and one RHEL server with Docker running.
I would like to configure Jenkins to build and deploy an application on the Docker server. If I clone my repository on the Docker server and run docker-compose build followed by docker-compose up, everything works fine.
I found some documentation about using a remote Docker server with Jenkins, but it doesn't work. The Docker API is already open.
Strictly speaking, you can connect to a remote Docker daemon by enabling the Remote API over TCP and pointing the docker client at it via the DOCKER_HOST environment variable. I would also suggest configuring TLS encryption and authentication for an additional layer of security and, if you can, restricting the API so it is only accessible from your Jenkins slaves.
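As a rough sketch, assuming the remote daemon's API has been exposed over TLS on port 2376 and client certificates have been generated for Jenkins (the hostname and paths below are placeholders), a Jenkins build step could then talk to the RHEL host like this:
export DOCKER_HOST=tcp://docker-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/var/jenkins_home/.docker   # directory containing ca.pem, cert.pem, key.pem
docker info              # sanity check that the remote daemon is reachable
docker-compose build     # docker-compose honours DOCKER_HOST as well
docker-compose up -d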
But as stated in the comment by David Maze, I don't think this is the best approach for deploying containers, as it carries some security risks that can compromise your servers.
I would suggest that if you are planning on running production workloads and you need a full pipeline for managing the lifecycle of your applications running in containers, you research Docker Swarm or Kubernetes, as they are alternatives better suited to achieving this.

How to run build on docker container in coreos?

I have installed CoreOS as my build environment. I installed the Jenkins server as a Docker container on CoreOS, and I created a freestyle project on the Jenkins server to build my project. How can I configure the build to run in Docker containers on CoreOS?
So the structure is: CoreOS is my physical machine. The Jenkins server is running in a Docker container on CoreOS, and I want to launch more Docker containers to run my application. How can I achieve this? The hardest part, I think, is launching a Docker container on CoreOS from a Jenkins job. I want to start a new Docker container every time for a build.
I'm not familiar with Jenkins, but I would suggest that you take a look at the docker-machine and docker-compose utilities.
You should be able to have Jenkins use one of those to have the host start your build container.
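As a loose sketch, assuming the Jenkins container was started with the host's Docker socket mounted (-v /var/run/docker.sock:/var/run/docker.sock) and has a docker CLI installed, a shell build step in the freestyle job could start a throwaway sibling container on the CoreOS host for each build (the node image and npm commands are only placeholders for whatever your project actually needs):
docker run --rm \
  -v "$WORKSPACE":/workspace \
  -w /workspace \
  node:16 \
  sh -c "npm ci && npm test"
Note that with a mounted socket the bind-mount path is resolved on the CoreOS host, not inside the Jenkins container, so the Jenkins workspace has to live on a path the host can also see (for example the same host directory mounted into the Jenkins container at the same path).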

Jenkins docker plugin and linked slaves

I want to be able to start multiple linked containers on demand, with a "restrict where this build runs" label like I do with the Docker plugin for a single container.
I'm currently running Jenkins inside a Docker container and have configured a slave cloud using the Docker plugin to provide a single slave container per job; this provisioning is done on demand by the plugin.
But now I have some new requirements, example:
Starting a Node.js application container linked to a Selenium Grid container for Protractor e2e testing.
Starting a container with a Node.js application linked to a Redis server in another container.
Currently, the Docker plugin does not support linked containers, so how should I approach these scenarios?
I know how to start multiple linked containers with docker-compose, but there is currently no Jenkins plugin for Compose.
I was able to get Docker-in-Docker working and thought about having a DinD job that uses Compose in a pre-setup step, but I find this quite an inelegant solution.
Is there a plugin-wise solution?
The new version of the Docker Slaves Plugin has a side container feature that solves this problem now!
