Enable Docker for GitLab CI Community Edition

I am having difficulties enabling Docker for a build job. This is how the GitLab CI config file looks:
image: docker:latest

services:
  - docker:dind

stages:
  - build

build:
  image: java:8
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
And here is the output from the job:
gitlab-ci-multi-runner 1.3.2 (0323456)
Using Docker executor with image java:8 ...
Pulling docker image docker:dind ...
Starting service docker:dind ...
Waiting for services to be up and running...
Pulling docker image java:8 ...
Running on runner-30dcea4b-project-1408237-concurrent-0 via runner-30dcea4b-machine-1470340415-c2bbfc45-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/.../...'...
Checking out 9ba87ff0 as master...
$ docker info
/bin/bash: line 42: docker: command not found
ERROR: Build failed: exit code 1
Any clues why docker is not found?

After a few days of struggling, I came up with the following setup:
image: gitlab/dind

stages:
  - test
  - build

before_script:
  - echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections
  - apt-get update
  - apt-get install -y curl
  - apt-get install -y software-properties-common python-software-properties
  - add-apt-repository -y ppa:webupd8team/java
  - apt-get update
  - apt-get install -y oracle-java8-installer
  - rm -rf /var/lib/apt/lists/*
  - rm -rf /var/cache/oracle-jdk8-installer
  - apt-get update -yqq
  - apt-get install apt-transport-https -yqq
  - echo "deb http://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
  - apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
  - apt-get update -yqq
  - apt-get install sbt -yqq
  - sbt sbt-version

test:
  stage: test
  script:
    - sbt scalastyle && sbt test:scalastyle
    - sbt clean coverage test coverageReport

build:
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
It has Docker (note the gitlab/dind image), Java and sbt. Now I can push to the GitLab registry from the sbt docker plugin.

The docker info command is running inside the java:8-based container, which does not have docker installed or available in it.
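In other words, the job-level image: java:8 overrides the top-level image: docker:latest, so the script runs in a container with no docker client. A minimal sketch of the distinction (docker-only steps; the sbt step still needs an image shipping both Java and Docker, which is what the gitlab/dind setup above provides):

image: docker:latest

services:
  - docker:dind

build:
  # No job-level image override: this job runs in docker:latest,
  # where the docker client exists and can reach the dind service.
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...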

Related

ci/cd "docker" command not found in image built with docker installed

I have this .gitlab-ci.yml file to automate the Docker image build; basically I'm using the one from the template:
docker-build:
  image: my_image_build_with_docker_inside_inprivate_repo
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
  # $CI_REGISTRY_IMAGE = my_image_build_with_docker_inside_inprivate_repo
When I run it I get this error:
$ docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGIST
/bin/bash: line 132: docker: command not found
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
With RUN apt-get install -y docker-compose, I get everything needed for Docker to run into the Debian image. But I still get the docker command not found error above. What other steps are needed to run the Docker daemon from a custom image?
Update: Pushing images with Docker installed is not allowed from the private repository either. It looks like I have to use something called kaniko. Any good resources for this?
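Regarding kaniko: GitLab's documentation describes a daemonless build job along these lines. A sketch (the destination tag is illustrative; the debug image and empty entrypoint are the documented kaniko conventions):

build-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Write registry credentials where kaniko expects them
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build and push without any Docker daemon
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"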
You can try installing docker in the Dockerfile used to build your custom image.
You can follow the steps defined in the official docs https://docs.docker.com/engine/install/debian/ which would look something like this in your Dockerfile:
RUN apt-get update && apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
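Note that this only puts the docker client into the image; the job still needs a reachable daemon, which is what the docker:dind service provides. A sketch of the wiring, assuming your runner permits dind on the unencrypted port (the variable values follow GitLab's documented dind setup):

docker-build:
  image: my_image_build_with_docker_inside_inprivate_repo
  stage: build
  services:
    - docker:dind
  variables:
    # Point the client at the daemon running in the dind service container
    DOCKER_HOST: tcp://docker:2375
    # Disable TLS for the unencrypted port
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker info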

In GitLab CI, is there a way to run before_script only after a rule is matched?

In GitLab CI I have:
before_script:
  - apt update && apt upgrade -y
  - apt install -y
and in my job I added rules:
merge_request:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "master"
What's happening is that the job is triggered in its stage and the before_script runs; then it gets to the rules and sees that there's nothing to do.
That slows down the pipeline.
Is there a way to evaluate the rules before the before_script runs?
Thank you
Found the answer here
.before_script_template:
  before_script:
    - apt-get update
    - apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get -y install docker-ce
    - curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

test_integration:
  extends: .before_script_template
  stage: test
  script:
    - docker-compose -f CI/backend-service/docker-compose.yml up -d
    - npm install
    - npm run-script integration-test

build:
  extends: .before_script_template
  stage: build
  script:
    - npm install
    - export VERSION=`git describe --tags --always`
    - docker build -t $CI_REGISTRY_IMAGE:$VERSION .
    - docker push $CI_REGISTRY_IMAGE
etc
I would rather put the before_script commands in a separate stage and keep the rules on the first stage.
That way the first stage triggers only when the rules match, it can run the commands from the former before_script, and the next stages can go on with your script. For example:
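A minimal sketch of that layout (the stage and job names are illustrative; the rule is copied from the question):

stages:
  - prepare
  - test

prepare:
  stage: prepare
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
  script:
    # Formerly the before_script
    - apt update && apt upgrade -y

merge_request:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
  script:
    - echo "actual test commands run here"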

Is there a way to run sbt-native-packager within Gitlab shared runner?

I've been trying to build Docker images in a GitLab shared runner. I build my application with the hseeberger/scala-sbt:11.0.6_1.3.10_2.11.12 image, and locally I build the Docker image with sbt-native-packager, which made me think that I need to use the DinD service.
I'm currently having an issue where sbt-native-packager cannot locate the docker executable and fails to build the image. What am I missing here?
My script is as follows:
docker:
  stage: deploy
  image: "hseeberger/scala-sbt:11.0.6_1.3.10_2.11.12"
  services:
    - name: docker:dind
  script:
    - sbt docker:publishLocal
    - docker push registry.gitlab.com/groupName/moduleName
The following actually did the trick for me. It is quite heavy to install Docker in the runner every time, but that's the only way I could make it work.
docker:image:
  stage: release
  image: "hseeberger/scala-sbt:11.0.6_1.3.10_2.11.12"
  before_script:
    - apt-get update
    - apt-get install sudo
    - apt-get install apt-transport-https ca-certificates curl software-properties-common -y
    - curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get install docker-ce -y
    - sudo service docker start
    - docker login <to your account>
  script:
    - sbt docker:publishLocal
    - docker tag module:latest registry.gitlab.com/moduleGroup/module:0.1-SNAPSHOT
    - docker push registry.gitlab.com/moduleGroup/module
I've built and published Docker images that contain sbt, docker and git, to simplify this common task. You can use them from here;
just use one of the built images, for example:
hnaderi/sbt-ci-image:openjdk-11.0.16_1.8.0_3.2.0_20.10-cli
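A sketch of how such an image could be used in a release job (the registry path and job name are carried over from the answer above; the dind service and DOCKER_HOST wiring follow the usual shared-runner pattern):

docker:image:
  stage: release
  image: hnaderi/sbt-ci-image:openjdk-11.0.16_1.8.0_3.2.0_20.10-cli
  services:
    - docker:dind
  variables:
    # Talk to the daemon in the dind service container
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - sbt docker:publishLocal
    - docker push registry.gitlab.com/moduleGroup/module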

How to push a docker image to Google Cloud Registry using GitLab ci pipeline

I am trying to push a Docker image to Google Cloud Registry via the GitLab CI pipeline.
The image builds, but when it's time to push to the registry I get the following error:
denied: Token exchange failed for project 'nice-column-247216'. Caller
does not have permission 'storage.buckets.get'. To configure
permissions, follow instructions at:
https://cloud.google.com/container-registry/docs/access-control
.gitlab-ci.yml
stages:
  - security
  - quality
  - test
  - build
  - deploy

image: node:10.16.0

services:
  - mongo
  - docker:dind

.before_script_template: &before_docker_script
  before_script:
    - apt-get update
    - apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get -y install docker-ce
    - curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - echo "$SERVICE_ACCOUNT_KEY" > key.json
    - docker login -u _json_key --password-stdin https://eu.gcr.io < key.json

build:
  stage: build
  <<: *before_docker_script
  variables:
    DOCKER_IMAGE_TAG: 'eu.gcr.io/nice-column-247216/my-application'
  script:
    - docker build --cache-from "${DOCKER_IMAGE_TAG}" -t "${DOCKER_IMAGE_TAG}" .
    - docker push ${DOCKER_IMAGE_TAG}
As you can see, I am logging in to Docker via the JSON key. The permissions this token has include both Storage Admin and Storage Object Viewer.

Run docker in docker on a dedicated server?

I want to run Docker on a dedicated server from a service provider. It is not possible to install Docker on this server; Apache, Git and a lot more are already installed. So I am trying to run Docker in a container: I want to pull a Docker image from the GitLab registry and run it on a subdomain. I wrote a .gitlab-ci.yml, but I get an error message.
I found this answer:
You can't (*) run Docker inside Docker containers or images. You can't (*) start background services inside a Dockerfile. As you say, commands like systemctl and service don't (*) work inside Docker anywhere. And in any case you can't use any host-system resources, including the host's Docker socket, from anywhere in a Dockerfile.
How do I solve this problem?
$ sudo docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
ERROR: Job failed: exit code 1
.gitlab-ci.yml
image: ubuntu:latest
before_script:
- apt-get update
- apt-get install -y apt-transport-https ca-certificates curl software-properties-common
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add
- apt-key fingerprint 0EBFCD88
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
stages:
- test
test:
stage: test
script:
- apt-get update
- apt-cache search docker-ce
- apt-get install -y docker-ce
- docker run -d hello-world
This .gitlab-ci.yml works for me.
image: ubuntu:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375/

before_script:
  - apt-get update
  - apt-get install -y apt-transport-https ca-certificates curl software-properties-common
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - apt-key fingerprint 0EBFCD88
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update

stages:
  - test

test:
  stage: test
  script:
    - apt-get install -y docker-ce
    - docker info
    - docker pull ...
    - docker run -p 8000:8000 -d --name ...
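What makes this work is the docker:dind service together with DOCKER_HOST: the client inside the job container talks to the daemon running in the service container. If nothing in the job needs Ubuntu specifically, a lighter variant (a sketch under that assumption) uses the official client image and skips the apt install entirely:

image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  # The client connects to the daemon in the dind service container
  DOCKER_HOST: tcp://docker:2375/

stages:
  - test

test:
  stage: test
  script:
    - docker info
    - docker run -d hello-world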
The answer that you found is a little old. There are options to run systemd in a container, and it is also possible to run a systemctl-replacement script.
However, I am not sure what application you really want to install.
