Gitlab-Runner: Permission denied on cloning from master - docker

I was looking for a method to implement a CI/CD pipeline within my projects. I decided to use Gitlab with its gitlab-runner technology. I tried to use it through docker containers but, after more than 100 attempts, I decided to install it on the machine.
I followed the official Gitlab guide step by step. Everything is working perfectly; I run the register command, fill in all the fields correctly, and go on to write the .gitlab-ci.yml:
image: docker:latest
services:
  - docker:18.09.9-dind
stages:
  - deploy
step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
As you can imagine from the yml file, when some operation is performed on master, the pipeline starts and executes a docker-compose up -d --build (the project in question is a PHP application with a SQL database deployed through a compose file).
First run:
Absolutely perfect; the pipeline starts, the build runs correctly, and the application goes online as expected.
Second and following 140 runs:
That's the nightmare. Over 140 builds failed for the same reason: when cloning the repository, the runner doesn't seem to have write permissions on its own home directory (/home/gitlab-runner/builds/...).
If I manually delete the nested folder inside builds/, the runner works, but only for one run; then the same situation occurs.
I tried to:
run chown gitlab-runner:gitlab-runner on its home directory (also as a pre_clone_script in the TOML file);
add gitlab-runner to the sudoers group;
add gitlab-runner to the docker group;
perform a series of file-permission operations, including chmod 777, chgrp with the runner's group, and more.

Also, you should not forget to stop your containers with an after_script section (a sketch is shown after the fixed file below).
But in your case, you can use GIT_STRATEGY to skip the repository checkout for your job:
variables:
  GIT_STRATEGY: none
Your yml file with this fix:
image: docker:latest
services:
  - docker:18.09.9-dind
stages:
  - deploy
step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build
  when: always
  environment: master
  variables:
    GIT_STRATEGY: none
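On the after_script point: for jobs whose containers are only needed while the job runs (a test job, for instance), a minimal sketch might look like the hypothetical job below; docker-compose down is an assumption about how such a stack would be torn down:

some-test-job:
  stage: test
  script:
    - docker-compose up -d
    - ./run-tests.sh          # hypothetical test command
  after_script:
    # after_script runs even if the main script fails, so no stale
    # containers are left behind between pipeline runs
    - docker-compose down --remove-orphans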

Related

Is it possible to run a Docker Compose command before a job executes in GitLab CI

I am new to GitLab CI; it seems GitLab CI is Docker everywhere.
I was trying to run a MariaDB container before running my tests. In GitHub Actions it is very easy: just a docker-compose up -d command before my mvn step.
When it came to GitLab CI, I was trying to use the following job to achieve the same purpose:
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work: docker-compose is not found.
You can make use of docker-dind and run the docker commands inside another Docker container.
But there is a limitation: docker-compose is not available by default. It is recommended to build a custom image on top of DIND that includes docker-compose and push it to the GitLab image registry, so that it can be used across your jobs, as sketched below.
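A minimal sketch of what the test job could then look like, assuming such a hypothetical image (named $CI_REGISTRY_IMAGE/maven-compose:latest here) bundling Maven, the Docker CLI and docker-compose has already been built and pushed:

test:
  stage: test
  # hypothetical custom image containing Maven, the Docker CLI and docker-compose
  image: $CI_REGISTRY_IMAGE/maven-compose:latest
  services:
    - docker:dind
  variables:
    # point the Docker CLI at the dind service (TLS disabled for simplicity)
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker-compose up -d
    - sleep 10
    - mvn clean verify sonar:sonar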

Best practices for adding .env-File to Docker-Image during build in Gitlab CI

I have a node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As always advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as Kubernetes clusters.
What I want to achieve is an automated build via GitLab CI for different environments (e.g. stage) depending on the commit branch (named stage as well), meaning when I push to origin/stage I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI variables and created a variable named DOTENV_STAGE of type file with the contents of my local .env-stage file.
Now my problem is: how do I get that content as an .env file inside the Docker image that is going to be built by GitLab, given that the content is not a file in my repo but a variable instead?
I tried using cp (see below, also in the before_script section) to just copy the file to an .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind
build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env but none of them worked.
So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?
Thanks
You should avoid copying the .env file into the container altogether. Rather, feed it in from outside at runtime. There's a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in the GitLab CI backend, then dump it to an .env file on the runner and feed it to the Docker Compose pipeline, as sketched below.
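A minimal sketch of that runner-side step, assuming the file-type variable DOTENV_STAGE from the question (for a file-type variable, $DOTENV_STAGE already holds the path of a temporary file containing the value); the deploy job name is just an illustration:

deploy:
  stage: deploy
  script:
    # copy the variable's temporary file into place so the env_file
    # entry in docker-compose.yml can pick it up at runtime
    - cp "$DOTENV_STAGE" .env
    - docker-compose up -d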
After some more research I stumbled upon a support-forum entry on gitlab.com which exactly describes my situation (unfortunately it has since been deleted), and it was solved by the same approach I was trying to use, namely this:
...
script:
  - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile
An alternative approach I thought about after joyarjo's answer could be to use a ConfigMap in Kubernetes, but I didn't try it yet.
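For reference, a minimal sketch of that ConfigMap alternative; the name app-env and the keys are hypothetical stand-ins for whatever the .env file actually contains:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env              # hypothetical name
data:
  # hypothetical keys; the real ones would come from the .env file
  NODE_ENV: "stage"
  API_URL: "https://example.invalid"

The Deployment could then pull these in as environment variables via envFrom with a configMapRef pointing at app-env, so nothing ends up baked into the image.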

How do I have multiple Docker images available in one stage in GitLab CI

I have the following .gitlab-ci.yml
stages:
  - test
  - build
  - art
image: golang:1.9.2
variables:
  BIN_NAME: example
  ARTIFACTS_DIR: artifacts
  GO_PROJECT: example
  GOPATH: /go
before_script:
  - mkdir -p ${GOPATH}/src/${GO_PROJECT}
  - mkdir -p ${CI_PROJECT_DIR}/${ARTIFACTS_DIR}
  - go get -u github.com/golang/dep/cmd/dep
  - cp -r ${CI_PROJECT_DIR}/* ${GOPATH}/src/${GO_PROJECT}/
  - cd ${GOPATH}/src/${GO_PROJECT}
test:
  stage: test
  script:
    # Run all tests
    - go test -run ''
build:
  stage: build
  script:
    # Compile and name the binary as `hello`
    - go build -o hello
    - pwd
    - ls -l hello
    # Execute the binary
    - ./hello
    # Move to gitlab build directory
    - mv ./hello ${CI_PROJECT_DIR}
  artifacts:
    paths:
      - ./hello
The issue is that my program is dependent on both Go and MySQL...
I am aware I can have a different Docker image for each stage, but my test stage needs both go test and MySQL.
What I have looked into:
I have learned how to create my own Docker image using docker commit, and also how to use a Dockerfile to build an image.
However, I have heard there are ways to link Docker containers together using Docker Compose, and this seems like a better method...
I have no idea how to go about this in GitLab. I know I need a compose.yml file, but I'm not sure where to put it, what needs to go in it, or whether it creates an image that I then link to from my .gitlab-ci.yml file.
Perhaps this is overkill and there is a simpler way?
I understand your tests need a MySQL server in order to work and that you are using some kind of MySQL client or driver in your Go tests.
You can use a GitLab CI service, which will be made available during your test job. GitLab CI will run a MySQL container beside your Go container, and it will be reachable from the Go container via its name. For example:
test:
  stage: test
  services:
    - mysql:5.7
  variables:
    # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
    MYSQL_DATABASE: mydb
    MYSQL_ROOT_PASSWORD: password
  script:
    # Run all tests
    - go test -run ''
This will start a MySQL container, and it will be reachable from the Go container via the hostname mysql. Note that you'll need to define variables for the MySQL startup as per the Environment variables section of the image documentation (such as the root password or the database to create).
You can also define the service globally (it will then be made available for each job in your pipeline) and use an alias so the MySQL server is reachable under another hostname, as sketched below.
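A minimal sketch of that global form, with db as an arbitrary alias:

services:
  - name: mysql:5.7
    alias: db          # reachable from every job under the hostname "db"
variables:
  MYSQL_DATABASE: mydb
  MYSQL_ROOT_PASSWORD: password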

Docker Compose as a CI pipeline

So we use GitLab CI. The issue was the pain of having to commit each time we want to test whether or not our build pipeline was configured correctly. Unfortunately, there is no easy way to test GitLab CI locally when our containers/pipeline aren't working right.
Our solution: use a docker-compose.yml as a CI pipeline runner for local testing of containerized build steps, why not, you know? Basically GitLab CI, and most others, have each section spawn a container to run a command and won't continue until the preceding steps complete, i.e. the first step must fully complete and then the next step happens.
Here is a simple .gitlab-ci.yml file we use:
stages:
  - install
  - test
cache:
  untracked: true
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
install:
  image: node:10.15.3
  stage: install
  script: npm install
test:
  image: node:10.15.3
  stage: test
  script:
    - npm run test
  dependencies:
    - install
Here is the docker-compose.yml file we converted it to:
version: "3.7"
services:
install:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- install
volumes:
- .:/home/node:Z
test:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- run
- test
volumes:
- .:/home/node:Z
depends_on:
- install
OK, now for the real issue here. The depends_on part of the compose file doesn't wait for the install container to finish; it just waits for the npm command to be running. Therefore, once the npm command is officially loaded up and running, the test container starts running and complains there are no node_modules yet. npm being in a running state does not mean the npm install command has actually finished.
Does anyone know any tricks to better control what Docker considers to be done? All the solutions I looked into were using some kind of wrapper script which watched some port on the internal Docker network to wait for a service, like a DB, to be fully turned on and ready.
When using k8s I can set up a readiness probe, which is super dope; it doesn't seem to be a feature of Docker Compose though. Am I wrong here? It would be nice to just write a command which Docker uses to determine what done means.
For now we must run each step manually and then run the next when the preceding step is complete like so:
docker-compose up install
wait ....
docker-compose up test
We really just want to say:
docker-compose up
and have all the steps complete in correct order by waiting for preceding steps.
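For what it's worth, a minimal sketch of how this ordering can be expressed declaratively, assuming a Compose release that implements the Compose Specification (Docker Compose v2), where depends_on accepts a long form with a completion condition:

services:
  # install service unchanged from the compose file above
  test:
    image: node:10.15.3
    working_dir: /home/node
    user: node
    entrypoint: npm
    command:
      - run
      - test
    volumes:
      - .:/home/node:Z
    depends_on:
      install:
        # start only after the install container has exited with code 0
        condition: service_completed_successfully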
I went through the same issue; this is a permission-related thing when you are mapping from your local machine into Docker.
volumes:
  - .:/home/node:Z
Create a file inside the container and check the permissions of that same file on your local machine; if you see that root (or anything other than your current user) is the owner, you first have to run
export DOCKER_USER="$(id -u):$(id -g)"
and change
user: node
to
user: $DOCKER_USER
PS: I'm assuming you can run docker without having to use sudo; I'm just mentioning this because that is the scenario I have.
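Applied to the install service from the question, a sketch of that change might look like this (the test service would get the same user line):

services:
  install:
    image: node:10.15.3
    working_dir: /home/node
    # run as the host UID:GID exported via DOCKER_USER, so files created in the
    # bind mount belong to the local user instead of the image's node user
    user: ${DOCKER_USER}
    entrypoint: npm
    command:
      - install
    volumes:
      - .:/home/node:Z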
This question was asked many years ago. I now use this project: https://github.com/firecow/gitlab-ci-local
It runs your GitLab pipeline locally using Docker, just as you would expect it to run.

Integrating docker with gitlab-ci - how does the docker image get built and used?

I was previously using the shell executor for my GitLab runner to build my project. So far I have set up the pipeline so that it runs whatever commands I have set in the gitlab-ci.yml file, seen below:
gitlab-ci.yml using shell runner
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk
cache:
  paths:
    - node_modules/
stages:
  - dev
  - staging
  - production
build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now, I want to switch to a Docker image. I have reconfigured the runner to use a Docker image, and I specified the image in my new gitlab-ci.yml file, seen below. I followed the GitLab CI Docker tutorial, and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk
cache:
  paths:
    - node_modules/
stages:
  - dev
  - staging
  - production
build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
With my current gitlab-ci.yml file, how does this build a docker image/does it even build one? If it does, what does that mean? Currently the pipeline passed, but I have no idea if it did in a docker image or not (am I supposed to be able to tell?).
Also, let's say the docker image was created, ran the tests, and the pipeline passed; it should push the code to a new repository (not included in yml file yet). From what I gathered, the image isn't being pushed, it's just the code, right? So what do I do with this created docker image?
How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and Dockerfile.
Do I need to surround all commands in the gitlab-ci.yml file in docker run <commands> or docker exec <commands>? Without including one of these 2 commands, it seems like it would just run on the server and not in a docker image.
I've seen people specify an image in both the gitlab-ci.yml file and the Dockerfile. I have an Angular project, and I specified an image of image: node:8.10.0. In the Dockerfile, should I specify the same image? I've seen some projects where they are completely different, and I'm wondering what the use of both images is and whether picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular steps into a Dockerfile and put Docker operations inside your .gitlab-ci.yml instead of Angular commands, like here:
stages:
  - build
  # - release
  # - deploy
.build_template: &build_definition
  stage: build
  image: docker:17.06
  services:
    - docker:17.06-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
    - docker push $CONTAINER_IMAGE
build_app_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
    DOCKERFILE: ./Dockerfile.app
build_nginx_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
    DOCKERFILE: ./Dockerfile
You can set up a few build jobs - for production, development, staging, etc.
Right next to your .gitlab-ci.yaml you can put Dockerfile and Dockerfile.app - Dockerfile.app is for building your Angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
Now with your app built, it can be served via a web server - it's your choice, and a different configuration follows from each choice - I can't even scratch the surface here. That would be implemented in a Dockerfile - we usually use Nginx in our company.
From here on it's about releasing your images and deploying them. I've only specified how to build them with Docker, as it seems this is what the question is about.
If you want to deploy your image and run it somewhere - choose a provider - AWS, Heroku, your own infrastructure - have it your way, but this is far too much to cover in a single answer, so I'll leave it for another question once you specify where you'd like to deploy your newly built images and how you would like to serve them. In our company we orchestrate things with Rancher, but there are multiple awesome and competing options on the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's "internal" registry only. In case you want to use your own registry, change the values accordingly:
# previous configs
script:
  - docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
from -u gitlab-ci-token to your registry login,
from $CI_JOB_TOKEN to your password, and
from $CI_REGISTRY to your registry address.
Those values should be stored in Gitlab's CI secret variables and referenced via env variables so that they are not saved in the repository.
Finally, your script might look like the one below in case you decide to protect these values. Refer to GitLab's official docs on how to add secret CI variables - a super easy task.
# previous configs
script:
  - docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs
