GitLab Runner with Docker and shell executor error: Permission denied

I installed a brand new GitLab CE 13.9.1 on an Ubuntu Server 20.04.2.0.
This is the pipeline:
image: node:latest
before_script:
  - apt-get update -qq
stages:
  - install
install:
  stage: install
  script:
    - npm install --verbose
To run it, I configure my GitLab Runner using the same procedure as with my previous GitLab CE 12.
I pull the latest GitLab Runner image:
docker pull gitlab/gitlab-runner:latest
First try:
Start the GitLab Runner container, mounting a local directory:
docker run -d \
--name gitlab-runner \
--restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
And register the runner:
docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register
When registering the runner, I pick shell as the executor.
Finally, when I push to GitLab, the pipeline shows this error:
$ apt-get update -qq
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
ERROR: Job failed: exit status 1
Second try:
Start the GitLab Runner container, mounting a named Docker volume.
Create the volume:
docker volume create gitlab-runner-config
Start GitLab Runner container
docker run -d \
--name gitlab-runner \
--restart always \
-v gitlab-runner-config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Register runner (picking shell again as executor)
docker run \
--rm -t -i \
-v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner register
Same results.
$ apt-get update -qq
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
ERROR: Job failed: exit status 1
Third try:
Granting permissions to gitlab-runner
I ended up reading "In gitlab CI the gitlab runner choose wrong executor" and https://docs.gitlab.com/runner/executors/shell.html#running-as-unprivileged-user, which suggest these solutions:
move to docker
grant the gitlab-runner user the permissions it needs to run the specified commands. gitlab-runner may then run apt-get without sudo; it will also need permissions for npm install and npm run.
grant sudo nopasswd to the gitlab-runner user. Add gitlab-runner ALL=(ALL) NOPASSWD: ALL (or similar) to /etc/sudoers on the machine where gitlab-runner is installed, and change the apt-get update lines to sudo apt-get update, which will execute them as a privileged user (root).
I need to use the shell executor.
I already did that with sudo usermod -aG docker gitlab-runner.
I also tried sudo nano /etc/sudoers, adding gitlab-runner ALL=(ALL) NOPASSWD: ALL, and using sudo apt-get update -qq in the pipeline, which results in bash: line 106: sudo: command not found
I'm pretty lost here now. Any idea will be welcome.

IMHO, using the shell executor on a runner that itself runs in Docker with the Docker socket already mounted is not a good idea. You'd be better off using the docker executor, which will take care of everything and is probably how this setup is meant to be run.
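For reference, registering the same containerized runner with the docker executor could look roughly like this (a sketch; the URL, token and default image are placeholders you would replace):
docker run --rm -t -i \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner register \
  --non-interactive \
  --url "https://<your-gitlab-host>/" \
  --registration-token "<registration-token>" \
  --executor docker \
  --docker-image node:latest \
  --description "docker-executor runner"
With the docker executor, each job runs inside the image declared in the pipeline (node:latest here), where apt-get and npm run as root, so the permission problem goes away.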
Edit
Alternatively, you can use a customized Docker image to allow using the shell executor with root permissions. First, you'll need to create a Dockerfile:
FROM gitlab/gitlab-runner:latest
# Change user to root
USER root
Then, you'll have to build the image (here, I tagged it as custom-gitlab-runner):
$ docker build -t custom-gitlab-runner .
Finally, you'll need to use this image:
docker run -d \
--name gitlab-runner \
--restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
custom-gitlab-runner:latest

I had a similar issue trying to use a locally installed gitlab-runner on Ubuntu with the shell executor (I had other issues with the docker executor not being able to communicate between stages).
$ docker build -t myapp .
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=myapp&target=&ulimits=null&version=1": dial unix /var/run/docker.sock: connect: permission denied
ERROR: Job failed: exit status 1
I then printed which user was running the docker command from within the .gitlab-ci.yml file, which turned out to be gitlab-runner:
...
build:
  script:
    - echo $USER
    - docker build -t myapp .
...
I then added gitlab-runner to the docker group using
sudo usermod -aG docker gitlab-runner
which fixed my issue. No more docker permission errors.
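One note, assuming gitlab-runner is installed as a systemd service on the host: group membership is only picked up by new sessions, so the runner usually needs a restart before the change takes effect:
# assumption: the runner was installed natively on the host as a systemd service
sudo systemctl restart gitlab-runner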

Related

Docker file not found error inside the container to create a new image

I need to create a container for which I'm able to create new images.
My first guess was to run Docker inside Docker, but I found that the right
way to do this was to use the --privileged argument so the container
has access to the Docker daemon.
For this I'm running the following command:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /home/user/container_data:/app/app -d -p 5100:5100 mcf2:latest
I'm using -v /home/user/container_data:/app/app because I'm creating the folder for the new images from
templates for Flask apps and saving them in that directory.
One of the files I'm creating from the templates is 'create_image.sh', which has the docker build statement, e.g.
'docker build -t new_container:latest .'
For that I'm running the following Python code inside the running container:
import subprocess

bash_path = 'app/classification_model/create_image.sh'
subprocess.call([bash_path], shell=True)
But I always get this error:
/bin/sh: 1: app/model/create_image.sh: docker: not found
But the file does exist: if I run ls in the container, 'app/' is in the list of folders.
I have also checked the bind directory, and
'/home/user/container_data/classification_model/create_image.sh'
does exist.
I have tried changing bash_path to
bash_path= '/app/classification_model/create_image.sh'
and
bash_path= '/app/app/classification_model/create_image.sh'
But I get the same error in all cases.
EDIT:
I have changed the Dockerfile to:
From docker:dind
FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
...
...
And run again:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /home/user/container_data:/app/app -d -p 5100:5100 mcf2:latest
I'm still getting the same error:
/bin/sh: 1: docker: not found
You are mixing up two things:
Docker in Docker
Docker in Docker with host Docker Socket
In both cases, Docker must be installed in the container; just mounting -v /var/run/docker.sock:/var/run/docker.sock does not mean that any container will be able to launch or run docker commands.
With the first option, it will start containers as child containers.
With the second option, the container will have access to the Docker socket and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
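A minimal sketch of the second option, assuming the client-only docker:cli image:
# the container only needs a Docker client; the mounted socket points it at the host's daemon
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps   # lists the host's containers, i.e. the "siblings"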
Updated:
The official Docker dind image is Alpine-based, so you can install packages using apk instead of apt:
FROM docker:dind
RUN apk add --no-cache python3 python3-dev
https://pkgs.alpinelinux.org/packages

Unable to find Jenkins config files inside docker container

I have used the Jenkins Docker image from Docker Hub (https://github.com/jenkinsci/docker):
FROM jenkins/jenkins:lts
USER root
ENV http_proxy http://bc-proxy-vip.de.pri.o2.com:8080
ENV https_proxy http://bc-proxy-vip.de.pri.o2.com:8080
RUN apt-get update
RUN apt-get install -y ldap-utils curl wget vim nano sudo
RUN adduser jenkins sudo
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
EXPOSE 8080
EXPOSE 50000
The docker build command executed successfully, and the container also started successfully.
Docker build command:
docker build --no-cache -t myjenkins .
Docker container command:
docker run --net=host --name=my_jenkins -d -p 8080:8080 -p 50000:50000 myjenkins
Then I logged in to the container via docker run -it myjenkins bash, but I'm unable to find the Jenkins config files like config.xml, jenkins.xml, etc.
I know this is an old issue, but I ran into this recently myself and found that when you run the containerized version of Jenkins, the configuration files are stored in:
/var/jenkins_home
A lot of people seem to be suggesting they're in /etc/sysconfig/jenkins for other Jenkins installs.
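Using the container name from the question, a quick way to confirm this (assuming the container is running):
docker exec -it my_jenkins ls /var/jenkins_home
# config.xml, plugins/, jobs/ etc. live in this directory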

How to install docker in docker container?

This is my Dockerfile:
FROM golang
# RUN cat /etc/*release
RUN apt-get update
RUN apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update
RUN apt-get -y install docker-ce
RUN docker run hello-world
The golang Dockerfile is official; it is based on
Debian GNU/Linux 8 (jessie).
So I wrote this Dockerfile by following the install steps from the Docker install tutorial (Debian).
But the output is
Step 8/8 : RUN docker run hello-world
---> Running in b183b8cc5d10
docker: Cannot connect to the Docker daemon at
unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
How can I solve this problem? I want to be able to create Docker containers from inside this Docker container on the host.
I had a similar problem trying to install Docker inside a Bamboo Server image. To solve it:
First, remove the line RUN docker run hello-world from your Dockerfile.
The simplest way is then to just expose the Docker socket, by bind-mounting it with the -v flag or mounting a volume using Docker Compose:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
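The Docker Compose variant of the same bind mount would look roughly like this (a sketch; the service and image names are illustrative):
# write a minimal docker-compose.yml that mounts the host's Docker socket
cat > docker-compose.yml <<'EOF'
services:
  ci:
    image: docker:cli
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
EOF
docker compose run --rm ci docker info   # the client inside talks to the host daemon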
Use Docker-in-Docker for this task. They have already solved many of the problems for you.
In your Dockerfile, add this line to install Docker:
RUN curl -fsSL https://get.docker.com | sh
After the build is done, when running your container, add a volume mapping to the host Docker socket with the -v switch, e.g.:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock my-container
Then, from within the container shell, check the connection by running:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bf420851572 my-image "bash" 8 minutes ago Up 8 minutes my-container
The easiest way is to use the official Docker-in-Docker images from https://hub.docker.com/_/docker/ with the :dind tag (which is the successor of the project Hendrikvh already mentioned).
You definitely also need to use the --privileged flag:
docker run --privileged --name yourDockerContainerNameHere -d docker:dind
With that, your Docker-in-Docker experiments should work, but be aware of the many stumbling blocks that could be in your way: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
# create container in privileged mode
sudo docker container run -it --name uob_20.04 --privileged=true <dockerhub-image> /bin/bash
# give access
sudo chmod ugo+rw /var/run/docker.sock
sudo nohup dockerd > /dev/null 2>&1 &
# check docker installation
docker images
Try starting the Docker service before executing any docker command.
Add this line
RUN service docker start
to your Dockerfile, above this line:
RUN docker run hello-world

docker inside docker container

I want to install docker inside a running docker container.
docker run -it centos:centos7
My base container is using CentOS, and I can log in to the running container using docker exec. When I try to install Docker inside it using yum install -y docker, it installs.
But somehow I can't start the Docker daemon with docker -d &; it gives me this error:
INFO[0000] Option DefaultNetwork: bridge
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error initializing bridge driver: Setup IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
Is there a way I can install Docker inside a Docker container, or build an image that already has Docker running? I have already seen these examples but none of them works for me.
The output of uname -r on the host machine:
[fedora# ~]$ uname -r
4.2.6-200.fc22.x86_64
Any help would be appreciated.
Thanks in advance
Update
Thanks to https://stackoverflow.com/a/38016704/372019 I want to show another approach.
Instead of mounting the host's docker binary, you should copy or install a container-specific release of the docker binary. Since you're only using it in client mode, you won't need to install it as a system service. You still need to mount the Docker socket into the container so that you can easily communicate with the host's Docker engine.
Assuming that you got a base image with a working Docker binary (e.g. the official docker image), the example now looks like this:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker:1.12 docker info
Without actually answering your question, I'd suggest you read Using Docker-in-Docker for your CI or testing environment? Think twice.
It explains why running Docker-in-Docker should be replaced with a setup where Docker containers run as siblings of the "outer" or "base" container. The article also links to the original https://github.com/jpetazzo/dind project, where you can find working examples of how to run Docker in Docker, in case you still want docker-in-docker.
An example of how to enable a container to access the host's Docker daemon looks like this:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
busybox:latest /usr/bin/docker info
If you are on a Mac with Docker Toolbox, the command below WON'T WORK:
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
busybox:latest /usr/bin/docker info
This is because /var/run/docker.sock is not on your OS X filesystem;
the Docker daemon is running inside the boot2docker VM, and that's where the Unix socket is.
So you have to run the container from the boot2docker VM:
$ docker-machine ssh default
$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
busybox:latest /usr/bin/docker info
$ exit
This looks like Docker-in-Docker and feels like Docker-in-Docker, but it's not Docker-in-Docker: when this container creates more containers, those containers will be created in the top-level Docker.
You need the --privileged parameter.
By default, Docker containers are “unprivileged” and cannot, for
example, run a Docker daemon inside a Docker container.
Source
Run your base image with the command docker run --privileged -it centos:centos7 bash. Then you may install and run another docker container inside that container.
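Inside that privileged container, the remaining steps look roughly like this (a sketch, assuming CentOS 7's docker package as in the question; the daemon binary name may differ depending on the package version):
# run these inside the privileged centos:centos7 container
yum install -y docker
dockerd > /var/log/dockerd.log 2>&1 &   # start the inner daemon in the background
docker run hello-world                  # this container now runs inside the outer one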
I had a similar problem in my VMs.
I solved it by changing the Docker storage driver to vfs (in the daemon.json file), as sketched below.
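The relevant daemon.json content would be something like this (a sketch; vfs is slow, but it sidesteps running overlayfs on top of overlayfs):
# write a daemon.json that selects the vfs storage driver
cat > daemon.json <<'EOF'
{
  "storage-driver": "vfs"
}
EOF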
For this to work from an image, first create a base image, in my case with CentOS 7:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
With this image built (in my case I called it local/c7-systemd), create a second image, installing Docker and copying daemon.json inside:
FROM local/c7-systemd
RUN yum install -y yum-utils
RUN yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
RUN yum install -y docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
COPY daemon.json /etc/docker/daemon.json
RUN yum install -y nano
RUN systemctl enable docker
EXPOSE 80
EXPOSE 8080
EXPOSE 8161
EXPOSE 6379
EXPOSE 8761
CMD ["/usr/sbin/init"]
enjoy!

Jenkins in docker with access to host docker

I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
Push changes in my app to a private GitLab (running in Docker). The app includes a Dockerfile and docker-compose.yml.
GitLab triggers a Jenkins build (Jenkins is also running in Docker), which does some normal build stuff (e.g. running tests).
Jenkins then needs to build a new Docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the Jenkins container has access to the host Docker, so running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for Jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
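Put together, the Jenkins container is started roughly like this (a sketch; the image name and the home volume are assumptions, not from the original setup):
# sketch: run the Jenkins image built from the Dockerfile above
docker run -d \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  my-jenkins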
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works ok, but there are two problems:
It really feels like a hack. But the only other option I've found is to use the Docker plugin for Jenkins, publish to a registry, and then have some way of letting the host know it needs to restart. That is quite a lot more moving parts, and the docker-jenkins plugin requires that the Docker host be exposed on an open port, which I don't really want to do.
The Jenkins Dockerfile includes groupadd -g 997 docker, which is needed to give the jenkins user access to docker. However, the GID (997) is the GID on the host machine, and is therefore not portable.
I'm not really sure what solution I'm looking for. I can't see any practical way to get around this approach, but it would be nice if there were a way to allow running docker commands inside the Jenkins container without having to hard-code the GID in the Dockerfile. Does anyone have any suggestions about this?
My previous answer was more generic, explaining how you can modify the GID inside the container at runtime. Now, by coincidence, one of my close colleagues asked for a Jenkins instance that can do Docker development, so I created this:
FROM bdruemen/jenkins-uid-from-volume
RUN apt-get -yqq update && apt-get -yqq install docker.io && usermod -g docker jenkins
VOLUME /var/run/docker.sock
ENTRYPOINT groupmod -g $(stat -c "%g" /var/run/docker.sock) docker && usermod -u $(stat -c "%u" /var/jenkins_home) jenkins && gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both jenkins_home and docker.sock:
docker run -d -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock <IMAGE>
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: https://blog.container-solutions.com/running-docker-in-jenkins-in-docker
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
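For the record, that boils down to something like this when building the Jenkins image (a sketch; it assumes the sudo package is already installed in the image):
# e.g. in a RUN step of the Jenkins Dockerfile: give the jenkins user passwordless sudo
echo "jenkins ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/jenkins
chmod 0440 /etc/sudoers.d/jenkins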
Please take a look at this docker file I just posted:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (host directory) with
stat -c '%g' <VOLUME-PATH>
Then the GID of the group of the container user is changed to the same value with
groupmod -g <GID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real GID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the GID, there might be other files in the container that are no longer accessible to the process, so you might need a
chgrp -R <GROUPNAME> <SOME-PATH>
before the gosu command.
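Putting those pieces together, the entrypoint logic amounts to roughly this (a sketch; the user, group and path names are illustrative):
#!/bin/sh
# adapt the container's docker group to the GID of the mounted socket, then drop root
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
groupmod -g "$SOCK_GID" docker
# fix up files the group change may have left inaccessible
chgrp -R docker /var/jenkins_home
exec gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh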
You can also change the UID; see my answer here: Changing the user's uid in a pre-build docker container (jenkins).
Maybe you want to change both to increase security.
I solved a similar problem in the following way.
Docker is installed on the host. Jenkins is deployed in a Docker container on the host. Jenkins must build and run containers with web applications on the host.
Jenkins master connects to the docker host using REST APIs. So we need to enable the remote API for our docker host.
Log in to the host and open the docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following.
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
Reload and restart docker service
sudo systemctl daemon-reload
sudo service docker restart
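You can quickly verify that the remote API is reachable (run this on the host; 4243 is the port configured above):
curl http://localhost:4243/version   # should return the Docker Engine version info as JSON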
Dockerfile for Jenkins:
FROM jenkins/jenkins:lts
USER root
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update
RUN apt-get -y --no-install-recommends install apt-transport-https \
apt-utils ca-certificates curl gnupg2 software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
RUN apt-get update && apt-get install -y docker-ce-cli docker-ce && \
apt-get clean && \
usermod -aG docker jenkins
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.6 docker-workflow:1.29 ansicolor"
Build the Jenkins Docker image:
docker build -t you-jenkins-name .
Run Jenkins
docker run --name you-jenkins-name --restart=on-failure --detach \
--network jenkins \
--env DOCKER_HOST=tcp://172.17.0.1:4243 \
--publish 8080:8080 --publish 50000:50000 \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
you-jenkins-name
Your web application has a repository, at the root of which are a Jenkinsfile and a Dockerfile.
Jenkinsfile for the web app:
pipeline {
    agent any
    environment {
        PRODUCT = 'web-app'
        HTTP_PORT = 8082
        DEVICE_CONF_HOST_PATH = '/var/web-app'
    }
    options {
        ansiColor('xterm')
        skipDefaultCheckout()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    //BRANCH_NAME = env.CHANGE_BRANCH ? env.CHANGE_BRANCH : env.BRANCH_NAME
                    deleteDir()
                    //git url: "git#<host>:<org>/${env.PRODUCT}.git", branch: BRANCH_NAME
                }
                checkout scm
            }
        }
        stage('Stop and remove old') {
            steps {
                script {
                    try {
                        sh "docker stop ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker image rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                }
            }
        }
        stage('Build') {
            steps {
                sh "docker build . -t ${env.PRODUCT}"
            }
        }
        // ④ Run the test using the built docker image
        stage('Run new') {
            steps {
                script {
                    sh """docker run \
                        --detach \
                        --name ${env.PRODUCT} \
                        --publish ${env.HTTP_PORT}:8080 \
                        --volume ${env.DEVICE_CONF_HOST_PATH}:/var/web-app \
                        ${env.PRODUCT}"""
                }
            }
        }
    }
}
