Docker Rootless in Docker Rootless, is it possible? - docker

For my job, I would like to run Jenkins and Docker Rootless (with the sysbox runtime only for this container), all inside Docker Rootless.
I would like this because I need a secure environment, given that I don't inspect the Jenkins pipelines (so they have to be treated as untrusted).
But when I run Docker Rootless inside Docker Rootless, I get this error:
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 54 [0 1000 1 1 100000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
: exit status 1
I have tried many things but failed to get it to work. Would someone have a solution for this, please?
Thank you for reading, and have a nice day!
Edit 1
Hello, I am taking the liberty of bumping this question: it is essential for the security of our environment, and my bosses remind me of it every day. Would someone have an answer to this problem, please?
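(For reference, the error means that rootlesskit could not write the UID map for the nested daemon. The usual prerequisites can be checked with standard tools; a sketch, run as the user that starts the inner daemon:
$ grep "$(whoami)" /etc/subuid /etc/subgid    # needs ranges like user:100000:65536
$ ls -l "$(command -v newuidmap)" "$(command -v newgidmap)"    # need the setuid bit or CAP_SETUID/CAP_SETGID file capabilities
)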

Things get a little tricky when you want to use the docker build command inside a Jenkins container.
I stumbled upon this issue when I wanted to build docker images without being root, under the user 'jenkins' instead.
I wrote the solution in an article in which I explain in detail what is happening under the hood.
The key point is to figure out which GID the docker.sock socket is owned by (it depends on the system). So here is what you have to do:
Run the command:
$ stat /var/run/docker.sock
Output:
jenkins@wsl-ubuntu:~$ stat /var/run/docker.sock
File: /var/run/docker.sock
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 17h/23d Inode: 552 Links: 1
Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1001/ docker)
Access: 2021-03-03 10:43:05.570000000 +0200
Modify: 2021-03-03 10:43:05.570000000 +0200
Change: 2021-03-03 10:43:05.570000000 +0200
Birth: -
In this case the GID is 1001, but it can also be 999 or something else on your machine.
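If you want to script this instead of reading the full stat output, stat can print just the group ID (this format flag works with both GNU coreutils and busybox stat):
$ stat -c '%g' /var/run/docker.sock
1001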
Now create a Dockerfile and paste in the code below, replacing the ARG default with your own GID from the stat output above:
FROM jenkins/jenkins:lts-alpine
USER root
# Replace with your own docker.sock GID from the stat output
ARG DOCKER_HOST_GID=1001
ARG JAVA_OPTS=""
ENV DOCKER_HOST_GID $DOCKER_HOST_GID
ENV JAVA_OPTS $JAVA_OPTS
RUN set -eux \
    && apk --no-cache update \
    && apk --no-cache upgrade --available \
    && apk --no-cache add shadow \
    && apk --no-cache add docker curl --repository http://dl-cdn.alpinelinux.org/alpine/latest-stable/community \
    && deluser --remove-home jenkins \
    && addgroup -S jenkins -g $DOCKER_HOST_GID \
    && adduser -S -G jenkins -u $DOCKER_HOST_GID jenkins \
    && usermod -aG docker jenkins \
    && apk del shadow curl
USER jenkins
WORKDIR $JENKINS_HOME
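If you ever build the image directly rather than through compose, you can feed the GID straight from stat into the build argument, for example (the tag jenkins_master is chosen to match the compose file below):
$ docker build --build-arg DOCKER_HOST_GID=$(stat -c '%g' /var/run/docker.sock) -t jenkins_master .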
For the sake of a working example, here is a docker-compose file:
version: '3.3'
services:
  jenkins:
    image: jenkins_master
    container_name: jenkins_master
    hostname: jenkins_master
    restart: unless-stopped
    env_file:
      - jenkins.env
    build:
      context: .
    cpus: 2
    mem_limit: 1024m
    mem_reservation: 800M
    ports:
      - 8090:8080
      - 50010:50000
      - 2375:2376
    volumes:
      - ./jenkins_data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - default
volumes:
  jenkins_data: {}
networks:
  default:
    driver: bridge
Now let's create the env file:
cat > jenkins.env <<EOF
# Replace with your own docker.sock GID
DOCKER_HOST_GID=1001
JAVA_OPTS=-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
EOF
and lastly, run the command docker-compose up -d.
It will build the image and run it.
Then visit http://host_machine_ip:8090, and that's all.
If you run docker inspect --format '{{ .Config.Env }}' jenkins_master, you will see that the first two variables are the ones we set.
More details can be found here: How to run rootless docker in dockerized Jenkins installation

Related

docker-compose : Scaling containers with distinct host volume map

Here, I deployed 2 containers with the --scale flag:
docker-compose up -d --scale gitlab-runner=2
Two containers are deployed, named scalecontainer_gitlab-runner_1 and scalecontainer_gitlab-runner_2 respectively.
I want to map a different volume for each container:
/srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
Getting this error:
WARNING: The DOCKER_SCALE_NUM variable is not set. Defaulting to a blank string.
Is there any way I can map a different volume for each container?
version: "3.5"
services:
  gitlab-runner:
    image: "gitlab/gitlab-runner:latest"
    restart: unless-stopped
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - /srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
I don't think you can; there's an open feature request on this. Here I will try to describe an alternative method for getting what you want.
Try creating a symbolic link from within the container that links to the directory you want. You can determine the "number" of a container after it's created by reading the container name from the docker API and taking the final segment. To do this you have to mount the docker socket into the container, which has big security implications.
Setup
Here is a simple script to get the number of the container (Credit Tony Guo).
get-name.sh
DOCKERINFO=$(curl -s --unix-socket /run/docker.sock http://docker/containers/$HOSTNAME/json)
ID=$(python3 -c "import sys, json; print(json.loads(sys.argv[1])[\"Name\"].split(\"_\")[-1])" "$DOCKERINFO")
echo "$ID"
Then we have a simple entrypoint file which gets the container number, creates the specific config directory if it doesn't exist, and links its specific config directory to a known location (/etc/config in this example).
entrypoint.sh
#!/bin/sh
# Get the number of this container
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
# Create a config dir for this container if none exists
mkdir -p "$CONFIG_DIR"
# Create a sym link from a well known location to our individual config dir
ln -s "$CONFIG_DIR" /etc/config
exec "$#"
Next we have a Dockerfile to build our image. We need to set the entrypoint and install curl and python for it to work, and also copy in our get-name.sh script.
Dockerfile
FROM alpine
COPY entrypoint.sh /entrypoint.sh
COPY get-name.sh /usr/bin/get-name
RUN apk update && \
    apk add \
        curl \
        python3 \
    && \
    chmod +x /entrypoint.sh /usr/bin/get-name
ENTRYPOINT ["/entrypoint.sh"]
Last, a simple compose file that specifies our service. Note that the docker socket is mounted, as well as ./config which is where our different config directories go.
docker-compose.yml
version: '3'
services:
  app:
    build: .
    command: tail -f /dev/null
    volumes:
      - /run/docker.sock:/run/docker.sock:ro
      - ./config:/config
Example
# Start the stack
$ docker-compose up -d --scale app=3
Starting volume-per-scaled-container_app_1 ... done
Starting volume-per-scaled-container_app_2 ... done
Creating volume-per-scaled-container_app_3 ... done
# Check config directory on our host, 3 new directories were created.
$ ls config/
config_1 config_2 config_3
# Check the /etc/config directory in container 1, see that it links to the config_1 directory
$ docker exec volume-per-scaled-container_app_1 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_1
# Container 2
$ docker exec volume-per-scaled-container_app_2 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_2
# Container 3
$ docker exec volume-per-scaled-container_app_3 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_3
Notes
I think gitlab/gitlab-runner has its own entrypoint file, so you may need to chain them, as sketched below.
You'll need to adapt this example to your specific setup/locations.
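A rough sketch of such a chained entrypoint (the original entrypoint path is an assumption; check it first with docker inspect gitlab/gitlab-runner:latest):
#!/bin/sh
# per-container setup, as in entrypoint.sh above
NAME=$(get-name)
mkdir -p "/config/config_${NAME}"
ln -sfn "/config/config_${NAME}" /etc/gitlab-runner
# then hand off to the image's original entrypoint (path assumed)
exec /usr/bin/dumb-init /entrypoint "$@"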

Gitlab CI job with specific user

I am trying to run a Gitlab CI job of anchore engine to scan a docker image. A command in the script section fails with a permission denied error. I found out the command requires root permissions. sudo is not installed in the docker image I'm using as the gitlab runner, and only the non-sudo user anchore exists in the docker container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" == "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:
name: anchore/anchore-engine:latest
entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have also tried running the docker container on linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash, and it ran without any problem. How can I simulate the same in the gitlab-ci job?
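For what it's worth, one possible workaround for the failing symlink step (a sketch, not verified against the anchore image) is to avoid /usr/local/bin entirely and install into a user-writable directory added to PATH; since all lines of a job's script section run in the same shell, the export carries over to the later commands:
script:
  - |
    # the non-root 'anchore' user cannot write to /usr/local/bin,
    # so install the helper into a user-writable dir instead
    mkdir -p "$HOME/bin"
    curl -o "$HOME/bin/anchore_ci_tools" https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
    chmod +x "$HOME/bin/anchore_ci_tools"
    export PATH="$HOME/bin:$PATH"
  - anchore_ci_tools --setup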

How to pass argument for a configuration file in JuPyterhub's deployment?

I want to install envkey in my docker image, which requires a key-value pair. I have the key-value pair with me, but I am unable to figure out how to install it in my docker image using those arguments and then deploy the image on jupyterhub.
I tried reading other deployments of mine which use envkey. Here is how it goes:
1. I have a Makefile and I run the command sudo make dev config=aviral.cfg
2. The dev target in the Makefile is as follows:
dev:
	docker build -t $(IMAGE) -f Dockerfile.dev . && docker tag $(IMAGE) $(ALIAS)
	@echo "\nCreate docker container.."
	CONFIG=$(config) IMAGE=$(IMAGE) docker-compose -f docker-compose.yml up -d --scale test=0 --scale airflow_worker=0
	@echo "\n$(GREEN)Done.$(NO_COLOR)\n"
	@echo "Try airflow at http://localhost:8080."
	@echo "and flower at http://localhost:5555."
The docker-compose file is:
airflow_worker:
  image: ${IMAGE}:latest
  restart: always
  depends_on:
    - airflow_scheduler
  # ports:
  #   - 8793:8793
  # environment:
  #   - GOOGLE_APPLICATION_CREDENTIALS=/gcloud/cloud.json
  env_file:
    - ${CONFIG}
  command: worker
As you can see, the env_file is passed on.
I am unable to deduce how to do the same in JupyterHub.
The helm chart is here (https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz), and my config is:
proxy:
  secretToken: "yada_yada"
singleuser:
  image:
    name: yada_yada.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: yada_yada.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral@yada_yada.com
    password: yada_yada
In my config file, I pass variables as:
ENVKEY=my_personal_envkey
I expect to have my configs passed into the docker image, or perhaps I should write a proper Makefile for this. As of now, I am facing this error:
Step 19/32 : RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
---> Running in 35bc1cf0e1c8
envkey-source 1.2.9 Quick Install
Copyright (c) 2019 Envkey Inc. - MIT License
https://github.com/envkey/envkey-source
Downloading envkey-source binary for linux-amd64
Downloading tarball from https://github.com/envkey/envkey-source/releases/download/v1.2.9/envkey-source_1.2.9_linux_amd64.tar.gz
envkey-source is installed in /usr/local/bin
Installation complete. Info:
bash: line 97: 29 Segmentation fault envkey-source -h
The command '/bin/sh -c curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash' returned a non-zero code: 139
This question alone should be good enough to give you the picture, but for the sake of context, here are some related questions:
1. How do I make jupyter-hub access my private docker image repository?
2. Unable to run a lifecycle command from config.yaml while deploying jupyterhub
3. How to have file written automatically in the startup folder when a new user signs up/in on JuPyter hub?
You probably get this error because the install.sh script tries to put the envkey-source binary under the /usr/local/bin directory and then tries to run envkey-source -h, which fails. Check whether your user (if non-root) has permission to do that, and whether the /usr/local/bin directory exists in the container image.
Hope it helps!
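A quick way to test both hypotheses is to replay the failing step interactively in the base image that Dockerfile.dev starts from (the image name below is a placeholder):
$ docker run --rm -it your-base-image /bin/sh -c 'id; ls -ld /usr/local/bin; curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash'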

Cross-OS compatible way to map user in docker

Introduction
I am setting up a project where we try to use docker for everything.
It's a PHP (Symfony) + npm project. We have a working and battle-tested docker-compose.yaml (we have been using this setup for more than a year on several projects).
But to make it more friendly for developers, I came up with a bin-docker folder that, using direnv, is placed first in the user's PATH:
/.envrc:
export PATH="$(pwd)/bin-docker:$PATH"
The folder contains files that are supposed to replace the bin files with the in-docker ones:
❯ tree bin-docker
bin-docker
├── _tty.sh
├── composer
├── npm
├── php
└── php-xdebug
E.g. the php file contains:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
if [ $(docker-compose ps php | grep Up | wc -l) -gt 0 ]; then
    docker_compose_exec \
        --workdir=/src${PWD:${#PROJECT_ROOT}} \
        php php "$@"
else
    docker_compose_run \
        --entrypoint=/usr/local/bin/php \
        --workdir=/src${PWD:${#PROJECT_ROOT}} \
        php "$@"
fi
npm:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
docker_run --init \
--entrypoint=npm \
-v "$PROJECT_ROOT":"$PROJECT_ROOT" \
-w "$(pwd)" \
-u "$(id -u "$USER"):$(id -g "$USER")" \
mangoweb/mango-cli:v2.3.2 "$@"
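With direnv active, these wrappers shadow the host binaries, so everyday commands transparently run in containers; illustrative usage (assuming the wrappers above are on PATH):
$ php -v            # execs php in the running compose service, or a one-off container
$ npm install       # runs npm in the mango-cli image as the calling host user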
It works great: you can simply use Symfony's bin/console and it will "magically" run in the docker container.
The problem
The only problem, and my question, is how to properly map the host user to the container's user. Properly for all major OSes (macOS, Windows (WSL), Linux), because our developers use all of them. I will talk about the npm wrapper, because it uses a public image anyone can download.
When I do not map a user at all, on Linux the files created in the mounted volume are owned by root, and users have to chmod the files afterwards. Not ideal at all.
When I use -u "$(id -u "$USER"):$(id -g "$USER")", it breaks because the in-container user now doesn't have the rights to create the cache folder in the container; also, on macOS the standard UID is 501, which breaks everything.
What is the proper way to map the user, or is there any other better way to do any part of this setup?
Attachments:
docker-compose.yaml (shortened to omit sensitive or non-important info):
version: '2.4'
x-php-service-base: &php-service-base
  restart: on-failure
  depends_on:
    - redis
    - elasticsearch
  working_dir: /src
  volumes:
    - .:/src:cached
  environment:
    APP_ENV: dev
    SESSION_STORE_URI: tcp://redis:6379
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    environment:
      discovery.type: single-node
      xpack.security.enabled: "false"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch
  redis:
    image: redis:4.0.8-alpine
  php:
    <<: *php-service-base
    image: custom-php-image:7.2
  php-xdebug:
    <<: *php-service-base
    image: custom-php-image-with-xdebug:7.2
  nginx:
    image: custom-nginx-image
    restart: on-failure
    depends_on:
      - php
      - php-xdebug
_tty.sh (only to properly pass the tty status to docker run):
if [ -t 1 ]; then
    DC_INTERACTIVITY=""
else
    DC_INTERACTIVITY="-T"
fi
function docker_run {
    if [ -t 1 ]; then
        docker run --rm --interactive --tty=true "$@"
    else
        docker run --rm --interactive --tty=false "$@"
    fi
}
function docker_compose_run {
    docker-compose run --rm $DC_INTERACTIVITY "$@"
}
function docker_compose_exec {
    docker-compose exec $DC_INTERACTIVITY "$@"
}
This may answer your problem.
I came across a tutorial on how to set up user namespaces in Ubuntu. Note that the use case in the tutorial is using nvidia-docker and restricting permissions. In particular, Dr. Kinghorn states in his post:
The main idea of a user namespace is that a process's UID (user ID) and GID (group ID) can be different inside and outside of a container's namespace. The significant consequence of this is that a container can have its root process mapped to a non-privileged user ID on the host.
Which sounds like what you're looking for. Hope this helps.
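For concreteness, daemon-wide user-namespace remapping is enabled through /etc/docker/daemon.json; a minimal sketch (the value "default" makes Docker create a dockremap user and take its subordinate ranges from /etc/subuid and /etc/subgid):
{
  "userns-remap": "default"
}
followed by a daemon restart (e.g. sudo systemctl restart docker). Note this remaps every container on the host, which may or may not suit the per-project wrappers above.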

Docker container running Bind9 - Logs files remains empty

I have a Docker container running Bind9.
Inside the container, named is running as the bind user:
bind 1 0 0 19:23 ? 00:00:00 /usr/sbin/named -u bind -g
In my named.conf.local I have:
channel queries_log {
    file "/var/log/bind/queries.log";
    print-time yes;
    print-category yes;
    print-severity yes;
    severity info;
};
category queries { queries_log; };
After starting the container, the log file is created
-rw-r--r-- 1 bind bind 0 Nov 14 19:23 queries.log
but it always remains empty.
On the other hand, the 'queries' logs are still visible using docker logs ...:
14-Nov-2018 19:26:10.463 client @0x7f179c10ece0 ...
Using the same config without Docker works fine.
My docker-compose.yml
version: '3.6'
services:
  bind9:
    build: .
    image: bind9:1.9.11.3
    container_name: bind9
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - ./config/named.conf.options:/etc/bind/named.conf.options
      - ./config/named.conf.local:/etc/bind/named.conf.local
My Dockerfile
FROM ubuntu:18.04
ENV BIND_USER=bind \
    BIND_VERSION=1:9.11.3
RUN apt-get update -qq \
    && DEBIAN_FRONTEND=noninteractive apt-get --no-install-recommends install -y \
        bind9=${BIND_VERSION}* \
        bind9-host=${BIND_VERSION}* \
        dnsutils \
    && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["/usr/sbin/named"]
From the named manual:
-f
    Run the server in the foreground (i.e. do not daemonize).
-g
    Run the server in the foreground and force all logging to stderr.
Try to use -f instead of -g.
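Concretely, that means adjusting whatever starts named in the image (presumably entrypoint.sh, whose content isn't shown here, so this is a sketch) so the channel-based file logging stays in effect:
#!/bin/sh
set -e
# make sure the log target from named.conf.local exists and is writable
mkdir -p /var/log/bind
chown bind:bind /var/log/bind
# -f keeps named in the foreground (required in docker) without
# forcing all logging to stderr the way -g does
exec /usr/sbin/named -u bind -f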
