Restarting Docker daemon on host node from within Kubernetes pod

Goal: Restart Docker daemon on GKE
Issue: Cannot connect to bus
Background
While on Google Kubernetes Engine (GKE), I am attempting to restart the host node's Docker daemon in order to enable NVIDIA GPU telemetry for Kubernetes on nodes that have a GPU. I have isolated just the GPU nodes, and I am able to run commands on the host node by having a DaemonSet run an initContainer, following the "Automatically bootstrapping Kubernetes Engine nodes with DaemonSets" guide.
At runtime, however, the following pod does not let me connect to the Docker daemon:
apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: gpu-monitoring
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-accelerator
            operator: Exists
  containers:
  - command:
    - sleep
    - "86400"
    env:
    - name: ROOT_MOUNT_DIR
      value: /root
    image: docker.io/ubuntu:18.04
    imagePullPolicy: IfNotPresent
    name: node-initializer
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /root
      name: root
    - mountPath: /scripts
      name: entrypoint
    - mountPath: /run
      name: run
  volumes:
  - hostPath:
      path: /
      type: ""
    name: root
  - configMap:
      defaultMode: 484
      name: nvidia-container-toolkit-installer-entrypoint
    name: entrypoint
  - hostPath:
      path: /run
      type: ""
    name: run
The pod runs as UID 0, while the UIDs present in /run/user on the host are 1003 and 1002.
To verify connectivity and interaction with the underlying Kubernetes (k8s) node, the following is run:
root@debug:/# chroot "${ROOT_MOUNT_DIR}" ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 226124 9816 ? Ss Oct13 0:27 /sbin/init
The Issues
Both images
When attempting to interact with the underlying Kubernetes (k8s) node to restart the Docker daemon, I get the following:
root@debug:/# ls /run/dbus
system_bus_socket
root@debug:/# ROOT_MOUNT_DIR="${ROOT_MOUNT_DIR:-/root}"
root@debug:/# chroot "${ROOT_MOUNT_DIR}" systemctl status docker
Failed to connect to bus: No data available
When attempting to start dbus on the host node:
root@debug:/# export XDG_RUNTIME_DIR=/run/user/`id -u`
root@debug:/# export DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus"
root@debug:/# chroot "${ROOT_MOUNT_DIR}" /etc/init.d/dbus start
Failed to connect to bus: No data available
Image: solita/ubuntu-systemd
When running the same commands with the same k8s pod config, except inside the solita/ubuntu-systemd image, the results are:
root@debug:/# /etc/init.d/dbus start
[....] Starting dbus (via systemctl): dbus.serviceRunning in chroot, ignoring request: start
. ok
Configuration Variations Attempted
I have tried changing the following, in pretty much every combination, to no avail:
- the image, to docker.io/solita/ubuntu-systemd:18.04
- adding shareProcessNamespace: true
- adding the following mounts: /dev, /proc, /sys
- restricting /run to /run/dbus and /run/systemd

The answer turned out to be an unexpected workaround. systemctl cannot drive the host's systemd from inside the pod: systemd deliberately ignores start/stop requests when it detects it is running in a chroot, which is why the attempts above fail. So, to restart the Docker daemon, first punch a firewall hole so that pods can connect to the host node, then use gcloud compute ssh to SSH into the node and restart Docker via a remote SSH command:
# Install prerequisites and the Google Cloud SDK inside the pod
apt-get update
apt-get install -y \
  apt-transport-https \
  curl \
  gnupg \
  lsb-release \
  ssh
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
apt-get install -y google-cloud-sdk

# Discover the node's name and zone from the GCE metadata server
CLUSTER_NAME="$(curl -sS http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google")"
NODE_NAME="$(curl -sS http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')"
FULL_ZONE="$(curl -sS http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google' | awk -F "/" '{print $4}')"
MAIN_ZONE=$(echo $FULL_ZONE | sed 's/\(.*\)-.*/\1/')

# SSH into the node over its internal IP and restart Docker there
gcloud compute ssh \
  --internal-ip $NODE_NAME \
  --zone=$FULL_ZONE \
  -- "sudo systemctl restart docker"

Related

Run shell script or custom data on AKS node pool via terraform

I would like to run a shell script or custom data on an AKS node pool via a Terraform script. I previously ran a shell script via custom data on a VMSS (virtual machine scale set) through Terraform, and I would like to run the same shell script on an AKS node pool. I searched many links and approaches but couldn't find any solution for this. Is there any way, or a recommended approach? I appreciate your help; I have been trying to solve this for a month.
I got my solution via a DaemonSet and ConfigMap with a node installer.
The links below really helped me, though not through Terraform, as AKS doesn't support automating a custom script via Terraform. (See: Can I have a custom script to be executed in AKS node group?)
Reference links: https://medium.com/@patnaikshekhar/initialize-your-aks-nodes-with-daemonsets-679fa81fd20e
https://github.com/patnaikshekhar/AKSNodeInstaller
daemonset.yml
apiVersion: v1
kind: Namespace
metadata:
  name: node-installer
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: installer
  namespace: node-installer
spec:
  selector:
    matchLabels:
      job: installer
  template:
    metadata:
      labels:
        job: installer
    spec:
      hostPID: true
      restartPolicy: Always
      containers:
      - image: patnaikshekhar/node-installer:1.3
        name: installer
        securityContext:
          privileged: true
        volumeMounts:
        - name: install-script
          mountPath: /tmp
        - name: host-mount
          mountPath: /host
      volumes:
      - name: install-script
        configMap:
          name: sample-installer-config
      - name: host-mount
        hostPath:
          path: /tmp/install
sampleconfigmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-installer-config
  namespace: node-installer
data:
  install.sh: |
    #!/bin/bash
    # install newrelic-infra
    echo "license_key: #{NEW_RELIC_LICENSE_KEY}#" | sudo tee -a /etc/newrelic-infra.yml
    echo "enabled: #{NEW_RELIC_INFRA_AGENT_ENABLED}#" | sudo tee -a /etc/newrelic-infra.yml
    curl -s https://download.newrelic.com/infrastructure_agent/gpg/newrelic-infra.gpg | sudo apt-key add -
    printf "deb https://download.newrelic.com/infrastructure_agent/linux/apt bionic main" | sudo tee -a /etc/apt/sources.list.d/newrelic-infra.list
    sudo apt-get update -y
    sudo apt-get install newrelic-infra -y
    sudo systemctl status newrelic-infra
    echo "Newrelic infra agent installation is done"
    # enable log forwarding
    echo "logs:" | sudo tee -a /etc/newrelic-infra/logging.d/logs.yml
    echo "  - name: log-files-in-folder" | sudo tee -a /etc/newrelic-infra/logging.d/logs.yml
    echo "    file: /var/log/onefc/*/*.newrelic.log" | sudo tee -a /etc/newrelic-infra/logging.d/logs.yml
    echo "    max_line_kb: 256" | sudo tee -a /etc/newrelic-infra/logging.d/logs.yml
    # trigger log forwarding
    sudo newrelic-infra-ctl
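To try this out, the manifests can be applied and the rollout watched; a minimal sketch, assuming the file names used above:

kubectl apply -f sampleconfigmap.yml
kubectl apply -f daemonset.yml
kubectl -n node-installer get pods -o wide --watch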

Newrelic infra agent on AKS Node pool

I would like to install and configure the New Relic infra agent on an AKS node pool. I have explored various links but couldn't find anything helpful. Could anyone help me get started on this?
There is a Helm chart that deploys the New Relic Infrastructure agent as a DaemonSet.
I got my solution via a DaemonSet and ConfigMap with a node installer, exactly as in the answer to the previous question: the same daemonset.yml and sampleconfigmap.yml manifests and reference links (the @patnaikshekhar Medium article and the AKSNodeInstaller repo) apply here.

Docker Rootless in Docker Rootless, is it possible?

For my job, I would like to run Jenkins and Docker Rootless (with the sysbox runtime only for that container), all inside Docker Rootless.
I would like this because I need a secure environment, given that I don't inspect Jenkins pipelines.
But when I run docker rootless in docker rootless, I get this error:
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 54 [0 1000 1 1 100000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
: exit status 1
I have tried many things but failed to get it to work. Does someone have a solution for this, please?
Thanks for reading me, have a nice day!
Edit 1
Hello, I am taking the liberty of bumping this question: it is essential for the security of our environment, and my bosses remind me of it every day. Does anyone have an answer to this problem, please?
Things get a little tricky when you want to use the docker build command inside a Jenkins container.
I stumbled upon this issue when I wanted to build Docker images without being root, under the user 'jenkins' instead.
I wrote the solution in an article in which I explain in detail what is happening under the hood.
The key point is to figure out which GID the docker.sock socket is running under (it depends on the system). So here is what you've got to do:
Run the command:
$ stat /var/run/docker.sock
Output:
jenkins@wsl-ubuntu:~$ stat /var/run/docker.sock
File: /var/run/docker.sock
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 17h/23d Inode: 552 Links: 1
Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1001/ docker)
Access: 2021-03-03 10:43:05.570000000 +0200
Modify: 2021-03-03 10:43:05.570000000 +0200
Change: 2021-03-03 10:43:05.570000000 +0200
Birth: -
In this case the GID is 1001, but it can also be 999 or something else on your machine.
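If you want to capture that GID in a script rather than reading it off the stat output, one option with GNU coreutils is:

# Print only the numeric group ID of the socket (GNU stat format flag).
DOCKER_HOST_GID=$(stat -c '%g' /var/run/docker.sock)
echo "$DOCKER_HOST_GID"   # e.g. 1001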
Now, create a Dockerfile and paste the code below, replacing the DOCKER_HOST_GID build argument's default with your own value from the stat command output above:
FROM jenkins/jenkins:lts-alpine

USER root

# Replace the default below with your own docker.sock GID
# (the comment must stay on its own line; a trailing comment
# would become part of the ARG value)
ARG DOCKER_HOST_GID=1001
ARG JAVA_OPTS=""
ENV DOCKER_HOST_GID $DOCKER_HOST_GID
ENV JAVA_OPTS $JAVA_OPTS

RUN set -eux \
    && apk --no-cache update \
    && apk --no-cache upgrade --available \
    && apk --no-cache add shadow \
    && apk --no-cache add docker curl --repository http://dl-cdn.alpinelinux.org/alpine/latest-stable/community \
    && deluser --remove-home jenkins \
    && addgroup -S jenkins -g $DOCKER_HOST_GID \
    && adduser -S -G jenkins -u $DOCKER_HOST_GID jenkins \
    && usermod -aG docker jenkins \
    && apk del shadow curl

USER jenkins
WORKDIR $JENKINS_HOME
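If you prefer not to edit the Dockerfile, the same value can presumably be passed at build time instead:

# Override the default GID without touching the Dockerfile.
docker build --build-arg DOCKER_HOST_GID=1001 -t jenkins_master .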
For the sake of a working example, here is a docker-compose file:
version: '3.3'
services:
  jenkins:
    image: jenkins_master
    container_name: jenkins_master
    hostname: jenkins_master
    restart: unless-stopped
    env_file:
      - jenkins.env
    build:
      context: .
    cpus: 2
    mem_limit: 1024m
    mem_reservation: 800M
    ports:
      - 8090:8080
      - 50010:50000
      - 2375:2376
    volumes:
      - ./jenkins_data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - default
volumes:
  jenkins_data: {}
networks:
  default:
    driver: bridge
Now let's create the ENV variables (the comment goes on its own line, since env files don't support trailing comments):
cat > jenkins.env <<EOF
# Replace with your own docker.sock GID
DOCKER_HOST_GID=1001
JAVA_OPTS=-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
EOF
and lastly, run the command docker-compose up -d.
It will build the image, and run it.
Then visit http://host_machine_ip:8090, and that's all.
If you run docker inspect --format '{{ .Config.Env }}' jenkins_master you will see that the first and second variables are the ones we set.
More details can be found here: How to run rootless docker in dockerized Jenkins installation

Jenkins pipeline exception - Docker not found

I have created a GKE cluster and installed Jenkins on it. Now I am running a pipeline from a Jenkinsfile that builds a Docker image, but when the pipeline runs it throws an exception that Docker is not found.
1) Created GKE Cluster
2) Installed Jenkins
3) Added Docker hub credentials
4) Added access key for gitlab
Jenkinsfile:
stage('Build Docker Image') {
    when {
        branch 'master'
    }
    steps {
        script {
            echo 'Before docker run'
            sh 'docker --version'
            app = docker.build("sarab321/test-pipeline")
            echo 'docker run successfully'
        }
    }
}
Please see the generated agent pod spec and the exception below:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations: {}
labels:
jenkins: "slave"
jenkins/cd-jenkins-slave: "true"
name: "default-d7qdb"
spec:
containers:
- args:
- "59c323186a77b4be015362977ec64e4838001b6d77c0f372bec7cda7cf93f9b2"
- "default-d7qdb"
env:
- name: "JENKINS_SECRET"
value: "59c323186a77b4be015362977ec64e4838001b6d77c0f372bec7cda7cf93f9b2"
- name: "JENKINS_TUNNEL"
value: "cd-jenkins-agent:50000"
- name: "JENKINS_AGENT_NAME"
value: "default-d7qdb"
- name: "JENKINS_NAME"
value: "default-d7qdb"
- name: "JENKINS_URL"
value: "http://cd-jenkins.default.svc.cluster.local:8080"
image: "jenkins/jnlp-slave:3.27-1"
imagePullPolicy: "IfNotPresent"
name: "jnlp"
resources:
limits:
memory: "512Mi"
cpu: "1"
requests:
memory: "256Mi"
cpu: "500m"
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/var/run/docker.sock"
name: "volume-0"
readOnly: false
- mountPath: "/home/jenkins"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins"
nodeSelector: {}
restartPolicy: "Never"
serviceAccount: "default"
volumes:
- hostPath:
path: "/var/run/docker.sock"
name: "volume-0"
- emptyDir:
medium: ""
name: "workspace-volume"
docker version
/home/jenkins/workspace/TestPipeline_master@tmp/durable-5dd73d2b/script.sh: 1: /home/jenkins/workspace/TestPipeline_master@tmp/durable-5dd73d2b/script.sh: docker: not found
It doesn't look like docker is installed on your build agent, i.e. inside the container using the "jenkins/jnlp-slave:3.27-1" image. I have examples of how I've installed the docker CLI in the Jenkins LTS image at: https://github.com/sudo-bmitch/jenkins-docker
That image includes the following steps to make the docker integration portable:
- installs the Docker CLI
- installs gosu (needed since the entrypoint will start as root)
- configures the jenkins user to be a member of the docker group
- includes an entrypoint that adjusts the docker group's GID to match that of the /var/run/docker.sock GID (sketched below)
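The exact entrypoint lives in that repository; as a rough sketch of the GID-matching idea only (not the verbatim script), it boils down to something like:

#!/bin/sh
# Sketch: align the in-container docker group with the host socket's GID,
# then drop privileges from root to the jenkins user via gosu.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  SOCK_GID=$(stat -c '%g' "$SOCK")               # GID owning the mounted socket
  CUR_GID=$(getent group docker | cut -d: -f3)   # GID of the in-container group
  [ "$SOCK_GID" != "$CUR_GID" ] && groupmod -g "$SOCK_GID" docker
fi
exec gosu jenkins "$@"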
The actual docker CLI install is performed in the following lines:
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/debian \
       $(lsb_release -cs) \
       stable" \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
       docker-ce-cli${DOCKER_CLI_VERSION}
You can take the entrypoint.sh and Dockerfile, modify the base image (FROM) of the Dockerfile, and the original entrypoint script inside entrypoint.sh, to point to the jnlp-slave equivalents.

How to spin up a docker container (or docker-compose) with cloud-init (cloud-config)

I am trying to spin up a server that should run Docker and docker-compose with a simple "hello-world" container. My YAML file looks like this:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa MY_SSH_KEY_HERE
package_update: true
package_upgrade: true
packages:
  - docker.io
runcmd:
  - [ sh, -c, "sudo apt install -y docker" ]
  - [ sh, -c, "sudo apt install -y docker-compose" ]
  - [ sh, -c, "sudo service docker start" ]
rancher:
  services:
    rancher-server:
      image: hello-world
      restart: always
      ports:
        - 80:80
      environment:
        - TEST_VAR=TEST
Docker gets installed but won't start the image:
root@test ~ # which docker
/usr/bin/docker
root@test ~ # which docker-compose
/usr/bin/docker-compose
root@test ~ # sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
BTW: is it necessary to include the docker.io package under packages:?
In this answer, you can ignore adding the default azure user to the docker group if you are not using an Azure VM. But keep in mind that to run docker you have to add your current user to the docker group, otherwise you may get a permission-denied error.
#cloud-config
package_update: true

# Setup swap memory
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: [66, [33, 82]]
    overwrite: True

fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap

mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]

# Enable Docker's swap limit support (system restart required)
bootcmd:
  - [ sh, -c, 'sudo echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1\" >> /etc/default/grub' ]
  - [ sh, -c, 'sudo update-grub' ]

# Install latest stable docker and docker-compose
runcmd:
  - [ sh, -c, 'curl -sSL https://get.docker.com/ | sh' ]
  - [ sh, -c, 'sudo curl -L https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep "tag_name" | cut -d \" -f4)/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose' ]
  - [ sh, -c, 'sudo chmod +x /usr/local/bin/docker-compose' ]
  - [ sh, -c, 'sudo docker run -d nginx:latest' ]

# Add default azure user to docker group
system_info:
  default_user:
    groups: [docker]

# Restart the system
power_state:
  delay: "now"
  mode: reboot
  message: First reboot
  condition: True
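Before booting a VM with a file like this, it can be linted locally. On recent cloud-init releases the check looks like the line below (older releases expose the same check as cloud-init devel schema); the file name is an assumption:

cloud-init schema --config-file user-data.yml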
This user-data string is working for me on DigitalOcean using the ubuntu-18-04-x64 VM type. I expect it would work on any version of Ubuntu 18.04 built for a cloud virtual machine.
#cloud-config
# https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config-apt.txt
# https://docs.docker.com/install/linux/docker-ce/ubuntu/
apt:
  sources:
    download-docker-com.list:
      source: "deb https://download.docker.com/linux/ubuntu $RELEASE stable"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----

        mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
        lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
        38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
        L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
        UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
        cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
        ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
        vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
        G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
        XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
        q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
        tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
        BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
        v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
        tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
        jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
        6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
        XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
        FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
        g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
        ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
        9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
        G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
        FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
        EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
        M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
        Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
        w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
        z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
        eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
        VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
        1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
        zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
        pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
        ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
        BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
        1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
        YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
        mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
        KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
        JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
        cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
        6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
        U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
        VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
        irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
        SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
        QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
        9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
        24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
        dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
        Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
        H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
        /nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
        M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
        xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
        jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
        YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
        =0YYh
        -----END PGP PUBLIC KEY BLOCK-----

# Search for package versions: $ apt-cache madison docker-ce
packages:
  - docker-ce=5:19.03.5~3-0~ubuntu-bionic
  - docker-compose=1.17.1-2
  - containerd.io=1.2.10-3

users:
  - name: user
    uid: 1000

# Test Docker installation with $ docker run -u 1000 -t -i --rm hello-world
