Ansible Docker Connection Error - docker

I am running Ansible 1.9, docker-py 1.1.0 and Docker 1.9.1. I have a private, insecure Docker registry running at http://registry.myserver.com:5000.
I have an ansible task to start a container using a pulled image from this remote registry:
---
- name: Start User Service Container
  docker:
    name: userService
    image: user-service
    registry: registry.myserver.com:5000
    state: running
    insecure_registry: yes
    expose:
      - 8355
However, this is currently returning the following error:
failed: [bniapp1] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
Verbose output:
<54.229.16.155>
<54.229.16.155> image=discovery-service registry=http://registry.myserver.com:5000 name=discoveryService state=running
<54.229.16.155> IdentityFile=/home/nfrstrctrescd/bni-api.pem ConnectTimeout=10 PasswordAuthentication=no KbdInteractiveAuthentication=no User=centos ControlPath=/home/nfrstrctrescd/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s
<54.229.16.155>
<54.229.16.155> IdentityFile=/home/nfrstrctrescd/bni-api.pem ConnectTimeout=10 'sudo -k && sudo -H -S -p "[sudo via ansible, key=hxhptjipltjnteknbbxkqgcdwvwshenp] password: " -u root /bin/sh -c '"'"'echo SUDO-SUCCESS-hxhptjipltjnteknbbxkqgcdwvwshenp; LANG=C DOCKER_HOST=tcp://127.0.0.1:2376 DOCKER_TLS_VERIFY=1 LC_CTYPE=C DOCKER_CERT_PATH=/opt/docker/certs /usr/bin/python /home/centos/.ansible/tmp/ansible-tmp-1460499148.45-268540710837667/docker; rm -rf /home/centos/.ansible/tmp/ansible-tmp-1460499148.45-268540710837667/ >/dev/null 2>&1'"'"'' PasswordAuthentication=no KbdInteractiveAuthentication=no User=centos ControlPath=/home/nfrstrctrescd/.ansible/cp/ansible-ssh-%h-%p-%r PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey ControlMaster=auto ControlPersist=60s
failed: [bniapp1] => {"changed": false, "failed": true}
msg: ConnectionError(ProtocolError('Connection aborted.', error(2, 'No such file or directory')),)
Note: when I run the container manually on the remote server, the image gets pulled and the container starts correctly:
docker run registry.myserver.com:5000/user-service

I got this error because my Docker daemon was not running. Adding the following Ansible tasks before the docker task fixed it for me:
# Start Docker Service
- name: Start Docker service
  service: name=docker state=started
  become: yes
  become_method: sudo

- name: Boot Docker on startup
  service: name=docker enabled=yes
  become: yes
  become_method: sudo
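For context, the `error(2, 'No such file or directory')` inside the ConnectionError is errno 2 (ENOENT): docker-py tried to open a daemon endpoint that does not exist. A minimal pre-check sketch (the helper and the deliberately nonexistent socket path below are illustrative, not from the original post):

```shell
# Sketch: check whether a Docker daemon socket exists before running the play.
# docker-py raises error(2, 'No such file or directory') when the unix socket
# it points at is missing (e.g. because the daemon never started).
check_docker_sock() {
  if [ -S "$1" ]; then
    echo "daemon socket present: $1"
  else
    echo "daemon unreachable: $1 missing"
  fi
}

# Illustration with a path that should not exist:
check_docker_sock /tmp/surely-missing-docker.sock
```

With the daemon stopped, the default `/var/run/docker.sock` fails the same check, which matches the "start the Docker service first" fix above.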

Related

Why isn't my local GitLab runner detecting the running Docker instance?

I've just installed Gitlab-runner locally on my Ubuntu machine so I can debug my pipeline without using shared runners.
I'm getting this error output:
$ docker-compose up -d --build
Couldn't connect to Docker daemon at http://docker:2375 - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
when I run docker --version I get:
Docker version 20.10.12, build e91ed57
and when I run sudo systemctl status docker I get:
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2022-01-01 20:26:25 GMT; 37min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 1404 (dockerd)
      Tasks: 20
     Memory: 112.0M
     CGroup: /system.slice/docker.service
             └─1404 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
so it is installed and running, hence the error output is confusing.
Here's my pipeline:
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

job:
  stage: build
  script:
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose
    - docker-compose up -d --build
    - docker logs testdriven_e2e:latest -f

after_script:
  - docker-compose down
I start the run by executing gitlab-runner exec docker --docker-privileged job
Any suggestions as to why the runner is complaining about Docker not running?
Update: based on suggestions from this thread https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1986, I changed the pipeline to:
image: docker:stable

variables:
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

before_script:
  - docker info
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

job:
  stage: build
  script:
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev curl
    - curl -L "https://github.com/docker/compose/releases/download/v2.2.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker-compose up -d --build
    - docker logs testdriven_e2e:latest -f

after_script:
  - docker-compose down
config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "testdriven"
  url = "https://gitlab.com/"
  token = "yU2yn4eUmFJ-xr3HzzmE"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
      insecure = false
  [runners.docker]
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    cache_dir = "cache"
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    shm_size = 0
error output:
$ docker info
Client:
Debug Mode: false
Server:
ERROR: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
errors pretty printing info
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
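One observation worth making here (my reading of the configuration above, not a confirmed diagnosis): the config.toml bind-mounts the host's /var/run/docker.sock into job containers, while the pipeline forces DOCKER_HOST=tcp://localhost:2375. Those two approaches conflict; with the socket mounted, the client should be pointed at the socket instead:

```shell
# Sketch: when the host daemon socket is bind-mounted into the job container,
# the docker client can use it directly instead of dialing a TCP dind endpoint.
# The socket path comes from the volumes line in config.toml above.
export DOCKER_HOST=unix:///var/run/docker.sock
echo "docker client endpoint: $DOCKER_HOST"
```

Unsetting DOCKER_HOST entirely also works in this setup, since the client defaults to the local unix socket.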
Strangely, what worked for me was pinning down the dind version like this:
services:
  - docker:18.09-dind
Which port is Docker using on your system? It seems to be running on a non-default port. Try adding this to your .gitlab-ci.yml file, but change 2375 to your port:
variables:
  DOCKER_HOST: "tcp://docker:2375"

Gitlab CI job with specific user

I am trying to run a GitLab CI job that uses anchore engine to scan a Docker image. The command in the script section fails with a permission-denied error. I found out the command requires root permissions, but sudo is not installed in the Docker image the GitLab runner uses, and the container only has the non-sudo user anchore.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" == "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:
name: anchore/anchore-engine:latest
entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow creating a user. I have tried running the container on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I simulate the same in a gitlab-ci job?
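Since the job only fails on writing the symlink into the root-owned /usr/local/bin, one hedged workaround (my suggestion, not from the anchore docs) is to install the helper into a user-writable directory and put that directory on PATH:

```shell
# Sketch: avoid the root-owned /usr/local/bin by using a per-user bin dir.
# A stub script stands in for the real anchore_ci_tools.py download here,
# so the sketch is self-contained; in the job you would copy the curl'd file.
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho anchore_ci_tools stub\n' > "$HOME/bin/anchore_ci_tools"
chmod +x "$HOME/bin/anchore_ci_tools"

# Prepend the user bin dir so the helper resolves without root:
export PATH="$HOME/bin:$PATH"
anchore_ci_tools
```

In the CI job this would replace the `ln -s ... /usr/local/bin/...` line with a copy into `$HOME/bin` plus the PATH export.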

Not able to run ansible-playbook using becomeUser

I'm trying to run ansible-playbook from a Jenkinsfile with the become and becomeUser parameters, but Jenkins seems to use its own user ID, "jenkins", to connect to the remote host.
Jenkinsfile
stage("Deployment"){
    steps{
        ansiColor('xterm') {
            ansiblePlaybook(
                playbook: 'myPlaybook.yaml',
                inventory: 'myHosts.ini',
                colorized: true,
                become: true,
                becomeUser: 'userID',
                extras: '-vvv'
            )
        }
    }
}
I also added become and become_user to the playbook:
---
- name: Deploy stack from a compose file
  hosts: myNodes
  become: yes
  become_user: userID
  tasks:
    - name: deploying my application
      docker_stack:
        state: present
Jenkins build log
TASK [Gathering Facts] *********************************************************
task path: /path/to/myPlaybook.yaml:2
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: None
<x.x.x.x> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/var/lib/jenkins/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/var/lib/jenkins/.ansible/cp/5493f46899 x.x.x.x '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<x.x.x.x> (255, '', 'jenkins@x.x.x.x: Permission denied (publickey,password).\r\n')
fatal: [x.x.x.x]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: jenkins@x.x.x.x: Permission denied (publickey,password).",
    "unreachable": true
}
Jenkins even ran the command with the become and become-user flags:
[xx-yy] $ ansible-playbook myplaybook.yaml -i myHosts.ini -b --become-user userID -vvv
Please advise on how to resolve this. Thanks.
I found an alternative solution by reading the logs line by line:
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: None
<x.x.x.x> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/var/lib/jenkins/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/var/lib/jenkins/.ansible/cp/5493f46899 x.x.x.x '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<x.x.x.x> (255, '', 'jenkins@x.x.x.x: Permission denied (publickey,password).\r\n')
Hence I added ansible_user for the remote host in the inventory file:
[myNode]
x.x.x.x ansible_user=myuserId
Happy Learning
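To summarize the distinction this fix relies on: ansible_user sets the account used for the SSH connection, while become_user sets the account tasks run as after privilege escalation. A self-contained sketch of the inventory change (host and user names are the question's placeholders; the group name matches the playbook's `hosts: myNodes`):

```shell
# Sketch: write an inventory that pins the SSH connection user, so Ansible
# stops defaulting to the Jenkins service account.
cat > myHosts.ini <<'EOF'
[myNodes]
x.x.x.x ansible_user=myuserId
EOF

# Count the pinned-user entries to confirm the file content:
grep -c 'ansible_user=' myHosts.ini   # prints 1
```

become / --become-user then handle the switch to userID on the remote side, after the connection succeeds as myuserId.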
The link below could be helpful in understanding become and become_user.
Medium blog link here.
And here is a snippet worth sharing:
# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root
$ ansible all -m ping -u bruce --sudo
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --sudo --sudo-user batman
# With latest version of ansible `sudo` is deprecated so use become
# as bruce, sudoing to root
$ ansible all -m ping -u bruce -b
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce -b --become-user batman

IBM Cloud Private Docker logged in as root rather than ubuntu

When I run the docker command on the ICP tutorial:
docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
I receive an error that I am logged in as root instead of the ubuntu user. What may be causing this and how can it be fixed?
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [10.2.7.26]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
[WARNING]: sftp transfer mechanism failed on [10.2.7.26]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.2.7.26]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.2.7.26]: FAILED! => {"changed": false, "module_stderr": "Connection to 10.2.7.26 closed.\r\n", "module_stdout": "Please login as the user \"ubuntu\" rather than the user \"root\".\r\n\r\n", "msg": "MODULE FAILURE", "rc": 0}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.2.7.26 : ok=1 changed=1 unreachable=0 failed=1
Edit:
The error from the verbose message:
<10.2.7.26> ESTABLISH SSH CONNECTION FOR USER: root
<10.2.7.26> SSH: EXEC ssh -C -o CheckHostIP=no -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/installer/cluster/ssh_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=60 10.2.7.26 'dd of=Please login as the user "ubuntu" rather than the user "root"./setup.py bs=65536'
<10.2.7.26> (0, 'Please login as the user "ubuntu" rather than the user "root".\n\n', '')
However, this error occurs when I use my private key generated by my cloud provider. When I follow the SSH key generation guide here: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/installing/ssh_keys.html
I get this error:
<10.2.7.26> ESTABLISH SSH CONNECTION FOR USER: root
<10.2.7.26> SSH: EXEC ssh -C -o CheckHostIP=no -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/installer/cluster/ssh_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=60 -tt 10.2.7.26 'ls /usr/bin/python &>/dev/null || (echo "Can'"'"'t find Python interpreter(/usr/bin/python) on your node" && exit 1)'
<10.2.7.26> (255, '', 'Permission denied (publickey).\r\n')
fatal: [10.2.7.26]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n",
    "unreachable": true
}
The hosts:
[master]
10.2.7.26
[worker]
10.2.7.26
[proxy]
10.2.7.26
The Config.yaml:
network_type: calico
kubelet_extra_args: ["--fail-swap-on=false"]
cluster_domain: cluster.local
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0", "--snapshot-count=10000"]
default_admin_user: admin
default_admin_password: admin
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
image-security-enforcement:
  clusterImagePolicy:
    - name: "docker.io/ibmcom/*"
      policy:
ICP installation requires root permissions. Could you try installing ICP with the command below?
sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
For more information, see the link below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/installing/install_containers_CE.html

How to spin up a docker container (or docker-compose) with cloud-init (cloud-config)

I'm trying to spin up a server that runs Docker and docker-compose with a simple "hello-world" container. My YAML file looks like this:
#cloud-config
ssh_authorized_keys:
  - ssh-rsa MY_SSH_KEY_HERE

package_update: true
package_upgrade: true

packages:
  - docker.io

runcmd:
  - [ sh, -c, "sudo apt install -y docker" ]
  - [ sh, -c, "sudo apt install -y docker-compose" ]
  - [ sh, -c, "sudo service docker start" ]

rancher:
  services:
    rancher-server:
      image: hello-world
      restart: always
      ports:
        - 80:80
      environment:
        - TEST_VAR=TEST
Docker gets installed but won't start the image:
root@test ~ # which docker
/usr/bin/docker
root@test ~ # which docker-compose
/usr/bin/docker-compose
root@test ~ # sudo docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
BTW: is it necessary to include docker.io under packages:?
In this answer, you can skip adding the default Azure user to the docker group if you are not using an Azure VM. But keep in mind that to run docker you have to add your current user to the docker group, otherwise you may get a permission-denied error.
#cloud-config
package_update: true

# Setup swap memory
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: [66, [33, 82]]
    overwrite: True

fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap

mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]

# Enable Docker's swap limit support (System restart required)
bootcmd:
  - [ sh, -c, 'sudo echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1\" >> /etc/default/grub' ]
  - [ sh, -c, 'sudo update-grub' ]

# Install latest stable docker and docker-compose
runcmd:
  - [ sh, -c, 'curl -sSL https://get.docker.com/ | sh' ]
  - [ sh, -c, 'sudo curl -L https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep "tag_name" | cut -d \" -f4)/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose' ]
  - [ sh, -c, 'sudo chmod +x /usr/local/bin/docker-compose' ]
  - [ sh, -c, 'sudo docker run -d nginx:latest' ]

# Add default azure user to docker group
system_info:
  default_user:
    groups: [docker]

# Restart the system
power_state:
  delay: "now"
  mode: reboot
  message: First reboot
  condition: True
This user-data string is working for me on DigitalOcean using ubuntu-18-04-x64 VM type. I expect it would work on any version of Ubuntu 18.04 built for a cloud virtual machine.
#cloud-config
# https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config-apt.txt
# https://docs.docker.com/install/linux/docker-ce/ubuntu/
apt:
  sources:
    download-docker-com.list:
      source: "deb https://download.docker.com/linux/ubuntu $RELEASE stable"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
        lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
        38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
        L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
        UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
        cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
        ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
        vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
        G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
        XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
        q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
        tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
        BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
        v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
        tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
        jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
        6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
        XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
        FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
        g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
        ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
        9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
        G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
        FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
        EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
        M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
        Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
        w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
        z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
        eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
        VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
        1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
        zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
        pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
        ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
        BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
        1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
        YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
        mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
        KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
        JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
        cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
        6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
        U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
        VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
        irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
        SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
        QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
        9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
        24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
        dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
        Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
        H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
        /nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
        M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
        xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
        jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
        YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
        =0YYh
        -----END PGP PUBLIC KEY BLOCK-----

# Search for package versions: $ apt-cache madison docker-ce
packages:
  - docker-ce=5:19.03.5~3-0~ubuntu-bionic
  - docker-compose=1.17.1-2
  - containerd.io=1.2.10-3

users:
  - name: user
    uid: 1000

# Test Docker installation with $ docker run -u 1000 -t -i --rm hello-world
