I want to create and manage Docker containers with Ansible in an automated way, and I found a connection plugin in the Ansible GitHub repository which uses docker exec instead of SSH to run commands etc. inside the container.
I can't find any documentation about this plugin and can't quite figure out how to use it.
It's simple: set connection: docker and use container names as inventory hosts.
Example:
# docker run -d --name=mycontainer -e FOO=bar alpine:latest sleep 600
fde1a28914174c53e8f186f2b8ea312c0bda9c895fc6c956f3f1315788f0bf20
# ansible all -i 'mycontainer,' -c docker -m raw -a 'echo $FOO'
mycontainer | SUCCESS | rc=0 >>
bar
Just keep in mind that most Ansible modules require Python, but containers usually ship with only a minimal set of packages, and Python is often not among them.
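The same connection type also works from a playbook; here is a minimal sketch (the playbook file name and the raw task are illustrative assumptions, not part of the original answer; gather_facts is disabled so no Python is needed inside the container):
cat > play.yml <<'EOF'
- hosts: all
  connection: docker
  gather_facts: no
  tasks:
    # raw runs a plain command through docker exec, so it works without Python
    - name: print an environment variable from inside the container
      raw: echo $FOO
EOF
ansible-playbook -i 'mycontainer,' play.yml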
2020 TLDR: run a minimal Python container
In 2020, the above solution (running a minimal Alpine container) no longer works: Python is not installed in the image.
Building on Konstantin Suvorov's answer, to make Ansible happy, give it a slim Python container:
docker run -d --name=mycontainer python:3.8-slim-buster sleep 600
Check:
ansible all -i 'mycontainer,' -c docker -m setup
Classic solution
The solution above no longer works; Python is not discoverable by Ansible:
docker run -d --name=bogus alpine:latest sleep 600
ansible all -i 'bogus,' -c docker -m setup
[WARNING]: No python interpreters found for host bogus (tried ['/usr/bin/python',
'python3.7', 'python3.6', 'python3.5', 'python2.7', 'python2.6',
'/usr/libexec/platform-python', '/usr/bin/python3', 'python'])
To make Ansible happy, give it a slim Python container:
docker run -d --name=mycontainer python:3.8-slim-buster sleep 600
Check:
ansible all -i 'mycontainer,' -c docker -m setup
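Alternatively, if you need to stay on Alpine, one possible workaround (my own suggestion, not part of the original answer) is to bootstrap Python with the raw module first and then point Ansible at the installed interpreter:
ansible all -i 'bogus,' -c docker -m raw -a 'apk add --no-cache python3'
ansible all -i 'bogus,' -c docker -m setup -e ansible_python_interpreter=/usr/bin/python3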
Recommended Docker image
Itamar Turner-Trauring[1] recommends python:3.8-slim-buster as a base Python image.
The Alpine image, although nice and tiny, causes a lot of problems with Python!
The above image is Debian-based, small enough, and totally solid.
[1] from https://pythonspeed.com/articles/base-image-python-docker-images/
Related
I'm on the part of the tutorial where it talks about data persistence.
First, I run this command to put a random number into a text file within an ubuntu image:
docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"
I think I understand this line pretty well.
Next, the instructions ask me to start a new container (the same image) and I will see that the file is not the same:
docker run -it ubuntu ls /
However, when I run the above command, I get the following error:
/ ls: cannot access 'C:/Program Files/Git/': No such file or directory
I'm running Windows 10 using Git Bash, and this is being done through VS Code.
For now, I've gotten around this issue by re-running the exact command (docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"), but I would like to know why the docker run -it ubuntu ls / instruction failed, and what the solution is.
I managed to solve the issue, so I am posting the solution here in case people come across the same problem in the future: Git Bash rewrites absolute paths, and that behaviour needs to be disabled.
Put this into .bashrc to correct the way paths are handled:
# Workaround for Docker for Windows in Git Bash.
docker()
{
(export MSYS_NO_PATHCONV=1; "docker.exe" "$@")
}
Unfortunately, this doesn't work in scenarios where docker run is called from npm scripts, etc. Volume mapping will still break.
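As an alternative when the wrapper function can't be used (for example from npm scripts), the same variable can be set per invocation, or the leading slash can be doubled so Git Bash leaves the path alone; a quick sketch:
# disable path conversion for this single invocation
MSYS_NO_PATHCONV=1 docker run -it ubuntu ls /
# or double the leading slash so MSYS does not rewrite it
docker run -it ubuntu ls //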
See here to continue exploring the issue and to see possible workarounds.
Can I execute a local shell script within a docker container using docker run -it ?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5e3337440be5 #Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of your scripts and support tools into the image. You can create a container running any command you want, and one typical approach is to set the image's CMD to the thing the container normally does (like running a web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, you need more diagnostic tools to understand how to reproduce a situation, and you have no choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
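As a sketch of that debugging flow, reusing the web container from the example above (the /tmp path is an arbitrary choice):
# copy the script into the running container, then run it there
docker cp hello.sh web:/tmp/hello.sh
docker exec web sh /tmp/hello.sh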
You can use a bind-mount to mount a local file into the container and execute it. When you do that, however, be aware that the container process needs permission to read and execute the folder or specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v $(pwd)/hello.sh:/home/hello.sh.
This command will mount the file into your container, set the working directory to the folder where you mounted it, and start a shell:
docker run -it -v $(pwd)/hello.sh:/home/hello.sh --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run -it -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
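Note that if you capture the output non-interactively (for example from a script or CI job), the -t flag may need to be dropped, since it requires a TTY and can add carriage returns to the captured text; a possible variant:
# no TTY allocated; run the script through bash so no execute bit is required
output=$(docker run --rm -v $(pwd)/hello.sh:/home/hello.sh ubuntu:20.04 bash /home/hello.sh)
echo "$output"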
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount
I have tried to install Docker on Google Colab in the following ways:
(1)https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
(2)https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
(3)https://colab.research.google.com/drive/10OinT5ZNGtdLLQ9K399jlKgNgidxUbGP
I started the Docker service and checked its status, but it showed 'Docker is not running'. Maybe Docker cannot work on Colab.
I feel confused and want to know the reason.
Thanks
It's possible to run Docker in Colab, but with limited functionality.
There are two methods of running the Docker service: a regular one (more restrictive), and rootless mode (dockerd inside RootlessKit).
dockerd
Install by:
!apt-get -qq install docker.io
Use the following shell script:
%%shell
set -x
dockerd -b none --iptables=0 -l warn &
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker info
docker network ls
docker pull hello-world
docker pull ubuntu
# docker build -t myimage .
docker images
kill $(jobs -p)
As shown above, before each docker command you have to run the Docker service (dockerd) in the background and then kill it. Unfortunately, you have to start dockerd in every cell where you want to run your docker commands.
Notes on dockerd arguments:
-b none/--bridge none - Disables a network bridge to avoid errors.
--iptables=0 - Disables addition of iptables rules to avoid errors.
-D - Add to enable debug mode.
However, in this mode most containers will fail with errors related to the read-only file system.
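To avoid repeating the start/stop boilerplate in every cell, one option is to wrap it in a small helper script; a sketch (the file name /content/with-dockerd.sh and its contents are my own assumption, not from the linked notepads):
%%writefile /content/with-dockerd.sh
#!/usr/bin/env bash
# Start dockerd, wait for its socket, run the given command, then stop dockerd again.
set -e
dockerd -b none --iptables=0 -l warn &
for i in $(seq 5); do [ ! -S /var/run/docker.sock ] && sleep 2 || break; done
"$@"
kill $(jobs -p)
Then, for example:
!bash /content/with-dockerd.sh docker run hello-world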
Additional notes:
To disable cpuset support, run: !umount -vl /sys/fs/cgroup/cpuset.
Related issue: https://github.com/docker/for-linux/issues/1124.
Here are some notepads:
https://colab.research.google.com/drive/1Lmbkc7v7XjSWK64E3NY1cw7iJ0sF1brl
https://colab.research.google.com/drive/1RVS5EngPybRZ45PQRmz56PPdz9nWStIb (without cpuset support)
Rootless dockerd
Rootless mode allows running the Docker daemon and containers as a non-root user.
To install, use the following code:
%%shell
useradd -md /opt/docker docker
apt-get -qq install iproute2 uidmap
sudo -Hu docker SKIP_IPTABLES=1 bash < <(curl -fsSL https://get.docker.com/rootless)
To run the dockerd service, there are two methods: using a script (dockerd-rootless.sh) or running rootlesskit directly.
Here is the script which uses dockerd-rootless.sh to run a hello-world container:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
/opt/docker/bin/dockerd-rootless.sh --experimental --iptables=false --storage-driver vfs &
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker run "$@"
jobs -p
kill $(jobs -p)
To run the above script, run:
!sudo -Hu docker bash -x docker-run.sh hello-world
The above may generate the following warnings:
WARN[0000] failed to mount sysfs, falling back to read-only mount: operation not permitted
To remount some folders with write access, you can try:
!mount -vt sysfs sysfs /sys -o rw,remount
!mount -vt tmpfs tmpfs /sys/fs/cgroup -o rw,remount
[rootlesskit:child ] error: executing [[ip tuntap add name tap0 mode tap] [ip link set tap0 address 02:50:00:00:00:01]]: exit status 1
The above error is related to the dockerd-rootless.sh script, which adds extra network parameters to rootlesskit, such as:
--net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin
This has been reported at https://github.com/rootless-containers/rootlesskit/issues/181 (however, it was ignored).
To work around the above problem, we can pass our own arguments to rootlesskit, using the following file instead:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
rootlesskit --debug --disable-host-loopback --copy-up=/etc --copy-up=/run /opt/docker/bin/dockerd -b none --experimental --iptables=false --storage-driver vfs &
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker "$@"
jobs -p
kill $(jobs -p)
Then run as:
!sudo -Hu docker bash docker-run.sh run --cap-add SYS_ADMIN hello-world
Depending on your image, this may generate the following error:
process_linux.go:449: container init caused "join session keyring: create session key: operation not permitted": unknown.
This could be solved with !sysctl -w kernel.keys.maxkeys=500, but Colab doesn't allow it. Related: Error response from daemon: join session keyring: create session key: disk quota exceeded.
Notepad showing the above:
https://colab.research.google.com/drive/1oRja4v-PtY6lFMJIIF79No4s3s-vbqd4
Suggested further reading:
Finding the minimal set of privileges for a docker container.
I had the same issue as you, and apparently Docker is not supported in Google Colab according to the answers to this issue in its GitHub repository: https://github.com/googlecolab/colabtools/issues/299#issuecomment-615308778.
I know it is an old question, but this is an old answer (2020) by a member of the Google Colaboratory team:
this isn't possible, and we currently have no plans to support this.
The virtualization/isolation provided by Docker is partly available in Colab already, since each Colab session is itself isolated: you can install the required libraries, and Colab provides hardware abstraction (a free GPU that can be selected at runtime). I have used conda, and when I switched to Docker there was a distinct difference in performance: Docker never had GPU memory fragmentation, while conda (bare metal) did. I have been running single Colab sessions for training in TF2 and will soon add testing and monitoring sessions (using TensorBoard), so I can fully understand the question of whether having Docker in Colab is worthwhile. I will come back and post my feedback soon.
Here's the situation:
I have a Docker container (Jenkins). I've mounted the Docker socket into my container so that I can run docker commands inside my Jenkins container.
Manually, everything works in the container. However, when Jenkins executes the job, it doesn't "wait" for the docker exec command to run to completion.
Below is an extract from the Jenkinsfile. The short-lived printenv command runs correctly and prints the environment variables. The next command (python) just gets started, and then Jenkins moves on immediately, not waiting for completion. The Jenkins agent (slave) is running on an Ubuntu image. Running all these commands outside Jenkins works as expected.
echo "Running the app docker container in detached tty mode to keep it up"
docker run --detach --tty --name "${CONTAINER_NAME}" "${IMAGE_NAME}"
echo "Listing environment variables"
docker exec --interactive "${CONTAINER_NAME}" bash -c "printenv"
echo "Running test coverage"
docker exec --interactive "${CONTAINER_NAME}" bash -c "python -m coverage run --source . --branch -m pytest -vs"
It seems maybe related to this question.
Please can anyone explain how to get Jenkins to wait for the docker exec command to complete before proceeding to the next step.
Have considered alternatives, like the Docker Pipeline Plugin, but would much prefer to use something close to what I have above where possible.
OK, as another approach, I've tried using the Docker Pipeline plugin here.
You can mount docker.sock as a volume to orchestrate containers on your host machine, like this in your docker-compose.yml:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Depending on your setup you might need to run
chmod 666 /var/run/docker.sock
to get going in the first place.
This works on macOS as well as Linux.
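For reference, the equivalent when starting the agent container by hand rather than via Compose (the image and container names below are placeholders of my own, not from the answer):
docker run -d --name jenkins-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my/jenkins-agent-image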
Ugh. This was down to the way that I'd set up docker support on the slave container.
I'd used socat to provide a TCP server proxy. Instead, I switched that out for a plain old docker.sock volume shared between host and container.
volumes:
- /var/run/docker.sock:/var/run/docker.sock
The very first time, I had to also sort out a permissions issue by doing (inside the container):
rm -Rf ~/.docker
chmod 666 /var/run/docker.sock
After that, everything "just worked". Very painful experience.
I recently found out about Podman (https://podman.io). Having a way to use Linux fork processes instead of a daemon, and not having to run as root, got my attention.
But I'm very used to orchestrate the containers running on my machine (in production we use kubernetes) using docker-compose. And I truly like it.
So I'm trying to replace Docker. I will keep docker-compose and use Podman as an alias for docker, since Podman uses the same syntax as Docker:
alias docker=podman
Will it work? Can you suggest any other tool? I really intend to keep my docker-compose.yml file, if possible.
Yes, that is doable now; check podman-compose. This is one way of doing it; another way is to convert the docker-compose YAML file to a Kubernetes deployment using Kompose. There is a blog post from Jérôme Petazzoni (@jpetazzo): from docker-compose to kubernetes deployment.
Update 6 May 2022: Podman now supports Docker Compose v2.2 and higher (see the Podman 4.1.0 release notes).
Old answer:
Running docker-compose with Podman as a normal user (rootless)
Requirement: Podman version >= 3.2.1 (released in June 2021)
Install the executable docker-compose
curl -sL -o ~/docker-compose https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)
chmod 755 ~/docker-compose
Alternatively you could also run docker-compose in a container image (see below).
Run
systemctl --user start podman.socket
Set the environment variable DOCKER_HOST
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
Run
~/docker-compose up -d
Running docker-compose with Podman as root
Requirement: Podman version >= 3.0 (released in February 2021)
Follow the same procedure but remove the flag --user
systemctl start podman.socket
Running docker-compose in a container image
Use the container image docker.io/docker/compose to run docker-compose:
podman \
run \
--rm \
--detach \
--env DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock \
--security-opt label=disable \
--volume $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock \
--volume $(pwd):$(pwd) \
--workdir $(pwd) \
docker.io/docker/compose \
--verbose \
up -d
(the flag --verbose is optional)
The same command with short command-line options on a single line:
podman run --rm -d -e DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock --security-opt label=disable -v $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock -v $(pwd):$(pwd) -w $(pwd) docker.io/docker/compose --verbose up -d
Regarding SELinux: running Podman with SELinux is preferable from a security point of view, but I didn't get it to work on a Fedora 34 computer, so I disabled SELinux by adding the command-line option
--security-opt label=disable
Troubleshooting tips
Test the Docker REST API
A minimal check to see that the Docker REST API is working:
$ curl -H "Content-Type: application/json" \
--unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
http://localhost/_ping
OK$
Avoid short container image names
If any of your docker-compose.yaml or Dockerfile files contain a short container image name, for instance
$ grep image: docker-compose.yaml
image: mysql:8.0.19
$
$ grep FROM Dockerfile
FROM python:3.9
$
edit the files to use the whole container image name instead
$ grep image: docker-compose.yaml
image: docker.io/library/mysql:8.0.19
$
$ grep FROM Dockerfile
FROM docker.io/library/python:3.9
$
Most often, short names have been used to reference Docker Hub Official Images (a catalogue), so a good guess would be to prepend the container image name with docker.io/library/
There are currently many different container image registries, not just DockerHub (docker.io). Writing the whole container image name is thus a good practice. Podman might complain otherwise depending on how Podman is configured.
Rootless users can't bind to ports below 1024
If for instance
$ grep -A1 ports: docker-compose.yml
ports:
- 80:80
$
edit docker-compose.yaml so that the host port number >= 1024, for instance 8080
$ grep -A1 ports: docker-compose.yml
ports:
- 8080:80
$
An alternative solution is to adjust net.ipv4.ip_unprivileged_port_start with sysctl (see Shortcomings of Rootless Podman)
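For example, to let rootless containers bind ports from 80 upward (a non-persistent setting; add it to /etc/sysctl.conf or a drop-in file to keep it across reboots):
sudo sysctl net.ipv4.ip_unprivileged_port_start=80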
In case Systemd is missing
Most Linux distributions use systemd, where you would preferably start the Podman service (which provides the REST API) by starting the Podman socket
systemctl --user start podman.socket
or
systemctl start podman.socket
but in case Systemd is missing you could also start the Podman service directly
podman system service --time 0 unix:/some/path/podman.sock
Systemd gives the extra benefit that the Podman service is started on demand with Systemd socket activation and stops after some time of inactivity.
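If you also want the socket to be available automatically in future sessions, it can be enabled as well (a standard systemd pattern, not mentioned in the original answer):
systemctl --user enable --now podman.socket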
Caveat: Swarm functionality is missing
A difference from Docker is that the functionality relating to Swarm is not supported when using docker-compose with Podman.
References:
https://www.redhat.com/sysadmin/podman-docker-compose
https://github.com/containers/podman/discussions/10644#discussioncomment-857897
Ensure Podman is installed on your machine.
You can install Podman Compose in a terminal with the following command:
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
cd into the directory your docker-compose file is located in
Run podman-compose up
See the following link for a decent introduction.