The Problem:
Let's say you need to be able to create containers on your host from inside a container. Why?!!! Imagine you have your "continuous everything" process automated in a Jenkins pipeline, and this process includes creating containers or services for testing.
Even though containers and virtual machines enforce isolation from the host, this is a valid scenario.
The solution:
Sorry WinTel guys, did you expect this answer to include Windows?... Well, just a clue: you can enable tcp://localhost:2375
Coming back to the production-grade answer, follow these steps:
Spin up your instance, binding "/var/run/docker.sock" from your host into your container:
docker container run --name container -v /var/run/docker.sock:/var/run/docker.sock image
docker.sock, like any file, exposes its user ID and group ID; any user whose group is "docker" is allowed to "talk" to Docker using the client, so run the following script:
#!/usr/bin/env bash

DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker

if [ -S ${DOCKER_SOCKET} ]; then
    # read the group ID that owns the host's docker.sock
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    # create (or reuse) a "docker" group inside the container with that same GID
    groupadd -f -o -r -g ${DOCKER_GID} ${DOCKER_GROUP}
    # let your user talk to the socket
    usermod -aG ${DOCKER_GROUP} youruser
fi
Don't freak out, this won't harm your system. Basically, if the socket file docker.sock exists (as it should), the script will get its group ID, create a group called docker, and give that group the same group ID as the host's docker group (confused?!?! Remember that we are inside the container from which we want to reach the host's Docker; we ran "docker container exec -it -u root container bash" in order to get into the container). Then the user called "youruser" is modified by being added to the "docker" group.
(Almost there!!!) Install the Docker client inside your container: use your favorite package manager to install the Docker client. I have the same version of client and server and it works like a charm; I suppose it could work with other versions, but come on!! mixing versions??? seriously???
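As a rough sketch of that step for a Debian/Ubuntu-based image (the repository and package names follow Docker's public install docs; adjust for your distribution):
# install prerequisites and Docker's apt repository, then only the CLI package
apt-get update && apt-get install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y docker-ce-cli
# on an Alpine-based image the equivalent is simply: apk add --no-cache docker-cli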
After following these steps, you will be able to run docker commands the usual way; just remember that it is possible to do anything!!! even shooting yourself in the foot!!!
Related
I am working on Docker, and before I execute any command on the Docker CLI, I need to switch to the root user using the command
sudo su - root
Can anyone please tell me why we need to switch to the root user to perform any operation on the Docker Engine?
You don't need to switch to root for docker CLI commands; it is common to add your user to the docker group instead:
sudo groupadd docker
sudo usermod -aG docker $USER
see: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
The reason why Docker runs as root:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
Using docker commands, you can trivially get root-level access to any part of the host filesystem. The most basic example is
docker run --rm -v /:/host busybox cat /host/etc/shadow
which will get you a file of encrypted passwords that you can crack offline at your leisure; but if I wanted to actually take over the machine I'd just write my own line into /host/etc/passwd and /host/etc/shadow creating an alternate uid-0 user with no password and go to town.
Docker doesn't really have any way to limit what docker commands you can run or what files or volumes you can mount. So if you can run any docker command at all, you have unrestricted root access to the host. Putting it behind sudo is appropriate.
The other important corollary to this is that using the dockerd -H option to make the Docker socket network-accessible is asking for your system to get remotely rooted. Google "Docker cryptojacking" for some more details and prominent real-life examples.
I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers:
An Nginx container that serves the website where users can upload their video files.
A video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility, as far as I can see, is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API, which seems a bit overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first two that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). A sketch of this option follows after these notes.
You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are using the same docker-network and are able to communicate over it.
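For option (1), a minimal sketch might look like the following; the image names, container names and script path are placeholders, and it assumes container 1 already has the docker CLI installed:
# on the host: start container 2 (the processing container)
docker run -d --name video-processor my-ffmpeg-image sleep infinity
# on the host: start container 1 with the host's Docker socket bind mounted
docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# from inside container 1, docker commands go straight to the host daemon,
# so you can exec the script in container 2:
docker exec video-processor /scripts/process-video.sh /videos/upload.mp4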
Warning:
To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a Python package especially for this use case.
Flask-Shell2HTTP is a Flask extension that converts a command line tool into a RESTful API with a mere 5 lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP
app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")
It can be called easily, like:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful microservices that can execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.
It supports file upload, callback functions, reactive programming and more. I recommend you check out the Examples.
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install docker in the container (and do docker-in-docker stuff)
You'll need to share the Unix socket, which is not a good thing if you have no idea what you're doing.
So, this leaves us two solutions:
Install ssh on your container and execute the command through ssh
Share a volume and have a process that watches for something to trigger your batch (a sketch of this follows below)
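As a hedged sketch of that second idea, container 2 could run a small watcher over the shared volume; the paths, script name and the use of inotify-tools are all just assumptions:
#!/usr/bin/env bash
# container 2: react to files that container 1 drops into the shared volume
WATCH_DIR=/shared/incoming
inotifywait -m -e close_write --format '%f' "$WATCH_DIR" | while read -r filename; do
    echo "Triggered by $filename, starting processing..."
    /scripts/process-video.sh "$WATCH_DIR/$filename"
done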
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# or start the daemon directly
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password with ssh-agent, or you can use something a bit more hacky with sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try to mount docker.sock in the container you are trying to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Alice and Bob are both members of the docker group on the same host. Alice wants to run some long-running calculations in a docker container, then copy the results to her home folder. Bob is very nosy, and Alice doesn't want him to be able to read the data that her calculation is using.
Is there anything that the system administrator can do to keep Bob out of Alice's docker containers?
Here's how I think Alice should get data in and out of her container, based on named volumes and the docker cp command, as described in this question and this one.
$ pwd
/home/alice
$ date > input1.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -v sandbox1:/data alpine echo OK
OK
$ docker cp input1.txt run1:/data/input1.txt
$ docker run --rm -v sandbox1:/data alpine sh -c "cp /data/input1.txt /data/output1.txt && date >> /data/output1.txt"
$ docker cp run1:/data/output1.txt output1.txt
$ cat output1.txt
Thu Oct 5 16:35:30 PDT 2017
Thu Oct 5 23:36:32 UTC 2017
$ docker container rm run1
run1
$ docker volume rm sandbox1
sandbox1
$
I create an input file, input1.txt and a named volume, sandbox1. Then I start a container named run1 just so I can copy files into the named volume. That container just prints an "OK" message and quits. I copy the input file, then run the main calculation. In this example, it copies the input to the output and adds a second timestamp to it.
After the calculation finishes, I copy the output file, then remove the container and the named volume.
Is there any way to stop Bob from loading his own container that mounts the named volume and shows him Alice's data? I've set up Docker to use a user namespace, so Alice and Bob don't have root access to the host, but I can't see how to make Alice and Bob use different user namespaces.
Alice and Bob have been granted virtual root access to the host by being in the docker group.
The docker group grants them access to the Docker API via a socket file. There is no facility in Docker at the moment to differentiate between users of the Docker API. The Docker daemon runs as root and by virtue of what the Docker API allows, Alice and Bob will be able to work around any barriers that you did try to put in place.
User Namespaces
The use of user namespace isolation stops users inside a container from breaking out of the container as a privileged or different user, so in effect the container process is now running as an unprivileged user.
An example would be
Alice is given ssh access to container A running in namespace_a.
Bob is given ssh access to container B in namespace_b.
Because the users are now only inside the container, they won't be able to modify each other's files on the host. If both containers mapped the same host volume, files without world read/write/execute permissions would be safe from each other's containers. As they have no control over the daemon, they can't do anything to break out.
Docker Daemon
The namespace doesn't secure the Docker daemon and API itself, which is still a privileged process. The first way around a user namespace is setting the host namespace on the command line:
docker run --privileged --userns=host busybox fdisk -l
The docker exec, docker cp and docker export commands will give someone with access to the Docker API the contents of any created containers.
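For example (with a placeholder container ID), either of these hands a container's files to anyone who can reach the API:
# copy a single file straight out of someone else's container
docker cp <container-id>:/etc/shadow ./stolen-shadow
# or dump the whole filesystem as a tar stream
docker export <container-id> | tar -tvf - | head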
Restricting Docker Access
It is possible to restrict access to the API but you can't have users with shell access in the docker group.
Allowing a limited set of docker commands via sudo or providing sudo access to scripts that hard code the docker parameters:
#!/bin/sh
docker run --userns=whom image command
For automated systems, access can be provided via an additional shim API with appropriate access controls placed in front of the Docker API, which then passes the "controlled" request on to Docker. dockerode or docker-py can easily be plugged into a REST service to interface with Docker.
I am launching a Jenkins docker container for CI work, and the host OS I am using is CoreOS. Inside the Jenkins container I have also installed the docker CLI in order to run builds in docker containers on the host system. To do that, I use the configuration below to map the Docker socket into the Jenkins container:
volumes:
- /jenkins/data:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock:rw
When I launch the container and run a docker command, I get the error below:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.29/containers/json: dial unix /var/run/docker.sock: connect: permission denied
The /var/run directory is owned by root, but my user is jenkins. How can I solve the permission issue to allow the jenkins user to use the docker command through the mapped socket?
I have tried the command below, but the container doesn't allow me to run sudo:
$ sudo usermod -a -G docker jenkins
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified
There's nothing magical about permissions in Docker: they work just like permissions outside of Docker. That is, if you want a user to have access to a file (like /var/run/docker.sock), then either that file needs to be owned by the user, or they need to be a member of the appropriate group, or the permissions on the file need to permit access to anybody.
Exposing /var/run/docker.sock to a non-root user is a little tricky, because typical solutions (just chown/chmod things from inside the container) will potentially break things on your host.
I suspect the best solution may be:
Ensure that /var/run/docker.sock on your host is group-writable (e.g., create a docker group on your host and make sure that users in that group can use Docker).
Pass the numeric group id of your docker group into the container as an environment variable.
Have an ENTRYPOINT script in your container that runs as root and that (a) creates a group with a matching numeric gid, (b) modifies the jenkins user to be a member of that group, and then (c) execs your docker CMD as the jenkins user.
So, your entrypoint script might look something like this (assuming that you have passed in a value for $DOCKER_GROUP_ID in your docker-compose.yml):
#!/bin/sh
# create a docker group inside the container matching the host's gid
groupadd -g $DOCKER_GROUP_ID docker
# let the jenkins user talk to the mounted socket
usermod -a -G docker jenkins
# drop privileges and run the image's CMD as jenkins
exec runuser -u jenkins "$@"
You would need to copy this into your image and add the appropriate ENTRYPOINT directive to your Dockerfile.
You may not have the runuser command. You can accomplish something similar using sudo or su or other similar commands.
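As a sketch of the host side (the image name is an assumption), you could derive the group id from the socket itself and pass it in at docker run time instead of hard-coding it in docker-compose.yml:
# on the host: read the gid that owns docker.sock and hand it to the container
DOCKER_GROUP_ID=$(stat -c '%g' /var/run/docker.sock)
docker run -d --name jenkins \
  -e DOCKER_GROUP_ID="$DOCKER_GROUP_ID" \
  -v /jenkins/data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-with-docker-cli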
We have offshore developers who would like to run our server locally but for security reasons, we do not want to give them the server code. So a solution is that they run a Docker container, which is a self-contained version of our server! So no complicated setup on their side! :)
The problem is that it is always possible to access the Linux shell of the Docker instance as root, thus giving access to the source code.
How is it possible to disable root access to the Docker container? Or how can we isolate our source code from root access?
You can modify your container by creating a user (foo, for example) and assigning it the right permissions. Then you can run the container with the -u foo argument on the docker run command, for example: docker run --rm -ti -u foo myCustomImage sh. This will open the sh shell with $ instead of #. Of course, in your Dockerfile you must create the foo user first, as in the sketch below.
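A minimal Dockerfile sketch of that (the base image and user details are arbitrary assumptions):
FROM ubuntu:20.04
# create an unprivileged user with no sudo rights
RUN useradd --create-home --shell /bin/bash foo
# drop privileges for everything the container runs by default
USER foo
WORKDIR /home/foo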
If you want more restrictions, for example to disable some kernel features, the seccomp security feature has been available since Docker 1.10. Check it out:
https://docs.docker.com/engine/security/seccomp/
Using this you can disable and restrict a lot of system features... an easy example is to deny the mkdir syscall. Create a JSON file like this (name it sec.json, for example):
{
"defaultAction": "SCMP_ACT_ALLOW",
"syscalls": [
{
"name": "mkdir",
"action": "SCMP_ACT_ERRNO"
}
]
}
Then run your container with: docker run --rm -ti --security-opt seccomp=/path/on/host/to/sec.json ubuntu:xenial sh. You can check inside the container that you are not able to run the mkdir command.
Hope this helps.