Is there a way to authenticate the host OS users from a Docker container?
Bind-mounting the passwd, shadow and pam.d files makes it work.
For example:
-v /etc/pam.d:/etc/pam.d
-v /etc/passwd:/etc/passwd
-v /etc/shadow:/etc/shadow
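For reference, here is a minimal sketch of the bind-mount approach as a single docker run command; the image name and the read-only flags are illustrative assumptions, not part of the original question:
# hypothetical: a container that can read the host's account database
docker run --rm -it \
  -v /etc/pam.d:/etc/pam.d:ro \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/shadow:/etc/shadow:ro \
  ubuntu bash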
But is there any other feature or mechanism in Docker that makes this possible without bind mounting?
Anyone who can run Docker commands effectively has root on the host, so authenticating against the host's user database from inside a container adds no real security. As a corollary, if you need to make decisions based on the calling host user, you almost certainly don't want your tool packaged in a Docker image.
Put another way: if I can use docker run -v to bind-mount the host's /etc/shadow into a container for authentication purposes, then I can also docker run -u root -v /:/host ubuntu sh and make whatever changes I want to /host/etc/passwd, steal the root password hash from /host/etc/shadow and crack it, add myself to /host/etc/sudoers, and so on.
I am working with Docker, and before I execute any command on the Docker CLI, I need to switch to the root user using the command
sudo su - root
Can anyone please tell me why we need to switch to the root user to perform any operation on the Docker Engine?
You don't need to switch to root for Docker CLI commands; it is common to add your user to the docker group instead:
sudo groupadd docker
sudo usermod -aG docker $USER
See: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
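After adding yourself to the group, the linked page suggests logging out and back in (or activating the change in the current shell) and then verifying that docker works without sudo; roughly:
newgrp docker            # activate the new group membership in this shell
docker run hello-world   # should now work without sudo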
The reason why the Docker daemon runs as root, from the same page:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
Using docker commands, you can trivially get root-level access to any part of the host filesystem. The most basic example is
docker run --rm -v /:/host busybox cat /host/etc/shadow
which will get you a file of password hashes that you can crack offline at your leisure; but if I wanted to actually take over the machine, I'd just write my own lines into /host/etc/passwd and /host/etc/shadow creating an alternate uid-0 user with no password, and go to town.
Docker doesn't really have any way to limit what docker commands you can run or what files or volumes you can mount. So if you can run any docker command at all, you have unrestricted root access to the host. Putting it behind sudo is appropriate.
The other important corollary to this is that using the dockerd -H option to make the Docker socket network-accessible is asking for your system to get remotely rooted. Google "Docker cryptojacking" for some more details and prominent real-life examples.
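For reference, this is roughly the kind of daemon configuration that warning is about; exposing the API over plain TCP without TLS means anyone who can reach the port owns the host:
# do NOT do this on a machine you care about
dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock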
I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers:
An Nginx container that serves the website where users can upload their video files.
A video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility, as far as I can see, is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API, which seems a bit of overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first two that come to mind are:
In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). A sketch of this appears a little further down.
You could install SSHD in container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
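Going back to option (1), here is a rough sketch; the image names, container names and script path below are placeholders, not anything from the question:
# start container 2 (the processing container) under a known name
docker run -d --name worker my-processing-image sleep infinity
# start container 1 with the host's Docker socket mounted (it needs the docker CLI installed)
docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# then, from a shell inside container 1:
docker exec worker bash /scripts/process.sh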
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are on the same Docker network and are able to communicate over it.
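For example, putting both containers on a user-defined network lets them reach each other by container name (the names here are placeholders):
docker network create app-net
docker run -d --name web --network app-net my-nginx-image
docker run -d --name worker --network app-net my-processing-image
# "worker" now resolves as a hostname from inside "web", and vice versa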
Warning:
To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It takes a secure socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
I wrote a Python package especially for this use case.
Flask-Shell2HTTP is a Flask extension to convert a command-line tool into a RESTful API with a mere 5 lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP
app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")
can be called easily like,
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful micro-services that can execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the results.
It supports file uploads, callback functions, reactive programming and more. I recommend you check out the Examples.
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install Docker in the container (and do Docker-in-Docker stuff)
You'll need to share the Unix socket, which is not a good thing if you don't know exactly what you're doing.
So, this leaves us with two solutions:
Install ssh in your container and execute the command through ssh
Share a volume and have a process that watches for something to trigger your batch (see the sketch below)
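Here is a minimal sketch of the shared-volume idea, assuming both containers mount the same volume at /shared; the trigger file and script path are placeholders:
# inside container 2: poll the shared volume for a trigger file
while true; do
  if [ -f /shared/run.trigger ]; then
    rm /shared/run.trigger
    /app/myscript.sh
  fi
  sleep 2
done
# inside container 1: kick off the batch
touch /shared/run.trigger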
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password with ssh-agent, or use something a bit more hacky with sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try mounting docker.sock into the container you want to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and saw other people also struggling with getting the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project.
This seems like a significant gap in Docker's standard use case that users need better info on how to debug:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then was why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what it expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
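The same check can also be done non-interactively with docker exec, without opening a shell:
sudo docker exec odoo ls -dn /var/lib/odoo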
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying the Odoo docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
The Problem:
Let's say you need to be able to create containers on your host from inside a container. Why?! Imagine you have your "continuous everything" process automated in a Jenkins pipeline, and this process includes the creation of containers or services for testing.
Even though containers and virtual machines enforce isolation from the host, this is a valid scenario.
The solution:
Sorry, Wintel folks, did you expect this answer to cover Windows? Well, just a clue: you can enable tcp://localhost:2375.
Coming back to a production-grade answer, follow these steps:
Spin up your instance binding "/var/run/docker.sock" from your host to your container:
docker container run --name container -v /var/run/docker.sock:/var/run/docker.sock image
docker.sock, like any file, exposes its user id and group id; any user whose groups include "docker" is allowed to "talk" to Docker using the client, so run the following script:
#!/usr/bin/env bash
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}
    usermod -aG ${DOCKER_GROUP} youruser
fi
Don't freak out, this won't harm your system. Basically, if the socket docker.sock exists (as it should), the script gets its group id, creates a group called docker inside the container, and gives it the same group id as the host's docker group (confused? Remember that we are inside the container whose access to the host's Docker we want to set up; we executed "docker container exec -it -u root container bash" in order to get into the container). Then the user called "youruser" is added to the "docker" group.
(Almost there!) Install the Docker client inside your container: use your favorite package manager and install the Docker client. I have the same version of client and server and it works like a charm; other version combinations may also work, but mixing client and server versions is best avoided.
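As an illustration only: on an Alpine-based image the client alone can be installed from the docker-cli package (package names differ on Debian/Ubuntu and other distributions):
# inside the container (Alpine example)
apk add --no-cache docker-cli
docker version    # the client should now reach the host daemon through the mounted socket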
After following these steps, you will be able to run docker commands in the usual way; just remember that it is possible to do anything, including shooting yourself in the foot.
We have offshore developers who would like to run our server locally but for security reasons, we do not want to give them the server code. So a solution is that they run a Docker container, which is a self-contained version of our server! So no complicated setup on their side! :)
The problem is that it is always possible to access the Linux shell of the Docker instance as root, thus giving access to the source code.
How is it possible to disable root access to the Docker container? Or how can we isolate our source code from root access?
You can modify your container by creating a user (foo, for example) and giving it the right permissions. Then you can run the container with the docker run argument -u foo. If you run, for example, docker run --rm -ti -u foo myCustomImage sh, this will open the sh shell with a $ prompt instead of #. Of course, in your Dockerfile you must create the foo user first (see the sketch below).
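A minimal sketch of that user setup; foo is a placeholder name, and in the Dockerfile the useradd line would sit behind a RUN instruction (optionally followed by USER foo to make it the default user):
# run at image build time, e.g. behind RUN in the Dockerfile
useradd --create-home --shell /bin/sh foo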
If you want more restrictions, for example disabling some kernel features, the seccomp security feature has been available since Docker 1.10. Check it out:
https://docs.docker.com/engine/security/seccomp/
Using this you can disable and restrict a lot of system calls. An easy example is to deny the mkdir command. Create a JSON file like this (name it sec.json, for example):
{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "name": "mkdir",
            "action": "SCMP_ACT_ERRNO"
        }
    ]
}
Then run your container with: docker run --rm -ti --security-opt seccomp=/path/on/host/to/sec.json ubuntu:xenial sh. You can check that inside the container you are not able to run the mkdir command.
Hope this helps.