I have created a docker container from the ubuntu image. Other users can attach to this container with docker exec -it CONTAINER_ID bash. Is there a way to require a username and password for this command? I don't want my container to be accessed by other users. When users run docker exec to attach to my container, I want it to prompt for a username and password, and only let them in after they enter correct credentials. Just like what ssh does.
Access to the docker socket (which is used by the docker command line) should be treated as sysadmin-level access to the host and all containers being run on that host.
You can configure the docker daemon to listen on a port with TLS credentials and validation of client certificates. However, once a user has access to any docker API calls, they would have access to them all, and without any login prompts.
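For reference, here is a sketch of what that TLS setup looks like on each side. The hostname and certificate paths are assumptions; you would generate the CA and certificates yourself first:

```shell
# Daemon side (sketch): require clients to present a certificate
# signed by your CA. Certificate paths are assumptions.
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376

# Client side: point the CLI at the TLS endpoint and present a client
# certificate. dockerhost.example.com is a placeholder.
export DOCKER_HOST=tcp://dockerhost.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker   # expects ca.pem, cert.pem, key.pem here
docker ps
```

Note that this authenticates the client, but as described above, any client that passes the certificate check gets the full API with no further login prompt.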
You could try a third-party plugin from Twistlock that implements Docker's authorization (authz) plugin interface. This will let you limit access to the exec call to specific TLS client certificates. However, it will not limit which containers they can exec into.
Probably the closest to what you want comes with Docker's EE offering, specifically UCP. It's a commercial tool, but it provides a different API entrypoint that performs its own authentication, including the option of a user/password for web-based requests, and RBAC security that lets you limit access to calls like exec to specific users and specific collections of containers.
If you wanted to do this from the container side, I'm afraid that won't work. Exec is run as a Linux exec syscall directly inside the container namespace, so there's nothing inside the container you could do to prevent that sort of access. The best option is to remove any commands from your image that you don't want anyone to be able to run in the container.
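To illustrate that last point: images built from scratch or from a distroless base ship no shell at all, so there is nothing for docker exec to invoke. The container name below is hypothetical:

```shell
# Against an image with no /bin/sh (e.g. FROM scratch), exec has
# nothing to run. "mycontainer" is a hypothetical container name.
docker exec -it mycontainer sh
# typically fails with an error like:
#   OCI runtime exec failed: exec: "sh": executable file not found in $PATH
```

This doesn't stop someone with socket access from doing other damage, but it does remove the easy interactive-shell path.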
I'm familiar with how to create, get, delete, etc secrets in a Vault server running on dev mode (by this I mean all the command line prompts and commands that are used from creating/starting the server, setting the vault address and root token, and then actually working with secrets).
How exactly would I do this with a Vault container? Using the same steps for a Vault server doesn't work, so I'm guessing that I'm missing some step along the way that's necessary for containers but not servers.
Do I have to create a shell script or use docker-compose, or is there any way I could create/start a Vault container and save secrets in it all with terminal commands?
What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should almost never run the two separately. The only exception is very heavily managed docker-machine setups, where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox, or to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing, and easily take over the whole system. There are probably more direct ways to root the host. The only requirement to run this command is access to the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The common system-automation tools (Ansible, Chef, SaltStack) can all invoke Docker as required, and using one of them is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system) be aware that it's a massive security vulnerability, equivalent to disabling your root password.
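So you can recognize the pattern when you see it, this is roughly what that dangerous setup (and its consequence) looks like; the hostname is a placeholder:

```shell
# DO NOT do this: publishes the unauthenticated Docker API on the network.
dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

# Anyone who can reach port 2375 now has root-equivalent control,
# with no credentials of any kind:
curl http://dockerhost.example.com:2375/containers/json
```

If a tutorial's `docker run` examples suddenly work without sudo from another machine, this is almost certainly why.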
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)
Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands are issued (docker run,...) with a remote machine as the target? When controlling docker locally this does not concern me.
Running Docker Swarm does not require this step either, as the security between the nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps, since the overlay network in a swarm is encrypted by default.
Basically, when my target machine will always be localhost there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run -v/:/host busybox sh
# vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
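On a typical Linux install you can see exactly who holds that power; the group name and socket path below are the usual defaults, but may differ on your distribution:

```shell
# Who owns the socket, and with what permissions?
ls -l /var/run/docker.sock
# typical output: srw-rw---- 1 root docker 0 ... /var/run/docker.sock

# Who is in the docker group? Every member is effectively root on this host.
getent group docker
```

Auditing that group membership is the local-daemon equivalent of auditing your sudoers file.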
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, however convenient, are actually quite insecure. A quick web search for "Docker cryptojacking" will very quickly show you the consequences.
Problem statement:
On a standalone on-prem server, we are using nvidia-docker. Whenever users create a new environment, they can potentially open up any port to all traffic from the outside world (bypassing our client firewall) if they don't specify localhost bindings.
So, how can we protect the server from such tunneling requests and instead keep ports open only to localhost? Any thoughts/ideas?
You can't give untrusted users the direct ability to run docker commands. For instance, anyone who can run a Docker command can run
docker run --rm -v /:/host busybox cat /host/etc/shadow
and then run an offline password cracker to get your host's root password. Being able to bypass the firewall is probably the least of your concerns.
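That said, on the narrower port question: published ports can be bound to the loopback interface explicitly, and the daemon's default bind address can be changed. The image and port numbers below are examples:

```shell
# Publish a container port on the loopback interface only, so it is
# unreachable from other machines (and not exposed past the firewall):
docker run -d -p 127.0.0.1:8080:80 nginx

# Or set a daemon-wide default bind address in /etc/docker/daemon.json:
#   { "ip": "127.0.0.1" }
# so that -p 8080:80 with no address also binds to localhost only.
```

But none of this helps if users can run arbitrary docker commands, since they can simply override these settings.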
I want to add Authentication and Authorization for the docker daemon for more security.
Use case:
Any command issued to the docker daemon should be accepted only from a valid user, and only if that user has the rights to execute that command. I want to use LDAP for user authentication.
Q: Does docker have integration with LDAP for the above use case? If not, is there any workaround?
I'd like help on how to proceed; some pointers to get started would be appreciated. Please advise. Thanks!
One way to protect the docker daemon is to give access to the socket file only to users who should have it. Docker uses a group called docker, so adding a user to this group grants access to all docker commands: gpasswd -a user docker. This, however, does not restrict which commands a user can run.
If you'd prefer LDAP authentication and restrictions on commands, take a look at the Docker remote API, which the docker client itself uses internally. You can use it to control the docker daemon and layer your own authentication and command restrictions on top of it.
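As a quick way to see the API the CLI is talking to (the API version in the path is an example; omit it to use the daemon's default):

```shell
# Query the Docker remote API directly over the local Unix socket.
# This is the same endpoint `docker ps` uses under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json
```

A reverse proxy placed in front of this API could authenticate users against LDAP and allow or deny specific endpoints (e.g. permit GET /containers/json but reject POST /containers/create), which is one workaround for the use case above — though, as noted in the other answers here, any path that ultimately grants full API access grants root-equivalent access to the host.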