So I have set up a JupyterHub deployed with Docker, as described here.
The users would like to connect to a Samba share from within their notebooks. In order for this to work, I wanted to write a small bash script. The script asks the user for their credentials to connect to the Samba share. So here are my questions:
I have to open ports 445 and 139 of the notebook server container and direct them through the JupyterHub container to the system's ports 445 and 139. Where and how can I achieve this in the Docker-deployed JupyterHub framework that I have?
I have to grant the users SYS_ADMIN and DAC_READ_SEARCH capabilities. Suppose I don't trust the users. Do you think this is a good idea? What is the worst case scenario... please scare me. :D
Is there a safer way, like some service running in an extra container that handles the Samba share request, creates a Docker volume, and mounts it into that same user's container during the runtime of the container?
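Something like the following is what I have in mind, as a rough sketch only: the Docker host (or a helper with access to the Docker API) creates a CIFS-backed volume with the local driver and attaches it to just that user's container. The share name, credentials, and image here are placeholders, and I'm not sure it's the right approach:
# create a named volume backed by the SMB share (needs CIFS support on the Docker host)
docker volume create --driver local \
  --opt type=cifs \
  --opt device=//fileserver.example.com/teamshare \
  --opt o=addr=fileserver.example.com,username=USER,password=PASS,vers=3.0 \
  paul-teamshare
# attach it only to that user's notebook container; the mount is done by the daemon,
# so the container itself needs no extra capabilities
docker run -v paul-teamshare:/mnt/teamshare my-notebook-image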
I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some Docker containers, some regular applications). We actually need a setup where the server runs as a Docker application on a single VM, and that server will be accessed by non-Docker clients (as well as Docker clients not running on the same Docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678, which is not resolvable by the client. So far, we've addressed this by adding the server hostname to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server Docker container to the same name as the Docker host, and it has mostly worked, but I've seen cases where a Docker container running on the same Docker network as the server had issues connecting to the server.
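For concreteness, that workaround looks roughly like this (the hostnames, ports, and image name are just placeholders standing in for our real setup):
# the container reports the VM's hostname, so the advertised URLs resolve for external clients
docker run -d \
  --hostname dockerhostname \
  -p 1234:1234 -p 5678:5678 \
  server-image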
I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts approach we're already using.)
You can do port mapping with -p 8080:80.
How do you build and run your container? With a shell command, a Dockerfile, or a YAML file?
Check the mapped ports with:
docker port <container>
Then call [SERVER IP]:[PORT FROM DOCKER HOST] and it will work.
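A minimal sketch of that flow, using nginx purely as an example image and "web" as a placeholder container name:
# publish container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx
# show how the container's ports are mapped on the Docker host
docker port web
# clients then use http://<server ip>:8080/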
To work with hostnames you need DNS or a hosts file.
The hosts file solution is not a good idea; that's how the internet started back in the day. ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
# list the existing networks
docker network ls
# create a user-defined network
docker network create my-network
# or create one with a fixed subnet so you can choose addresses yourself
docker network create --subnet=172.18.0.0/16 mynet123
# run the container with a static IP on that network
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
See also: Assign static IP to Docker container.
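If you go the static-IP route, you can check the address a container actually got with something like this (the container name is a placeholder):
docker inspect -f '{{ .NetworkSettings.Networks.mynet123.IPAddress }}' mycontainer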
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to the /etc/hosts file" process. You can use configuration management like Ansible/Chef/Puppet so you only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this: you need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing the database is the hardest part of this; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which one is best.
Normally running an internal DNS server like dnsmasq or bind should get you most of the way, but if you need something like consul that's a whole other conversation. There are a lot of options, and the best thing to do is research, and audit what you actually need for your situation.
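As a rough illustration of the lazy-mode DNS option, a single dnsmasq record for the advertised hostname might look like this; the hostname, address, and drop-in file path are made up, so adjust for your distro:
# map the advertised server hostname to the VM running Docker
echo 'address=/serverhostname.example.com/192.168.123.50' | sudo tee /etc/dnsmasq.d/docker-hosts.conf
sudo systemctl restart dnsmasq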
I would like to access a Windows file share (SMB3) from a Docker container, but I do not want to compromise the security of the host machine. All the guides I have read state that I need to use either the --privileged flag or the --cap-add SYS_ADMIN capability.
Here is the command I use:
mount -t cifs \
  -o username='some_account#mydomain.internal',password='some_password' \
  //192.168.123.123/MyShare /mnt/myshare
Which results in the message:
Unable to apply new capability set.
When I apply the --cap-add SYS_ADMIN capability the mount command works fine, but I understand this exposes the host to obvious security vulnerabilities.
I have also read the suggestion in this Stack Overflow question (Mount SMB/CIFS share within a Docker container) to mount the volume locally on the server that runs Docker. This is undesirable for two reasons: first, the container is orchestrated by a Rancher Kubernetes cluster and I don't know how to achieve what is described by nPcomp using Rancher, and second, this means the volume is accessible to the Docker host. I'd prefer that only the container have access to this share, via the credentials given to it through secrets.
My question is: is there a way to mount a CIFS/SMB3 share in a Docker container (within Kubernetes) without exposing the host to privilege escalation vulnerabilities and while protecting the credentials? Many thanks.
After more research I have figured out how to do this. There is a Container Storage Interface (CSI) driver for SMB called SMB CSI Driver for Kubernetes (https://github.com/kubernetes-csi/csi-driver-smb).
After installing the CSI driver using helm (https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts) you can follow the example at https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md (Option #2 Create PV/PVC) to create a Persistent Volume (PV) and Persistent Volume Claim (PVC) which mounts the SMB3 share.
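For reference, the install and credentials steps look roughly like this; the commands and the smbcreds secret name follow the linked chart README and example, so double-check them there and replace the placeholder credentials:
# install the SMB CSI driver via helm
helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system
# the example PV references a secret holding the share credentials
kubectl create secret generic smbcreds \
  --from-literal username=USERNAME \
  --from-literal password='PASSWORD'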
Then you create your container and give it the relevant Persistent Volume Claim, specifying you want to mount it as /mnt/myshare etc.
I tested this and it gets deployed to multiple worker nodes automatically and works well, without needing the privileged flag or --cap-add SYS_ADMIN to be given to the containers.
This supports SMB3 and even authentication & encryption. To enable encryption go to your Windows Server > File and Storage Services, select the share, Properties > Settings > Encrypt Data Access.
Wireshark shows all the SMB traffic is encrypted. The only thing I don't recall is whether you have to install cifs-utils manually first; since I had already done this on all my nodes, I wasn't able to test.
Hope this helps somebody.
Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands (docker run, ...) are issued with a remote machine as the target? When controlling Docker locally, this does not concern me.
Running Docker Swarm does not require this step either, as the security between the nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps, due to the overlay network in a swarm being encrypted by default.
Basically, when my target machine will always be localhost, there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run -it -v /:/host busybox sh
# then, inside the container:
vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
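You can see who already has that level of access on a given host with a quick check:
# the socket's owner, group, and mode control who can talk to the daemon
ls -l /var/run/docker.sock
# everyone in this group is effectively root on the host
getent group docker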
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, while convenient, are actually quite insecure. A quick web search for "Docker cryptojacking" will very quickly show you the consequences.
I'm opening my Docker server to more users and I'm facing this problem: when I do docker ps as a user that is not the author of a container (e.g. paul), I see all the containers and can interact with them (stop, kill, etc.), which is not what I would like.
What could be a good way to restrict containers to their original user, so that users don't have access to all of them on the server, and when I do docker ps as paul I see just the containers run by paul and not those run by jack or jess?
All my containers are started with different users, none with root.
Anyone with access to Docker is equivalent to root.
Consider that I can run something like this:
docker run -it -v /:/host alpine sh
Now I can edit any file on your host with root privileges.
Docker is fundamentally not a multi-user tool. Either you trust everybody, or you use virtualization to give everyone their own individual Docker instance, or you front Docker with some sort of API proxy that limits the things people can do.
Because it's not a multi-user tool, Docker doesn't really keep track of the user that started a container, so there's no way to filter on that information.
I am writing a small application with Flask which is meant to interact with the Docker API in order to run containers on demand. I would like to deploy this application within a Docker container. However, I understand that it is relatively bad to mount the Docker socket, as it grants root privileges on the local host.
Is there a proper method to access the Docker API from within a container in order to avoid this caveat?
Why is mounting the Docker socket to an unprivileged container a bad idea?
In order to mount the unix socket to your Docker container, you would need to change the permissions of the Docker daemon socket. This, obviously, could give non-root users the ability to access the Docker daemon, which might be a problem if you are worried about privilege escalation attacks. (source)
Do I really need to secure the Docker socket?
This depends on your use case. If you have many users on your server and are particularly worried about a non-privileged user affecting your app, then definitely secure the socket. If this is a virtual machine that is completely dedicated to the app, the insecure route might be easier.
How do I interact with the socket insecurely?
Just change the permissions (described here) and then mount the socket into the container. It's that simple.
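That insecure variant is just a bind mount of the socket into your container (the image name here is a placeholder):
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-flask-app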
How do I interact with the socket securely?
I think there are two good ways of doing this:
Restart the Docker daemon with TLS authentication enabled. Rather than accessing the unix socket, you access the API over HTTPS using a signed TLS certificate (see the sketch after this list). More instructions on setting that up can be found here.
Use an Authorization Plugin on the unix socket as described here.
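For the first option, the daemon and client ends look roughly like this once you have generated the CA, server, and client certificates; the hostname and file names are placeholders, and the full walkthrough is in the linked docs:
# daemon: listen on TCP and require client certificates signed by your CA
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376
# client (e.g. the environment your Flask app runs in): connect over TLS instead of the unix socket
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=dockerhost.example.com:2376 ps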