Sharing Docker daemon between WSL instances - docker

Regarding running Docker from within WSL without Docker Desktop, there is a comprehensive article here. However, when it comes to sharing the Docker daemon between WSL instances, the article only covers the very first steps. This question is asking for the whole process.
First of all, why configure it to use a socket stored in the shared /mnt/wsl directory, instead of the commonly suggested approach of exposing port 2375 from Docker? The reason I'm asking is that I found it challenging to come up with something to use as the shared /mnt/wsl directory between different WSL instances. Making use of an existing Windows drive (an NTFS share mount) will be most people's first instinct, but it won't work. I tried that, calling mknod to create a device file in the NTFS-mounted folder, and got:
mknod: /mnt/d/foobar: Operation not supported
Is it because of this, which I've seen stated elsewhere:
The issue is that Docker runs on 2375 but it's bound just to localhost in some setups (WSL2 backend / Linux containers)
Is that still true? Even if it is, it would be fine for my case, since I'm only sharing the Docker daemon between WSL instances on the same localhost.
So, this is asking for a complete solution for sharing the Docker daemon between WSL instances, one that is practical and that anyone can follow. Thanks!
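For reference, the bit the article does cover is roughly the following; the shared-docker directory name is just my own example:
# in the WSL instance that runs the daemon: put the socket on the shared tmpfs
mkdir -p /mnt/wsl/shared-docker
dockerd -H unix:///mnt/wsl/shared-docker/docker.sock
# in any other WSL instance: point the client at that socket
export DOCKER_HOST=unix:///mnt/wsl/shared-docker/docker.sock
docker ps
What I'm after is everything around this: permissions, starting the daemon automatically, and so on.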

Related

Is docker-py/dockerpy secure to use?

I've been reading about security issues with building docker images within a docker container by mounting the docker socket.
In my case, I am accessing Docker via an API, docker-py.
Now I am wondering, are there security issues with building images using docker-py on a plain ubuntu host (not in a docker container) since it also communicates on the docker socket?
I'm also confused as to why there would be security differences between running docker from the command line vs this sdk, since they both go through the socket?
Any help is appreciated.
There is no difference: if you have access to the socket, you can send a request to run a container with access matching that of the dockerd engine. That engine is typically running directly on the host as root, so you can use the API to get root access directly on the host.
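To make that concrete, anyone who can reach the socket can do something along these lines (a minimal sketch; the image is just an example):
# bind-mount the host's root filesystem and chroot into it
docker run --rm -it -v /:/host alpine chroot /host sh
# the resulting shell runs as root on the host, whoever invoked the command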
Methods to lock this down include running the dockerd daemon inside a container; however, that container is typically privileged, which itself is not secure, so you can gain root in that container and use the privileged access to gain root on the host.
The best options I've seen include running the engine rootless; an escape from the container would then only get you access to the user the daemon is running as. However, realize rootless has its drawbacks, including needing to pre-configure the host to support it, and networking and filesystem configuration being done at the user level, which has functionality and performance implications. The second good option is to run the build without a container runtime at all, but this has its own drawbacks, like not having a 1-for-1 replacement of the Dockerfile RUN syntax, so your image is built mainly from the equivalent of COPY steps plus commands run on the host outside of any container.
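If you want to try the rootless route, the setup is roughly the following, assuming the docker-ce-rootless-extras package and its prerequisites (newuidmap/newgidmap) are already installed:
# run as the unprivileged user that should own the daemon
dockerd-rootless-setuptool.sh install
# point the client at the per-user socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info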

Deploying and Securing Docker Containers and Server OS

I am running a CentOS server and will be installing the Docker Engine on top of that, where, needless to say, I will be setting up my containers. I'll initially be setting up two containers: (1) one to serve my web pages and (2) one to run my database.
My thought process was that I would install FirewallD on the CentOS host. My questions are the following:
Do I need to install some sort of firewall within the containers themselves? If so, can someone tell me at a high level how this is done and what firewall I would be installing at the container level?
Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
As you can tell, this will be my first time developing with containers, so do I need to create the containers first on the server, or do I create them on my development machine and then push them to the identified server?
I would appreciate it if I could get some guidance here as I'm tasked to do this, but not sure of the correct path.
Thanks again.
I really have not tried much as I'm not sure where to begin. Currently I have just been doing some research on my use case.
Q) Do I need to install some sort of firewall within the containers itself?
A) No, not really. Containers can only communicate via the ports your configuration specifies should be open.
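For example, only what you explicitly publish with -p is reachable from outside the Docker network:
# host port 8080 is forwarded to the container's port 80; nothing else is exposed
docker run -d -p 8080:80 nginx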
Q) Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / Containers?
A) TCP/IP port 443 if you want to access the daemon via the REST API. Otherwise, and probably more securely, leave remote access off: SSH into the machine and interact with the daemon locally.
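The ports your containers publish (say 80/443 for the web container) are what you would open in FirewallD, roughly like this:
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload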
Q) ...do I need to create the containers first on the server, or create them on my development machine and then push them to the identified server?
A) Create the containers on your development machine and push the image to a repository (Docker Hub is one, AWS ECR is another, and you can also host your own). Then access the server and pull the images from the repository onto it.
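A minimal version of that workflow, using Docker Hub and a made-up image name, looks like:
# on the development machine
docker login   # once, with your Docker Hub credentials
docker build -t myuser/mywebapp:1.0 .
docker push myuser/mywebapp:1.0
# on the CentOS server
docker pull myuser/mywebapp:1.0
docker run -d -p 443:443 myuser/mywebapp:1.0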
As for where to begin: at the beginning :D. But really, https://docs.docker.com/get-started/ has a 'getting started' guide to start you off. Linux Academy, A Cloud Guru, Lynda, Udemy, and other similar learning resources are all solid starting points.
Hope this helps you on your journey.

Isolated Docker environments via SSH

I am setting up a series of Linux command line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done of their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you login with SSH to one of the games, a fresh Docker container is created just for you. Noone else is logged in into your container, nor are there any files from other players lying around. We opted for this setup to provide each player with a clean environment to experiment and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but am unsure about how to setup a similar system - particularly spawn new Docker instances on SSH login to a server and then destroy the instance on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here. First, understand what Docker really does and how it works. Second, design the system that orchestrates the whole setup.
Let me make a brief introduction. I won't go into details, but essentially Docker is a platform that works like system virtualization and lets you isolate a process, an operating system, or a whole application without any kind of hypervisor. The container shares the kernel of the host system, and everything it contains is isolated from the host and from the rest of the containers.
So the basic principle you are looking for is a system that orchestrates containers, each running an SSH server with port 22 open. Although there are many ways you could reach this goal, one way is with this Docker sshd server image:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a foreground process to keep the container alive. By using -it you are creating an interactive session with the "bash" interpreter. This keeps the container alive and lets you start a bash terminal inside an isolated virtual Ubuntu server.
--rm: removes the container once you exit from it.
rastasheep/ubuntu-sshd: the Docker image name.
As you can see, what is missing is a system that connects your application to this Docker platform. One approach would be the Python library that drives the Docker client programmatically. As a piece of advice, I would recommend you install Docker on your computer, try to create a couple of Ubuntu servers running an SSH server, and connect to them from your host. It will help you see whether an sshd server is really necessary and, if so, what network requirements you will need in order to route all the clients into the containers. Read the official Docker networking documentation.
With the example I described, a fresh new terminal is started and there is no need to connect to the container via SSH. This way you won't need to route the traffic, identify free host ports to connect your host to the containers, or check and shut down the container once the connection has finished; otherwise the container would stay alive.
There are many ways your system could be built, and I would strongly recommend you start by creating some containers with the docker tool and getting a feel for how it works.
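As a rough sketch of the "fresh container per login" part (user and image names are just examples), you can let sshd on the host start the container itself via ForceCommand in /etc/ssh/sshd_config:
# every SSH login as the "player" user gets its own throwaway container,
# which is removed as soon as the session ends
Match User player
    ForceCommand docker run --rm -it ubuntu bash
That user needs permission to talk to the Docker daemon, which is itself a security trade-off worth thinking about.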

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially what's needed here is that something will listen on a port inside the container and like a sort of proxy route traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there was something akin to this that was fully automated while understanding Docker and, again, possible to get up and running with 1-2 commands, that'd be an acceptable solution.
I think you can run your container with the --net=host option. In this case the container will bind to the host's network and will be able to access all the ports on your local machine.
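For the MySQL example that would look roughly like this (image and credentials are placeholders):
# the container shares the host's network stack, so the MySQL server
# listening on the host is reachable as 127.0.0.1:3306
docker run --rm -it --net=host mysql:5.7 mysql -h 127.0.0.1 -P 3306 -u root -p
Note that --net=host only behaves this way when the daemon runs natively on Linux; with a VM-based setup the "host" is the VM.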

Can a docker process access programs on the host with IPC

I'm working on a clustered Tomcat system that uses MQSeries.
Today MQSeries is accessed in bindings mode, i.e. via IPC, and Tomcat and MQSeries run on the same host without any virtualization/Docker support.
I'd like to transform that into a solution where MQSeries runs on the host (or possibly in a Docker container) and the Tomcat instances run in Docker containers.
It's possible to access MQSeries in client mode (via a TCP connection), and this seems to be the right solution.
Would it still be possible to access MQSeries from the Docker container via IPC, i.e. create exceptions to the IPC namespace separation? Is anything like that planned for Docker?
Since Docker 1.5 this is possible with the --ipc=host flag, as in
docker run --ipc=host ubuntu bash
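Applied to the setup described in the question, the Tomcat containers could then be started along these lines (image name and ports are just examples):
# share the host's IPC namespace so the container can reach the queue
# manager's shared memory segments on the host
docker run -d --ipc=host -p 8080:8080 tomcat:9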
This answer suggests how IPC can be enabled with a source-code modification to Docker. As far as I (and the other answers there) know, there is no built-in feature.
Specifically, he says he commented out this line, which makes Docker create a separate IPC namespace.
Rebuilding Docker is a bit tedious because it brings in dozens of other things during the build, but if you follow the instructions it's straightforward.
