I am setting up a series of Linux command line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done about their infrastructure, they set things up as follows:
All ssh-based games on OverTheWire run in Docker containers. When you
login with SSH to one of the games, a fresh Docker container is
created just for you. No one else is logged into your container, nor
are there any files from other players lying around. We opted for this
setup to provide each player with a clean environment to experiment
and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but am unsure how to set up a similar system - particularly how to spawn a new Docker container on SSH login to a server and then destroy it on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here: first, understand what Docker really does and how it works; second, build the system that orchestrates the whole thing.
Let me give a short introduction. Without going into detail, Docker is a platform that provides a form of OS-level virtualization: it lets you isolate a process, an operating system userland or a whole application without any kind of hypervisor. The container shares the kernel of the host system, and everything it contains is isolated from the host and from the other containers.
So the basic building block you are looking for is a system that orchestrates containers running an SSH server with port 22 open. Although there are many ways you could reach this goal, one of them is to use a Docker image that ships an sshd server, for example:
docker run -itd --rm rastasheep/ubuntu-sshd bash
-itd: Docker needs a foreground process to keep the container alive. With -it you create an interactive session with the bash interpreter, which keeps the container running and lets you work in a bash terminal inside an isolated Ubuntu environment; -d starts the container detached, in the background.
--rm: removes the container once you exit it.
rastasheep/ubuntu-sshd: the name of the Docker image to run.
As you can see, what is still missing is the piece that connects your application to the Docker platform. One approach is the Python library that drives the Docker client programmatically (the Docker SDK for Python). As a piece of advice, I would recommend installing Docker on your computer and trying to create a couple of Ubuntu containers running an SSH server, then connecting to them from your host. It will help you see whether you really need an sshd server at all and, if so, what network setup is required to route all the clients into their containers. Read the official Docker networking documentation.
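For example, a minimal sketch of that experiment could look like this (the host port, container name and login credentials are choices of mine, not part of the image or the answer above):
# Run the sshd image (its default command starts sshd) and publish container
# port 22 on host port 2222.
docker run -d --rm --name player1 -p 2222:22 rastasheep/ubuntu-sshd
# Connect from the host; check the image's documentation for its default credentials.
ssh root@localhost -p 2222
# Stop the container when done; --rm removes it automatically.
docker stop player1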
With the first docker run example (the one that starts bash), a fresh terminal is started directly, so there is no need to connect to the container via SSH. Done this way, you don't have to route traffic, find free host ports to map to the containers, or check for and shut down containers once the connection has finished - otherwise the container would stay alive.
There are many ways your system could be built, and I would strongly recommend starting by creating some containers with the docker tool and getting a feel for how it works.
Related
I am new to the topic of containers and would appreciate knowing whether this forum is the right place to ask this question.
I am learning Docker and containers, and I now have some skill using the docker commands and dealing with containers. I understand that Docker has two main parts: the Docker client (docker.exe) and the Docker server (dockerd.exe). In development, both are installed on my local machine (I installed them manually on Windows Server 2016), following Nigel Poulton's tutorial here: https://app.pluralsight.com/course-player?clipId=f1f27565-e2bf-4e58-96f3-bc2c3b160ec9.
Now when it comes to real production use, how would I configure my Docker client to communicate with a remote Docker server? I tried to do some research on the internet but honestly could not find a simple answer to this question. I installed Docker Desktop on my Windows 10 machine and noticed that it created a Hyper-V machine, which appears to be a Linux machine; my understanding is that this machine runs the Docker server my Docker client interacts with, but I do not understand how this interaction is done.
I would appreciate if I get some guidance or clear answer to my inquiries.
In production environments you never have a remote Docker daemon. Generally you interact with Docker either through a dedicated orchestrator (Kubernetes, Docker Swarm, Nomad, AWS ECS), or through a general-purpose system automation tool (Chef, Ansible, Salt Stack), or if you must by directly ssh'ing to the system and running docker commands there.
Remote access to the Docker daemon is something of a security disaster. If you can access the Docker daemon at all, you can edit any file on the host system as root, and pretty trivially take over the whole thing. (Google "Docker cryptojacking" for some real-world examples.) In principle you can secure it with mutual TLS, but this is a tricky setup.
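If you do decide you need it, the rough shape of the mutual-TLS setup looks like this (the certificate file names are placeholders; generating and distributing the certificates is the tricky part):
# On the host: make the daemon listen on TCP and require client certificates.
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376
# On the client: present a certificate signed by the same CA.
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://your-host:2376 info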
The other important best practice is that Docker images should be self-contained. Don't try to deploy a Docker image to production, and also separately copy your application code. The same Ansible setup that can deploy a Docker container can also install Node directly on the target system, avoiding a layer; it's tricky to copy application code into a Kubernetes volume, especially when Kubernetes pods can restart outside your direct control. Deploy (and test!) your images with all of the code COPYd in a Dockerfile, minimizing the use of bind mounts.
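In practice that means the deploy step is just building and running an image that already contains the code; something like this sketch (the image name is a placeholder):
# Build an image with the application code COPYd in by the Dockerfile...
docker build -t registry.example.com/myapp:1.0 .
# ...and run it with no bind mounts pointing at source code on the host.
docker run -d --name myapp registry.example.com/myapp:1.0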
I am running a CENTOS Server and will be installing the Docker Engine on top of that where needless to say, I will be setting up my containers. I'll initially be setting up two containers: (1) serve my web pages (2) run my database.
My thought process was that I would install FirewallD on the CentOS. My questions are the following:
Do I need to install some sort of firewall within the containers themselves? If so, can someone tell me at a high level how this is done and what firewall I would be installing at the container level?
Do I need to open some ports within FirewallD running on CENTOS to access the Docker Engine / Containers?
As you can tell, this will be my first time developing with containers, so do I need to create the containers first on the server, or do I build them on my development machine and then push them to the server?
I would appreciate it if I could get some guidance here as I'm tasked to do this, but not sure of the correct path.
Thanks again.
I really have not tried much as I'm not sure where to begin. Currently I have just been doing some research on my use case.
Q) Do I need to install some sort of firewall within the containers themselves?
A) No, not really. Traffic from outside the host only reaches a container through the ports you explicitly publish in its configuration, so a separate firewall inside each container is rarely needed.
Q) Do I need to open some ports within FirewallD running on CENTOS to access the Docker Engine / Containers?
A) Only if you want to reach the Docker daemon's REST API remotely, which by default listens on TCP port 2376 (TLS) or 2375 (unencrypted) when enabled. Otherwise, and probably more secure, leave remote access off: SSH into the machine and interact with the daemon locally. You will, however, need to open whatever ports your containers publish (e.g. 80/443 for the web container).
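As a sketch (the image name and port are assumptions, and note that Docker also programs iptables rules of its own for published ports, so test the combination on your host):
# Publish the web container's port 80 on the host...
docker run -d --name web -p 80:80 nginx
# ...and allow HTTP through firewalld on the CentOS host.
firewall-cmd --permanent --add-service=http
firewall-cmd --reload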
Q) ...do I need to create the containers first on the server, or build them on my development machine and push them?
A) Build the images on your development machine, push them to a registry (Docker Hub is one, AWS ECR is another; you can also host your own), then access the server and pull the images from the registry onto it.
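A rough sketch of that workflow (repository, image and tag names are placeholders, and pushing assumes you are logged in to the registry):
# On your development machine: build, tag and push the image.
docker build -t myaccount/webapp:1.0 .
docker push myaccount/webapp:1.0
# On the CentOS server: pull the image and start the container.
docker pull myaccount/webapp:1.0
docker run -d --name web -p 80:80 myaccount/webapp:1.0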
As for where to begin: at the beginning :D. But really, https://docs.docker.com/get-started/ has a getting-started guide to start you off. Linux Academy, A Cloud Guru, Lynda, Udemy, and other similar learning resources are all solid starting points.
Hope this helps you on your journey.
Here's my scenario.
I have 2 Docker containers:
C1: a container with Ruby (but it could be anything else) that prepares data files on which a calculation must be performed in the Julia language
C2: a container with Julia (or R, or Octave...), used to perform the calculation, so as to avoid installing Julia on the same system or container that runs the Ruby code
From the host, obviously, I have no problem doing the processing.
Usually, when two containers are linked (or belong to the same network) they communicate with each other over the network, with one of them exposing some port. In this case the Julia container does not expose any port.
Can I run a command on C2 from C1 similar to what is done between host and C2?
If so, how?
Thanks!
Technically yes, but that's probably not what you want to do.
The Docker CLI is just an interface to the Docker service, which listens at /var/run/docker.sock on the host. Anything that can be done via the CLI can be done by directly communicating with this server. You can mount this socket into a running container (C1) as a volume to allow that container to speak to its host's Docker service.
Docker has a few permissions that need to be set to allow this; older versions allow containers to run in "privileged" mode, in which case they're allowed to (amongst other things) speak to /var/run/docker.sock with the authority of the host. I believe newer versions of Docker split this permission system up a bit more, but you'd have to look into this. Making this work in swarm mode might be a little different as well.
Using this API at a code level, without installing the full Docker CLI within the container, is certainly possible (using a library or coding up your own interaction). A working example of doing this is JupyterHub + DockerSpawner, which has one privileged Hub server that instantiates new Notebook containers for each logged-in user.
I just saw that you explicitly state that the Julia container exposes no port/interface. Could you wrap that code in a larger container that gives it a server interface while managing the serverless Julia program as a "local" process within the same container?
I needed to solve the same problem. In my case, it all started when I needed to run some scripts located in another container via cron. I tried the following scenarios with no luck:
Forgetting about the two-container scenario and placing all the logic in one container, so inter-container execution is no longer needed: this turned out to be a bad idea, since the whole Docker concept is to execute a single task in each container. In any case, creating a Dockerfile to build an image with both my main service (PHP in my case) and a cron daemon proved to be quite messy.
Communicate between containers via SSH: I then decided to try building an image that would take care of running the Cron daemon, that would be the "docker" approach to solve my problem, but the bad idea was to execute the commands from each cronjob by opening an SSH connection to the other container (in your case, C1 connecting via SSH to C2). It turns out it's quite clumsy to implement an inter-container SSH login, and I kept running into problems with permissions, passwordless logins and port routing. It worked at the end, but I'm sure this would add some potential security issues, and I didn't feel it was a clean solution.
Implement some sort of API that I could call via HTTP requests from one container to another, using something like curl or wget: this felt like a great solution, but it ultimately meant adding a secondary service to my container (an Nginx to handle HTTP connections), and dealing with HTTP requests and timeouts just to execute a shell script felt like too much of a hassle.
Finally, my solution was to run "docker exec" from within the container. The idea, as described by scnerd, is to make sure the Docker client inside the container talks to the Docker service on your host:
To do so, you must install docker into the container you want to execute your commands from (in your case, C1), by adding a line like this to your Dockerfile (for Debian):
RUN apt-get update && apt-get -y install docker.io
To let the docker client inside your container interact with the docker service on your host, you need to add /var/run/docker.sock as a volume to your container (C1). With Docker compose this is done by adding this to your docker service "volumes" section:
- /var/run/docker.sock:/var/run/docker.sock
Now when you build and run your Docker image, you'll be able to execute "docker exec" from within the container, with a command like this, and you'll be talking to the Docker service on the host:
docker exec -u root C2 /path/your_shell_script
This worked well for me. Since, in my case, I wanted the Cron container to launch scripts in other containers, it was as simple as adding "docker exec" commands to the crontab.
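For instance, a crontab entry inside the cron container along these lines (the schedule is arbitrary; the container name and script path are the placeholders from the example above):
# Every five minutes, run the script inside C2 via the host's Docker daemon.
*/5 * * * * docker exec C2 /path/your_shell_script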
This solution, also presented by scnerd, might not be optimal, and I agree with his comments about your structure: considering your specific needs, this might not be what you need, but it should work.
I would love to hear any comments from someone with more experience with Docker than me!
I'm working on a clustered Tomcat system that uses MQSeries.
Today MQSeries is accessed in bindings mode, i.e. via IPC; Tomcat and MQSeries run on the same host without any virtualization/Docker support.
I'd like to transform that into a solution where MQSeries runs on the host (or possibly in a Docker container) and the Tomcat instances run in Docker containers.
It's possible to access MQSeries in client mode (via a TCP connection), and this seems to be the right solution.
Would it still be possible to access MQSeries from the Docker container via IPC, i.e. create exceptions to the IPC namespace separation? Is anything like that planned for Docker?
Since Docker 1.5 this is possible with the --ipc=host flag, as in:
docker run --ipc=host ubuntu bash
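A quick way to check that the IPC namespace really is shared (just a sketch; what ipcs shows obviously depends on what is running on your host):
# List System V message queues on the host...
ipcs -q
# ...and from a container started with --ipc=host; the output should match.
docker run --rm --ipc=host ubuntu ipcs -q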
This answer suggests how IPC can be enabled with a source-code modification to Docker. As far as I (and the other answers there) know, there is no built-in feature.
Specifically, he says he commented out the line that makes Docker create a separate IPC namespace.
Rebuilding Docker is a bit tedious because it brings in dozens of other things during the build, but if you follow the instructions it's straightforward.
After reading the introduction to phusion/baseimage, I feel like creating containers from the Ubuntu image, or any other official distro image, and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facilities other than the messages written by mysqld to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on this issue. Basically, the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro-servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new container rather than try to set up init and the like inside the containers. So if you are looking for the canonical best practice, the answer is: yes, no basic Linux services. It also makes sense when you think of many containers running on a single node: do you really want them all to run their own copies of these services?
That being said, the state of logging in the Docker service is famously broken. Even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and have a logrotate daemon, etc., running in the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
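As a sketch of the log-volume part (the paths and image name are placeholders):
# Write the application's logs to a host directory so the host's logrotate can manage them.
docker run -d --name app -v /var/log/myapp:/var/log/myapp myapp:latest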
Edit:
With Docker 1.3 and the exec command it's no longer necessary to "leave an interactive shell open."
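For example (the container name is a placeholder):
# Open an interactive shell in an already-running container on demand.
docker exec -it app bash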
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of, or requires, multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is startup time: when creating a Docker provisioning system, it's essential to keep the time it takes a container to come up to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, then I should go for it. But that's not always possible.
To answer your question directly. No, it's not wrong to run a single process in docker.
HTH