Is it feasible to control Docker from inside a container? - docker

I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all run inside containers, and a special management container will manage the other containers.
My host machine should be as dumb as absolutely possible (currently I use CoreOS, with the only state being a systemd unit that starts my management container).
The management container would be used as a push target for creating new containers from the source code I send to it (over SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and handles back-ups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this I forward the Docker Unix socket into the management container using the -v option when starting it.
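For illustration, a minimal sketch of how such a management container might be started (the image name, the SSH port, and the docker CLI being present in the image are all assumptions on my part):

# Mount the host's Docker socket so that docker commands issued inside the
# management container are executed by the host's daemon (managing "siblings").
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 2222:22 \
  example/management-image

# A docker CLI shipped inside that image now controls the host's containers:
docker exec manager docker ps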
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.

This is totally OK, and you're not the only one to do it :-)
Another use case is to have the management container handle authentication for the Docker REST API. It would accept connections on an EXPOSEd TCP port, itself published with -p, and proxy requests to the UNIX socket.
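For illustration, the proxy part of that idea can be sketched with socat (the alpine/socat image is just one convenient way to run it; real authentication, e.g. TLS client certificates, would still have to be added in front):

# Publish TCP port 2375 and forward every connection to the host's Docker socket.
# WARNING: as written, this exposes the full Docker API to anyone who can reach the port.
docker run -d --name docker-proxy \
  -p 2375:2375 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  alpine/socat \
  tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock

# Test from another machine:
docker -H tcp://<host-ip>:2375 ps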

As this question is still relevant today, I want to answer in a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container, which allows the execution of arbitrary commands there; and since the socket gives full control of the Docker daemon, an intruder who controls this container controls all other Docker containers as well.
If you expose the socket on a TCP port as suggested by jpetazzo, you have the same problem, only worse, because now an attacker doesn't even have to compromise the container, just the network. If you filter the connections (as suggested in his comment), the first problem remains.
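To make the first point concrete: anyone who can reach that socket can trivially get a root shell on the host's filesystem, roughly like this (a sketch; the alpine image is just an example):

# From inside a container that has /var/run/docker.sock mounted
# (or from anywhere that can reach an exposed TCP port), an attacker can run:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
# -> a root shell over the host's entire filesystem.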
TLDR;
You could do this and it will work, but then you have to think about security for a bit.

Related

How to forward host traffic to more than one container?

I have a Windows machine. I am running Ubuntu on top of it using VirtualBox. From Windows, I am sending certain information to Ubuntu over UDP on a specific port. I am running multiple Docker containers in Ubuntu. I want to forward this data from Ubuntu to all the containers. Could someone please suggest a method to achieve this?
I am answering my own question.
I have written a script in Python which listens on the specified port and rebroadcasts the data over the Docker network. Every container created on that network receives it.
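Roughly the same relay can also be sketched with socat instead of a custom script (the port 5005 and the broadcast address 172.18.255.255 are placeholders for whatever your Docker network actually uses):

# Listen for UDP datagrams on port 5005 and rebroadcast each one
# on the Docker network's broadcast address.
socat -u UDP4-RECVFROM:5005,fork \
      UDP4-DATAGRAM:172.18.255.255:5005,broadcast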
Despite your own answer, you could use nginx to achieve this behavior; there is no need to rewrite what is already implemented. But since your script works, I guess you will stick with your solution. Consider this answer mainly for future readers, therefore.

Access to internal infrastructure from Kubernetes

If I run Docker (Docker Desktop 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several things, including switching the NAT from DockerNAT to the Default Switch. That doesn't take effect without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also does not seem to work.
Let's start from the basics, from the official documentation:
Please make sure you meet all the prerequisites and that all the other instructions were followed.
You can also use this guide; it has more detailed info pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is the one of the docker-engine's host and not the one on which the client is running.
Try to add tmpnginx in docker-compose.
Try to delete the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the dir and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
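A rough sketch of that last step in PowerShell, assuming a default Docker Desktop install:

# Quit Docker Desktop first, then remove the pki directory:
Remove-Item -Recurse -Force "C:\ProgramData\DockerDesktop\pki"
# Start Docker Desktop again; the directory is recreated on startup.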
Please let me know if that helped.

Multi-host Docker network with Swarm-mode and without swarm

I am migrating a legacy application deployed on two physical servers [web-app (node1) and DB (node2)].
The following blog post fulfilled my requirement, but I still have some questions:
https://codeblog.dotsandbrackets.com/multi-host-docker-network-without-swarm/#comment-2833
1- For the mentioned scenario, web-app (node1) and DB (node2), we can use the port publishing options and the web app will use that port, so why create an overlay network?
2- By using swarm mode with replicas=1 we can achieve the same, so what advantage do we get by creating an overlay network without swarm mode?
3- If the node on which Consul is installed goes down, our whole application stops working. (Correct me if my understanding is wrong.)
4- In swarm mode, if the manager node (which also hosts the web app) goes down, my understanding is that Swarm will launch both containers on an available host? Please correct me if that is not right.
That article is describing an outdated mode of operation for 'Swarm'. What's described is 'Classic Swarm', which needed an external KV store (like Consul), but Docker now primarily uses 'Swarm mode' (an orchestration capability built into the engine itself). To answer what I think your questions are:
I think you're asking: if we can expose a port for a service on a host, why do we need an overlay network? If so, consider what happens if the host goes down and the container gets re-scheduled to another node. The overlay network takes care of that by keeping track of where containers are and routing traffic appropriately (see the sketch after these points).
Not sure what you mean by this.
If Consul is a key piece of discovery infrastructure then yes, it would be a single point of failure, so you'd want to run it HA. This is one of the reasons the dependency on an external KV store was removed in 'Swarm mode'.
Not sure what you mean by this, but maybe about rebalancing? If so then yes, if a host (with containers) goes down, those containers will be re-scheduled on another node.
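To make the answers to questions 1 and 2 concrete, the Swarm-mode equivalent of the two-node setup is roughly this (image names, the join token and IPs are placeholders):

# On node1 (becomes a manager):
docker swarm init

# On node2, join using the token printed by the command above:
docker swarm join --token <worker-token> <node1-ip>:2377

# Back on the manager: one overlay network, two services (replicas default to 1).
docker network create -d overlay app-net
docker service create --name db  --network app-net example/db-image
docker service create --name web --network app-net -p 80:80 example/web-image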

Failing to see how the ambassador pattern enhances modularity / simplicity of container architecture in Docker

I fail to see how implementing the ambassador pattern would help us simplify / modularize the design of our container architecture.
Let's say that I have a database container db on host A which is used by a program db-client sitting on host B, and they are connected via ambassador containers db-ambassador and db-foreign-ambassador over a network:
[host A (db) --> (db-ambassador)] <- ... -> [host B (db-forgn-ambsdr) --> (db-client)]
Connections between containers on the same machine, e.g. db to db-ambassador and db-foreign-ambassador to db-client, are made via Docker's --link parameter, while db-ambassador and db-foreign-ambassador talk over the network.
But --link is just a fancy way of inserting IP addresses, ports and other info from one container into another. When a container fails, the container linked to it does not get notified, nor will it know the new IP address of the crashed container when it restarts. In short, if a container that is linked to another dies, the link dies with it.
To take my example, let's say that db crashes and restarts, and is thus assigned a different IP. db-ambassador would have to be restarted too, in order to update the link between them... except you shouldn't: if db-ambassador is restarted, its IP would change as well, and db-foreign-ambassador wouldn't know where to reach it at the new address.
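For concreteness, the wiring above would look roughly like this (image names and host A's address are placeholders; svendowideit/ambassador is the ambassador image from the linked docs):

# host A: the real database and its ambassador, published on port 5432
docker run -d --name db example/db-image
docker run -d --name db-ambassador --link db:db -p 5432:5432 svendowideit/ambassador

# host B: a "foreign" ambassador pointing at host A, and the client linked to it
docker run -d --name db-forgn-ambsdr --expose 5432 \
  -e DB_PORT_5432_TCP=tcp://hostA:5432 svendowideit/ambassador
docker run -d --name db-client --link db-forgn-ambsdr:db example/db-client-image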
Quoting an article in Docker's docs about the ambassador pattern,
When you need to rewire your consumer to talk to a different Redis
server, you can just restart the redis-ambassador container that the
consumer is connected to.
This pattern also allows you to transparently move the Redis server to
a different docker host from the consumer.
it seems like this is exactly the problem it is trying to solve. Which, as far as my understanding goes, it totally doesn't, not if you consider that --link is only useful as long as the linked container doesn't crash. The option to restart a crashed container on its previous IP would have been a good workaround if it were supported, at least for a small/medium-sized architecture.
Am I missing something obvious?
Jérôme had some good slides (11-33) on how ambassadors are better than other means of service discovery (DNS, key-value stores, bind-mounted config files, etc.) in his slide deck "Shipping Applications to Production in Containers with Docker". He also has some suggestions for how to solve the problem I think you are describing; in particular, Docker Grand Ambassador looks promising.

Linking containers in Docker

Docker allows you to link containers by name.
I have two questions on this:
Suppose A (client) is linked to B (service), and B's port is exposed dynamically (i.e. the actual host port is determined by Docker, not given by the user). What happens if B goes down and is restarted?
Does Docker update the environment variable on A?
Does Docker assign the very same port again to B?
Is A's link to B broken?
…?
Besides that, it's quite clear that this works fine if both containers run on the same host machine. Does linking containers also work across machine boundaries?
Have you looked into the ambassador pattern?
It's ideal for this scenario, where you may want the app server linked to the DB server, but if you take the DB server down, the app server would otherwise need to be restarted as well.
http://docs.docker.io/en/latest/use/ambassador_pattern_linking/
I would say: try ;).
At the moment, Docker has no control whatsoever over the process once it is started, as it execve(3)s without forking. It is not possible to update the env; that's why links need to be set up before the container runs and can't be edited afterwards (the sketch below shows what a link actually injects).
Docker will try to reassign the same port to B, but there is no guarantee, as another container could be using it.
What do you mean by 'broken'? If you have disabled networking between unlinked containers, it should still work when you stop/start a container.
No, you can't link containers across hosts yet.
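As a sketch of what a link actually injects (container and image names here are illustrative):

# B exposes a port; A gets static environment variables describing it at start time.
docker run -d --name b --expose 8080 example/service-image
docker run --rm --link b:svc busybox env
# Among the output you would see something like:
#   SVC_PORT=tcp://172.17.0.5:8080
#   SVC_PORT_8080_TCP_ADDR=172.17.0.5
#   SVC_PORT_8080_TCP_PORT=8080
# These values are baked in when A starts and are never refreshed if B restarts.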
