IP (not PORT!) forwarding inside a Docker container

We have several containers which are connected to the same custom network with IPv6 support. Now we need one of those containers to act as the default gateway (in short, the default gateway of our custom network should be set to the IP of this one container).
This seems to be impossible (or at least we did not find a way to do it). There is another option, however.
Since we need to forward traffic for a specific IP address via another one, we can simply use:
ip -6 route add xxxx:yy:: via aaaa:bb::c
in each container. That would be a rather inelegant but satisfying solution. Unfortunately, it does not work like that. I suppose we would need to restart the container network, but I could not find a way to do this without restarting the container, which would of course nullify the above command.
Any idea on how to do this properly? I apologise for my possibly direct tone in this message, I haven't slept much in the last few days and I am thinking of changing careers. How does rocket scientist sound? Gotta be easier than working with Docker...

As it turns out, no network restart was necessary; I had an error in the command syntax. Since I wanted to route the whole prefix via that specific address, I had to write it like this:
ip -6 route add xxxx:yy::/64 via aaaa:bb::c
You can easily put this in a bash script and execute it every time a container starts:
apt-get update && apt-get install iproute2 -y
ip -6 route add xxxx:yy::/64 via aaaa:bb::c
Quick side note: for the ip command to work, the container has to run in "privileged" mode.
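For reference, a rough sketch of how this could be wired into the container start (the script name, image and command below are placeholders; only the route itself is the one from above):

#!/bin/sh
# setup-route.sh - hypothetical entrypoint wrapper baked into the image
set -e
ip -6 route add xxxx:yy::/64 via aaaa:bb::c   # same route as above
exec "$@"                                     # then hand over to the container's real command

and start the container with something like:

docker run --privileged --entrypoint /setup-route.sh my-image my-original-command

Depending on your setup, --cap-add=NET_ADMIN may already be enough for the ip command instead of full --privileged.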

Related

Docker - Accessing host from private network

My team and I have a lot of small services that work with each other to get the job done. We came up with some internal tooling and some solutions that work for us, however we are always trying to improve.
We have created a Docker setup where we have a private network in place, the services can call each other by name, and that works.
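Very roughly, the setup looks like this when written out as plain docker commands (the network name, image names and service-a's port are made up; 6002 and 6003 are the ports mentioned below):

docker network create our-net                                              # private network, name made up
docker run -d --name service-a --network our-net -p 6001:6001 service-a    # port assumed
docker run -d --name service-b --network our-net -p 6002:6002 service-b
docker run -d --name service-c --network our-net -p 6003:6003 service-c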
In this situation, if service-a called service-b via http://localhost:6002, it would not work.
We have a scenario where we want to work on one module and run the rest in Docker. So we could run service-a directly out of our IDE, for example, and leave service-b and service-c in Docker. The last service (service-c) we would then reference as localhost:6003 from the host network.
This works fine!
However, things can get "out of hand" the further we go down the line. In the example we only have 3 services, but our longest chain is about 6 services. Supposing I want to work on the one before the last, I have to start all the services that come before it in order to simulate a complete chain. (In most cases we work through APIs, which renders the point moot, but bear with me.)
The QUESTION
Is there a way to allow a situation such as this one?
That is, to run part of the services as Docker containers and maybe one service from my IDE, for example?
Or would it be necessary to put all of them on the host network so that they can all call each other through localhost?
I appreciate any help!
You can use --add-host service-b:$(hostname -i) to push the host IP address into a container, so that the container doesn't need to know which service is not running in Docker.
You could set up your docker-compose file to accept an argument for WHICH service you want to run on the host:
extra_hosts:
  - ${HOST_SVC:-fakehost}:${HOST_IP}
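For example, you would then bring the stack up with something along these lines (the service names are the ones from the question; how you determine HOST_IP, here via hostname -i, depends on your setup):

HOST_SVC=service-a HOST_IP=$(hostname -i) docker-compose up service-b service-c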
But honestly, the easiest solution is probably just to set up the service you want to work on with its source code mounted in, run them all in Docker, and restart that container as needed.

How to forward host traffic to more than one container?

I have a Windows machine and am running Ubuntu in VirtualBox on top of it. From Windows, I am sending certain information to Ubuntu over UDP on a specific port. I am running multiple Docker containers in Ubuntu and I want to forward this data from Ubuntu to all of the containers. Could someone please suggest a method to achieve this?
I am answering my own question.
I have written a script in Python which listens on the specified port and broadcasts the data over the Docker network. Every container created on that network receives it.
Despite your own answer, you could use nginx to achieve such behaviour; there is no need to rewrite what is already implemented. But since your script works, I guess you will stick with your solution, so consider this answer mainly for future readers.
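For those future readers, a relay like that does not necessarily need a custom script or nginx; a single socat process on the Ubuntu host can do it too. A rough sketch, assuming the containers sit on a bridge network whose broadcast address is 172.18.255.255 and the UDP port is 5005 (both made up), and that the containers listen for broadcast datagrams:

# receive each datagram on port 5005 and re-send it to the broadcast
# address of the Docker bridge network
socat -u UDP4-RECVFROM:5005,fork UDP4-DATAGRAM:172.18.255.255:5005,broadcast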

Access to internal infrastructure from Kubernetes

If I run Docker (Docker Desktop 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several ways, including switching the NAT from DockerNAT to Default Switch. That one doesn't take effect without a restart, and a restart changes it back to DockerNAT, so no luck here. This option also seems not to work.
Let's start from the basics, from the official documentation:
Please make sure you meet all the prerequisites and that all other instructions have been followed.
You can also use this guide; it has more details pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is the one of the Docker engine's host and not the one on which the client is running.
Try to add tmpnginx in docker-compose.
Try to delete the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the dir and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
Please let me know if that helped.

Linking containers in Docker

Docker allows you to link containers by name.
I have two questions on this:
Suppose A (client) is linked to B (service), and B's port is exposed dynamically (i.e. the actual host port is determined by Docker, not given by the user). What happens if B goes down and is restarted?
Does Docker update the environment variable on A?
Does Docker assign the very same port again to B?
Is A's link to B broken?
…?
Besides that, it's quite clear that this works fine if both containers are run on the same host machine. Does linking containers also work across machine boundaries?
Have you looked into the ambassador pattern?
It's ideal for this kind of situation, where you may want an app server linked to a DB server, but taking the DB server down would otherwise mean the app server has to be restarted as well.
http://docs.docker.io/en/latest/use/ambassador_pattern_linking/
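The rough idea, with made-up image names (the linked page has the real walkthrough): the app never links to the DB directly, only to a small forwarding container, so if the DB moves or restarts you re-point or restart the cheap ambassador instead of the app.

docker run -d --name db my-db-image
docker run -d --name db-ambassador --link db:db my-ambassador-image   # tiny proxy that just forwards to db
docker run -d --name app --link db-ambassador:db my-app-image         # the app still thinks it talks to "db"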
I would say: try ;).
At the moment, Docker has no control whatsoever over the process once it has started, as it execve(3)s without forking. It is not possible to update the environment, which is why the links need to be set up before the container runs and can't be edited afterwards (an example of the injected variables is shown below).
Docker will try to reassign the same port to B, but there is no guarantee, as another container could be using it in the meantime.
What do you mean by 'broken'? If you disabled networking between unlinked containers, the link should still work if you stop/start a container.
No, you can't link containers across machines yet.
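To illustrate the first point: with a link alias of service, A sees B's address only through environment variables that are set once at start-up, roughly like this (addresses and the port are just example values):

docker run --rm --link b:service my-client env | grep SERVICE_
# SERVICE_PORT=tcp://172.17.0.2:6379
# SERVICE_PORT_6379_TCP_ADDR=172.17.0.2
# SERVICE_PORT_6379_TCP_PORT=6379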

Is it feasible to control Docker from inside a container?

I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers and that I then use a special management container to manage the other containers.
The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS with the only state being a systemd config starting my management container).
The management container is used as a push target for creating new containers based on the source code I send to it (using SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages back-ups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this I forward the Docker Unix socket using the -v option when starting the management container.
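Concretely, the management container is started more or less like this (the names are placeholders, the socket path is the standard one):

docker run -d --name manager \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-management-image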
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example of use is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSE'd TCP port, itself published with -p, and proxy requests to the UNIX socket.
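A bare-bones sketch of that proxying part (authentication left out; socat is just one way to do it, not necessarily what your management container would use):

# inside the management container: expose the mounted Docker socket on TCP 2375,
# with your authentication/filtering layer in front of this in a real setup
socat TCP-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock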
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container, and access to the socket allows executing any Docker command. So, for example, if an intruder controls this container, he controls all other Docker containers.
If you expose the socket on a TCP port as suggested by jpetzzo, you will have the same problem, only worse, because now an attacker doesn't even have to compromise the container, just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TLDR;
You could do this and it will work, but then you have to think about security for a bit.
