If I run Docker (Docker Desktop 2.0.0.3 on Windows 10), access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host appears from Docker.
I have tried several things, including switching the NAT from DockerNAT to Default Switch. That doesn't take effect without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also does not seem to work.
Let's start from the basics from the official documentation:
Please make sure you meet all the prerequisites and have followed all the other instructions.
You can also use this guide. It has more detailed info pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
If you are using a virtual machine, make sure that the IP you are referring to is the one of the Docker engine's host, not the one of the machine the client is running on.
Try to add tmpnginx in docker-compose.
Try deleting the pki directory in C:\ProgramData\DockerDesktop (first stop Docker, delete the dir, and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
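A minimal sketch of that step, run from an elevated PowerShell (the service name and path assume a default Docker Desktop install and may differ on your machine):
Stop-Service com.docker.service                                 # or quit Docker Desktop from the tray icon
Remove-Item -Recurse -Force "C:\ProgramData\DockerDesktop\pki"
Start-Service com.docker.service                                # the pki directory is recreated on start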
Please let me know if that helped.
I used the official Zabbix docker-compose YAML to set up a Zabbix system and found that the server itself, as a monitoring target, was not available. I searched the Internet and found other people who had run into the same problem. Someone said the agent container's IP or DNS name should be used as the server's address. I tried it and it works, but I'm confused by the agent. Does it monitor the server container, the agent container, or the host machine? If it only monitors the agent container itself, what's the purpose of it?
Does it monitor the server container, the agent container or the host machine?
The agent container.
If it only monitors the agent container itself, what's the purpose of it?
For testing, and for monitoring external stuff with custom commands. Or you can connect things from the host and monitor them, so it covers all the cases where you do not want to, or cannot, install the agent on the host.
Everybody who configures a Dockerized Zabbix installation like yours bumps into this issue, and of course finds themselves on StackExchange looking for the answers that should have been in the documentation.
The reason the Zabbix Agent in the docker-compose install you're referring to can't initially connect is that it and the server it monitors run in separate, isolated containers. Separate containers cannot talk to each other on 127.0.0.1 (localhost) addresses. And that is actually a good thing!
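A quick way to see this is to resolve the compose service name from inside the compose network; a rough sketch, where the network name zabbix_default and the service name zabbix-agent are assumptions about your compose project:
# Containers on the same compose network reach each other by service name, not via 127.0.0.1.
docker run --rm --network zabbix_default alpine ping -c 1 zabbix-agent
So in the Zabbix frontend, the agent has to be addressed by that service name (or its container IP) rather than 127.0.0.1, which is exactly what you found.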
I've reviewed the documentation in the repo you're talking about and it's sparse to say the least; it certainly could be better. But to be fair to Zabbix, their docker-compose install DOES work great when you get it running, and you can achieve pretty fair results quickly with little effort (and a bit of Googling ;-> ).
I actually found FURTHER pain connecting to containerized Zabbix Agents raised on different hosts outside of the docker-compose install you're referring to. Connectivity was being busted because the host the docker-compose install was raised on was NAT'ing out the traffic and presenting the wrong IP address. I've documented this issue HERE.
Dockerized Zabbix is a good thing; there is a purpose to it. I agree with you, though, that the documentation could be better. Stick with it!
I have a docker container that has the NAT mapping 0.0.0.0:9055->80/tcp. From what I can tell, this should mean I can go to http://localhost:9055/ on my host machine, and it will be redirected to port 80 on the running Docker image. However, when I try this it times out.
If I connect to the instance and run docker exec -i 52806ceaf166 "ipconfig" to see what the image's private IP is, I get 172.28.27.31. When I try going to http://172.28.27.31/ on the host machine, it works!
I'd like to get the NAT mapping working since that's what all the tools assume works (such as Visual Studio, Kitematic, etc.), plus I don't want to have to worry about which containers use which IPs. Is there a way to fix this? Thanks!
PS: I'm new to Docker (just installed it today) so if any more info is needed (settings, versions, etc) just let me know how to get them and I'll add them to the post.
Was looking at the Docker Image I'm using, and I think this is what I'm running into:
This is a known issue that'll be addressed in the near future. The workaround is fairly easy though.
Update: This was fixed in a recent Windows patch available through Windows Update.
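If it helps anyone checking whether the fix has landed, a quick sanity check from the host looks roughly like this (container ID and port are the ones from the question):
docker port 52806ceaf166
# expected (sketch): 80/tcp -> 0.0.0.0:9055
# then browse to http://localhost:9055/ - it should answer instead of timing out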
I want to build a "centralized" development environment using docker for my development team (4 PHP developers)
I have one big Linux server (lot of RAM, Disk, CPU) that runs the containers.
All developers have an account on this linux server (a home directory) where they put (git clone) the projects source code. Locally (on their desktop machine) they have access to their home directory via a network share.
I want all developers to be able to work at the same time on the same projects, but to view the results of their code edits in different containers (or sets of containers, for projects that use container linking).
The Docker PHP development environment by itself is not a problem. I have already tried something like that with success: http://geoffrey.io/a-php-development-environment-with-docker.html
I can use fig, with a fig.yml at the root of each project's source code, so each developer can do a fig up to launch the set of containers for a given project. I can even use a different FIG_PROJECT_NAME environment variable for each account, so I suppose two developers can fig up the same project and there will be no container name collisions.
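For example, a rough sketch (account and project names are made up):
# Two developers bringing up the same project without container name collisions:
cd ~/projects/myapp && FIG_PROJECT_NAME=myappalice fig up -d   # run as alice
cd ~/projects/myapp && FIG_PROJECT_NAME=myappbob fig up -d     # run as bob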
Does it make sense?
But beyond that, I don't really know how to dynamically give access to the running containers: when running, there will typically be a web server in a container mapped to a random port on the host. How can I set up a sort of "dynamic DNS" to point to the running container(s), accessible, let's say, through an nginx reverse proxy (the vhost creation and destruction has to be dynamic too)?
To summarize, the workflow I would like to have:
A developer sshes into the dev env (the big Linux server).
From his home directory he goes into the project directory and does a fig up.
A vhost is created in the nginx reverse proxy pointing to the running container, and a DNS entry (or /etc/hosts entry) is added matching the server_name of this previously generated vhost.
The source code is mounted into the container from a host directory (-v host/dir:container/dir), so the developer can edit any file while the container is running.
The result can be viewed by accessing the vhost, for example:
randomly-generated-id.dev.example.org
When the changes are OK, the developer can do a git commit/push.
Then the dev does a fig stop, which in turn deletes the corresponding vhost in the nginx reverse proxy and also deletes the dynamic DNS entry.
So, how would you do a setup like this? I mentioned a tool like fig, but if you have any other suggestions... just remember that I would like to keep a lightweight workflow (after all, we are a small team :))
Thanks for your help.
Does it make sense?
Yes, that setup makes sense.
I would suggest taking a look at one of these projects:
https://github.com/crosbymichael/skydock
https://github.com/progrium/registrator
https://github.com/bnfinet/docker-dns
They're all designed to create DNS entries for containers as they start. Then just point your DNS server at it and you should get a nice domain name every time someone starts up an environment (I don't think you'll need an nginx proxy). But you might also be interested in this approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
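If you do end up wanting the nginx route, the approach in that last link boils down to roughly this (a sketch; the image, env var and hostname follow that post's conventions, so treat them as assumptions):
# The proxy watches the Docker socket and regenerates its vhosts as containers come and go.
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# Any container started with VIRTUAL_HOST set gets a matching vhost automatically.
docker run -d -e VIRTUAL_HOST=myapp-alice.dev.example.org my-php-app-image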
Now, there's an even better option for you: Traefik. It will act as a reverse proxy, listening on 80/443, and will differentiate by hostname. Then, it will forward traffic dynamically, based on labels applied to the containers.
Here is a good solution to your issue:
1) Set up Traefik to listen to the Docker daemon, forwarding based on ports.
2) Ensure the frontend app servers for your devs are on the same Docker network as Traefik.
3) Set a wildcard DNS entry pointing to your server. For example: *.localdev.example.com.
4) On each container, set the hostname in that wildcard namespace. For example: jsmith-dev1.localdev.example.com. This would be specified in a Docker label such as: traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com.
This would allow developers to dynamically forward traffic by domain to their own dev containers.
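A minimal sketch of those four steps with plain docker commands (Traefik v1-style flags and labels to match the rule above; image names are placeholders):
docker network create web
docker run -d --name traefik --network web -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker
docker run -d --name jsmith-dev1 --network web \
  --label traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com \
  --label traefik.port=80 my-php-app-image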
Yes, I'm aware this is a 3 year old question. It still comes up in 2018 first on google for "centralized docker development server" so I'm going to post this anyways for the help of those currently searching.
Docker allows you to link containers by name.
I have two questions on this:
Suppose A (client) is linked to B (service), and B's port is exposed dynamically (i.e. the actual host port is determined by Docker, not given by the user). What happens if B goes down and is then restarted?
Does Docker update the environment variable on A?
Does Docker assign the very same port again to B?
Is A's link to B broken?
…?
Besides that, it's quite clear that this works fine if both containers are run on the same host machine. Does linking containers also work across machine boundaries?
Have you looked into the ambassador pattern?
It's ideal for this situation, where you may want the app server linked to the DB server, but where taking the DB server down would otherwise mean the app server needs to be restarted too.
http://docs.docker.io/en/latest/use/ambassador_pattern_linking/
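In a nutshell, you put a small forwarding container next to each side so the app only ever links to something local; a rough sketch with made-up image names and env var:
# host A - the real service, plus an ambassador that publishes it to the network
docker run -d --name db my-db-image
docker run -d --name db-ambassador --link db:db -p 5432:5432 my-ambassador-image
# host B - a local stand-in that forwards to host A; the app only links to it
docker run -d --name db-ambassador --expose 5432 -e REMOTE_HOST=host-a my-ambassador-image
docker run -d --name app --link db-ambassador:db my-app-image
The app never links directly to the remote service, so the remote side can move without the app's link changing.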
I would say: try ;).
At the moment, Docker has no control whatsoever over the process once it has started, since it calls execve(3) without forking. It is not possible to update the environment; that's why links have to be set up before the container runs and can't be edited afterwards.
Docker will try to reassign the same port to B, but there is no guarantee, as another container could be using it.
What do you mean by 'broken'? Even if you have disabled networking between unlinked containers, it should still work if you stop/start a container.
No, you can't link containers across machines yet.
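To make the first point concrete, here is roughly what a link injects into A's environment when it starts (a sketch; the image name is made up and the actual addresses will differ):
docker run -d --name b --expose 80 my-service-image
docker run --rm --link b:b alpine env | grep '^B_'
# B_PORT=tcp://172.17.0.5:80
# B_PORT_80_TCP_ADDR=172.17.0.5
# B_PORT_80_TCP_PORT=80
# ...these values are written once at start and are not refreshed if b restarts.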
I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers, and that I then use a special management container to manage the other containers.
The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS with the only state being a systemd config starting my management container).
The management container would be used as a push target for creating new containers based on the source code I send to it (over SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages backups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this I forward the Docker Unix socket using the -v option when starting the management container.
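Roughly the invocation I use today (the image name is a placeholder):
# The management container drives the host's Docker daemon through the mounted socket.
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-management-image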
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example is using the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSE'd TCP port, itself published with -p, and proxy requests to the UNIX socket.
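The plumbing for that (minus the authentication layer, which is the important part) can be as small as a socat relay; a sketch, assuming the alpine/socat image:
# Publish a TCP endpoint that forwards straight to the UNIX socket.
# A real setup must put authentication/TLS in front of this.
docker run -d --name docker-tcp-proxy -p 2375:2375 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  alpine/socat tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
A client can then talk to it with docker -H tcp://<host>:2375 ps (again, only behind proper auth).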
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container. This allows the execution of any command inside the container, so for example if an intruder controls this container, he controls all the other Docker containers too.
If you expose the socket on a TCP port as suggested by jpetzzo, you will have the same problem, only worse, because now an attacker won't even have to compromise the container, just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TLDR;
You could do this and it will work, but then you have to think about security for a bit.