The new version of Docker (1.10) includes an embedded DNS server that serves alias information for other containers on the same network. Previously there were hosts file entries for resolving linked containers (or containers on the same network). I am wondering whether it is possible to use this embedded DNS server on an overlay network. I have looked in the documentation (and in issues) and cannot find any information about this.
So the way the new embedded DNS "server" works is that it isn't a formal server. It's just an embedded listener for traffic to 127.0.0.11:53 (UDP, of course). When Docker sees that query traffic on the container's network interface, it steps in and replies with any answers it has for the query. The documentation lists some options you can set to affect how this DNS server behaves, but since it only listens for query traffic on that localhost address, there is no way to expose it to an overlay network in the way you are thinking. However, this seems to be a moving target, and I have seen this question before on IRC, so it may one day be the case that this embedded DNS server at least becomes pluggable, or possibly exposable in the way you would like.
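A quick way to see this in action (the container names "web" and "db" below are just examples, and dig may need to be installed in the image):

    # The embedded resolver shows up in the container's resolv.conf
    $ docker exec web cat /etc/resolv.conf
    nameserver 127.0.0.11

    # Query it directly for another container on the same user-defined network
    $ docker exec web dig +short @127.0.0.11 db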
I have several Docker containers with some web applications running via Docker Compose. One of the containers is a custom DNS server with Bind and Webmin installed. Webmin gives a nice web UI that lets me update the Bind DNS configuration without directly modifying the files or SSHing into the container. I have Docker set up to look up DNS in this order (a configuration sketch follows the list):
my Docker DNS server
my company's internal DNS server
Google's DNS server
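For reference, that resolver order would typically be expressed in /etc/docker/daemon.json (or as a dns: list in docker-compose); the first two addresses below are placeholders for my servers:

    {
      "dns": ["172.20.0.53", "10.0.0.2", "8.8.8.8"]
    }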
I have one master zone file for the top-level domain "example.com" defined in DNS server 1. I added an address for server1.example.com and DNS resolves it correctly. I want other subdomains to be resolved by my company's internal DNS server.
server1.example.com - resolves correctly
server2.example.com - this host is not referenced in the zone file for the Docker DNS server. I would like to somehow delegate this to my company's DNS server (server 2)
The goal is that I should be able to do software development for web applications and deploy them on my Docker containers. The code makes internal calls to other "example.com" hosts. I want some of those calls to be directed back to other Docker containers rather than the real servers, because I am developing code on both and want to test it end to end.
I don't want to (and can't) modify my company's DNS configuration. I am not an expert in Bind or DNS setup and am looking for the simplest solution.
What configuration can achieve this?
I guess the workaround is to use the fully qualified name when creating the zone file. Instead of creating a master zone example.com and listing server1 inside that zone, I am creating a master zone named server1.example.com. It means I have to create a zone file for every server, but I guess that's okay to manage with a smaller number of hosts. server2.example.com then doesn't fall inside any zone and gets resolved using the next DNS server in the chain.
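To illustrate the workaround (the file paths, IP address and SOA values below are just placeholders), each host gets its own tiny zone:

    // named.conf.local
    zone "server1.example.com" {
        type master;
        file "/etc/bind/db.server1.example.com";
    };

    ; /etc/bind/db.server1.example.com
    $TTL 300
    @   IN  SOA ns1.example.com. admin.example.com. ( 1 3600 600 86400 300 )
        IN  NS  ns1.example.com.
    @   IN  A   172.20.0.10

Because there is no zone for example.com itself, server2.example.com is not covered by this server and, as described above, gets resolved by the next DNS server in the chain.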
I have a server running Ubuntu 20.04 with nginx, Mosquitto, Node-RED and Docker; let's call the website http://mywebsite.com. The problem I am facing is that I have created a client (let's call it client1) in Docker, so the URL will be http://mywebsite.com/client1,
and I want to establish an MQTT connection via Mosquitto, sending the data on the topic test.
The problem is that the MQTT node in Node-RED works when I give it the IP address of my Mosquitto container,
but if I replace the IP address 192.144.0.5 with mywebsite.com/client1, I can't connect to Mosquitto and can't send or receive any form of data.
Any idea how to solve this problem?
OK, you are going to have several problems here.
You cannot do path-based proxying with MQTT. If you want multiple MQTT brokers (one per client) bound to a single public-facing domain/IP address, then they are all going to have to run on separate ports (other than the default 1883).
Nginx can do MQTT protocol proxying (e.g. like this), so you can use it to expose the different ports and forward them to the separate instances of Mosquitto. But even if you had different hostnames (all pointing at the same IP address), nginx has no way to know which hostname was used, because MQTT has no equivalent of the HTTP Host header to direct it. If you use MQTT with TLS you may be able to get it to work with SNI; I had not seen anybody do that at the time (possible docs for SNI-based routing here), but it does work: there is an explanation of how to do it here.
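A minimal sketch of the port-per-broker approach using nginx's stream module (the listen ports and container names are just examples):

    # nginx.conf
    stream {
        server {
            listen 1884;                        # public port for client1's broker
            proxy_pass mosquitto_client1:1883;  # Mosquitto container for client1
        }
        server {
            listen 1885;                        # public port for client2's broker
            proxy_pass mosquitto_client2:1883;
        }
    }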
If you use MQTT over WebSockets then you should be able to use hostname-based routing.
Path-based proxying for Node-RED currently doesn't work properly if you enable admin authentication, because the admin auth tokens are stored in browser local storage and are scoped only to the hostname, not the hostname + path. This means a client will only ever be able to log into one instance at a time.
You can work around this by using host-based proxying, e.g. http://client1.mywebsite.com
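A sketch of that host-based proxying in nginx (the server name, upstream container name and port are examples); the WebSocket upgrade headers also cover MQTT over WebSockets:

    server {
        listen 80;
        server_name client1.mywebsite.com;

        location / {
            proxy_pass http://node_red_client1:1880;
            # WebSocket upgrade, needed by the Node-RED editor and MQTT over WebSockets
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }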
A fix for this is on the backlog for Node-RED, probably (no promises) to be looked at after version 1.2.0 ships.
I am using active Zabbix agents that auto-register themselves to the Zabbix server.
Everything goes well until DHCP changes the host's IP; the host then becomes unavailable in Zabbix. Looking at the host in the hosts list in the Zabbix frontend, I can see that it still shows the old IP.
Is there any way to solve this?
This means that you are actually not using active items. I'd suggest cloning your current template and changing items, LLD rules and LLD prototypes to "Zabbix agent (active)" - then agent IP address changes will not be a concern.
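With active items the agent initiates the connection to the server, so the host's (changing) interface IP is no longer used for data collection. For reference, the agent side looks roughly like this in zabbix_agentd.conf (the server address, hostname and metadata values are placeholders):

    ServerActive=zabbix.example.com
    Hostname=myhost01
    # HostMetadata is what the auto-registration action on the server matches against
    HostMetadata=linux-autoreg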
I'm looking for a way to change what the reverse DNS resolves to in Docker.
If I set my container's FQDN to foo.bar I expect a reverse DNS lookup for its IP to resolve to foo.bar, but it always resolves to <container_name>.<network_name>.
Is there a way I can change that?
Docker's DNS support is designed to support container discovery within a cluster. It's not an application traffic management solution, so features are limited.
For example, it's possible to configure a DNS wildcard that resolves "*.foo.bar" URLs to a server running a container-savvy load balancer (a load balancer that knows where all the containers associated with each application are located and running).
That load balancer can then route traffic based on the incoming "Host" HTTP header:
"app1.foo.bar" -> "App1 Container1", "App1 Container2"
"app2.foo.bar" ->
"App2 Container1", "App2 Container2", "App2 Container3"
For a practical implementation, take a look at how Kubernetes does load balancing (this is an advanced topic):
http://kubernetes.io/docs/user-guide/ingress/
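As a rough sketch of what that host-based routing looks like as a Kubernetes Ingress (the names, services and ports are illustrative, not taken from the page above):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: foo-bar
    spec:
      rules:
      - host: app1.foo.bar
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1      # Service fronting the App1 containers
                port:
                  number: 80
      - host: app2.foo.bar
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2      # Service fronting the App2 containers
                port:
                  number: 80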
I'm running Windows 7 as the host and Ubuntu 11.04 as the guest.
What would be the best way to access a web server on the guest from the host (and vice versa) via a defined URL, e.g. http://myvirtualbox and http://myhost?
For now I have configured a network bridge, but the guest gets a different IP assigned every time. A simple solution would be to assign a static IP and configure name resolution locally on each machine, but maybe there is another way (an internal network, perhaps?).
You can modify the hosts file on both machines to map the hostnames to the machines' IP addresses (and change their IP addresses to static ones).
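For example (the addresses are placeholders for whatever static IPs you assign), in C:\Windows\System32\drivers\etc\hosts on the Windows host and /etc/hosts on the Ubuntu guest:

    192.168.56.10   myvirtualbox
    192.168.56.1    myhost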
Or, for a more flexible option (more hosts, faster integration of new machines): set up a DNS service, configure the machines to work with it, then add the DNS server's IP as a name server in each machine's network adapter settings.
That will be a more flexible, maintainable and scalable solution.
From the looks of it though, if you want a 10-minute fix, go for the first option. There are lots of tutorials on it.