MDNS subdomains with Avahi - ubuntu-9.04

I have a machine running avahi-daemon on Ubuntu Jaunty. It currently responds to requests for itself at hostname.local, but I would like it to run a webapp that ends up publishing mDNS addresses for other hosts which aren't on the local network. I would like these to be in a subdomain of .local, if possible.
Right now, if I edit the /etc/avahi/hosts file and put in an address -> host mapping, it only works if there's no subdomain component. In other words, the FQDN foo.bar.local won't resolve from other hosts, but bar.local will. Is this a limitation of the mDNS clients, or of the server? And can it be fixed?

That's a limitation of the Avahi daemon's static host functionality. You need to use another method that supports registering names with more than a single label, such as this Python script.
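As a quick way to experiment with multi-label names without writing a script, the avahi-publish tool from the avahi-utils package may also work; this is only a sketch, the name and address below are placeholders, and the record is published only while the command keeps running:

avahi-publish -a -R foo.bar.local 192.168.1.50   # -a publishes an address record, -R skips the reverse entry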

Related

How to set up subdomains with traefik and docker in a local network?

I have a raspberry pi plugged into my home router, running Ubuntu 20.04 and Docker.
I gave it a fixed IP, and its hostname on the local network is raspy.local. I can access Docker containers via raspy.local:<portnumber>.
What I would like is for the Docker containers to be reachable via subdomains, like influxdb.raspy.local or traefik.raspy.local, etc. The only solution that worked was to run Traefik as a Docker container, set Host(`<subdomain>.raspy.local`) rules, and edit the /etc/hosts file on my laptop so that the subdomains point to the IP address of the Raspberry Pi.
This is a bad solution because I have to edit the /etc/hosts file every time I make a change, and anyway this cannot be done on all the devices on my network (e.g. I cannot do it on smartphones).
What is the proper way to do it?
(I have found other similar questions here on SO, but I didn't find one with information on how to do this within a local network)
You need to set up a local DNS server:
1. Set a static IP on your RPi and install PiHole on it.
2. In the PiHole DNS configuration, set an A record for each of the subdomains you want, pointing to the IP of the device running Traefik (the same RPi in your case), e.g. an A record for subdomain.raspy.local -> 192.168.0.xxx.
3. Set your main router's DNS server address to the address of your PiHole server.
Now every device connected to the router will be able to reach the Traefik server using those domain names.
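For step 2, the records can go in through the PiHole web UI (Local DNS -> DNS Records) or, depending on the PiHole version, straight into its config files; the host names below are the ones from the question and the IP is a placeholder for your RPi's static address:

# one "IP name" pair per subdomain, typically in /etc/pihole/custom.list
192.168.0.50 influxdb.raspy.local
192.168.0.50 traefik.raspy.local

# or, as a dnsmasq wildcard covering every *.raspy.local in one line,
# in a drop-in file such as /etc/dnsmasq.d/02-raspy.conf:
address=/raspy.local/192.168.0.50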

Docker server networking - reject incoming connections but allow outgoing

We use Docker containers to deploy multiple small applications on our servers that are reachable on the public internet. Some of the services need to communicate with each other but are deployed on different servers due to different hardware requirements (the servers are on different networks and have different IPs).
Q: What would be the best way to configure blocking of incoming requests to SERVER:PORT except for some allowed IPs and at the same time allow all outgoing connections of the Docker containers?
Two major things we played with and tried out to get them working:
Bind Docker port mappings to 127.0.0.1 and route all traffic through an nginx proxy. This is really config-heavy, and some infrastructure components can't be proxied via HTTP(S), so we need to add them to an nginx.conf stream server block and therefore open a port on the server (which is then accessible by everyone).
Use iptables to restrict access to the published ports, with something like: iptables -I DOCKER-USER -p tcp -i eth0 -j DROP. But this also has two major drawbacks. First, it seems quite hard to allow multiple IP addresses in such a construct, and second, this approach seems to block our Docker containers' outgoing connections (to the internet) as well, e.g. after we activated it, a ping google.com from within a Docker container was rejected.
Not sure I get this. In terms of design, what is available to the external world sits in a DMZ or is published through an API gateway.
Your Docker Swarm/Kubernetes cluster should not be accessible directly from the internet; only the API gateway or the application in the DMZ should be.
So quite likely your Docker server should not be accessible directly at all. And even if it is, as long as you don't explicitly publish a port to the host/outside of the cluster, a service stays restricted to Docker's virtual networks, which still allow cross-container communication.
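That said, if you do stick with the iptables approach from the question, the usual pattern is to insert rules into the DOCKER-USER chain; a rough, untested sketch in which the interface, port, and allowed peer IP are placeholders:

# -I puts each rule at the top of DOCKER-USER, so insert them in reverse of the
# order you want them evaluated in
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8080 -j DROP
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8080 -s 203.0.113.10 -j ACCEPT
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

The conntrack rule is what keeps replies to the containers' outgoing connections working, which is typically what breaks when a blanket DROP is placed in that chain.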

Access Docker Container Name By DNS Subdomain

Basically, I have Docker Swarm running on a physical machine with a public IPv4 address and a registered domain, say example.com.
Now, I would like to access the running containers from the Internet using their names as a subdomain.
For instance, let's say there is a MySQL container running with the name mysqldb. I would like to be able to access it from the Internet at the following DNS name:
mysqldb.example.com
Is there any way to achieve this?
I've finally found a so-called solution.
In general it's not possible :)
However, some well-known ports can be forwarded to a container by using HTTP reverse proxies such as Nginx, HAProxy, etc.
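For an HTTP service, that could look roughly like the following nginx server block; the subdomain and upstream port are hypothetical, and note that MySQL speaks its own protocol, so it cannot be exposed this way:

server {
    listen 80;
    server_name webapp.example.com;        # hypothetical subdomain
    location / {
        proxy_pass http://127.0.0.1:8080;  # the container's published port on the host
        proxy_set_header Host $host;
    }
}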

Make docker machine available under host name in Windows

I'm trying to make a docker machine available on my Windows machine under a host name. After creating it like
docker-machine create -d virtualbox mymachine
and setting up a docker container that exposes port 80, how can I give that docker machine a host name such that I can enter "http://mymachine/" into my browser to load the website? When I change "mymachine" to the actual IP address, it works.
There is an answer to this question but I would like to achieve it without an entry in the hosts file. Is that possible?
You might want to refer to the Docker documentation:
https://docs.docker.com/engine/userguide/networking/#exposing-and-publishing-ports
You expose ports using the EXPOSE keyword in the Dockerfile or the --expose flag to docker run. Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports. Exposing ports is optional.
You publish ports using the --publish or --publish-all flag to docker run. This tells Docker which ports to open on the container's network interface. When a port is published, it is mapped to an available high-order port (higher than 30000) on the host machine, unless you specify the port to map to on the host machine at runtime. You cannot specify the port to map to on the host machine when you build the image (in the Dockerfile), because there is no way to guarantee that the port will be available on the host machine where you run the image.
I also suggest reviewing the -P flag as it differs from the -p one.
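As a quick illustration of the difference (the image name is a placeholder):

docker run -d -p 8080:80 myimage   # -p maps container port 80 to host port 8080 explicitly
docker run -d -P myimage           # -P publishes every EXPOSEd port on a random high host port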
Also, I suggest you try "Kitematic" for Windows or Mac (https://kitematic.com/). It's much simpler (but don't forget to commit after any changes!).
Now, concerning the network at your company: it has nothing to do with Docker. As long as you're using Docker locally on your computer, it won't matter what configuration your company has set. You don't even have to change any VM network config in order to expose things to your local host; it all comes by default if you're using VirtualBox (adapter 1 => NAT and adapter 2 => host-only).
Hope this is what you're looking for.
If the goal is to keep it as simple as possible for multiple developers, localhost will be your best bet. As long as the ports you're exposing and publishing are available on the host, you can just use http://localhost in the browser. If it's a port other than 80/443, just append it, like http://localhost:8080.
If you really don't want to go the /etc/hosts or localhost route, you could also purchase a domain and have it resolve to 127.0.0.1. This article lays out the details a little bit more.
Example:
dave-mbp:~ dave$ traceroute yoogle.com
traceroute to yoogle.com (127.0.0.1), 64 hops max, 52 byte packets
1 localhost (127.0.0.1) 0.742 ms 0.056 ms 0.046 ms
Alternatively, if you don't want to purchase your own domain, all developers are on the same network, and you are able to control DHCP/DNS, you can set up your own DNS server with a private record that resolves back to 127.0.0.1. It's a similar concept to the public DNS option, but a little more brittle, since you might allow your devs to work remotely, outside of a controlled network.
Connecting by hostname requires that you go through hostname to IP resolution. That's handled by the hosts file and falls back to DNS. This all happens before you ever touch the docker container, and docker machine itself does not have any external hooks to go out and configure your hosts file or DNS servers.
With newer versions of Docker on Windows, you run containers with Hyper-V, and networking automatically maps ports to localhost, so you can connect to http://localhost. This won't work with docker-machine since it's spinning up VirtualBox VMs without the localhost mapping.
If you don't want to configure your hosts file or DNS, and can't use a newer version of Docker, you're left with connecting by IP. What you can do is use a free wildcard DNS service like http://xip.io/ that maps any name you want, along with your IP address, back to that same IP address. This lets you use things like a hostname-based reverse proxy to connect to multiple containers inside of Docker behind the same port.
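For example, assuming your docker-machine VM ended up at 192.168.99.100, any name embedding that address resolves back to it:

docker-machine ip mymachine                 # prints the VM's address, e.g. 192.168.99.100
curl http://app1.192.168.99.100.xip.io/     # the name resolves to 192.168.99.100; a reverse proxy can pick the container from the Host header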
One last option is to run your docker host VM with a static IP. Docker-machine doesn't support this directly yet, so you can either rely on luck to keep the same IP from a given range, or use another tool like Vagrant to spin up the docker host VM with a static IP on the laptop. Once you have a static IP, you can modify the hosts file once, create a DNS entry for every dev, or use the same xip.io URL to access the containers each time.
If you're on a machine with Multicasting DNS (that's Bonjour on a Mac), then the approach that's worked for me is to fire up an Avahi container in the Docker Machine vbox. This lets me refer to VM services at <docker-machine-vm-name>.local. No editing /etc/hosts, no crazy networking settings.
I use different VirtualBox VMs for different projects for my work, which keeps a nice separation of concerns (it prevents port collisions, lets me blow away all the containers and images without affecting my other projects, etc.).
Using docker-compose, I just put an Avahi instance at the top of each project:
version: '2'
services:
  avahi:
    image: 'enernoclabs/avahi:latest'
    network_mode: 'host'
Then if I run a webserver in the VM with a docker container forwarding to port 80, it's just http://machine-name.local in the browser.
You can add a domain name entry in your hosts file:
X.X.X.X mymachine # Replace X.X.X.X by the IP of your docker machine
You could also set up a DNS server on your local network if your app is meant to be reachable by your coworkers at your workplace and if your Windows machine is meant to remain up as a server.
That would require making your VM accessible from the local network, though; port forwarding could then be a simple solution if your app is the only web service running on your Windows host. (Note that you could just as well set up a Linux server to avoid using docker-machine on Windows, but you would still have to set up a static IP for this server to ensure that your domain name resolution keeps working.)
You could also buy your own domain name (or get a free one) and point it at your docker-machine's IP if you don't have rights to write to your hosts file.
But these solutions may stop working after some time if the app host doesn't have a static IP and your docker-machine's IP changes. Not setting up a static IP doesn't mean it will automatically change, though; the address should persist as long as you don't erase the machine to create a new one, but that isn't guaranteed either.
Also note that if you set up a DNS server, you'd have to host it on a device with a static IP as well. Your coworkers would then have to configure their machines to use it.
I suggest nginx-proxy. This is what I use all the time. It comes in especially handy when you are running different containers that are all supposed to answer on the same port (e.g. multiple web services).
nginx-proxy runs separately from your services and listens to Docker events to update its own configuration. After you spin up your service and query the port nginx-proxy is listening on, you will be routed to your service. Therefore you either need to start nginx-proxy with the DEFAULT_HOST variable or send the desired host as the Host header with the request.
As I am running this only with plain docker, I don't know if it works with docker-machine, though.
If you go for this option, you can decide on a certain domain (e.g. .docker) to be completely resolved to localhost. This can be done either company-wide by DNS, locally with the hosts file, or via an intermediate resolver (the specific solution depends on your OS, of course). If you then try to reach http://service1.docker, nginx-proxy will route to the container that has the environment variable VIRTUAL_HOST=service1.docker set. This is really convenient, because it only needs a one-time setup and is dynamic from then on.
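A minimal sketch of that setup with the jwilder/nginx-proxy image (the service image name and the .docker domain are placeholders carried over from the example above):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=service1.docker my-web-image   # hypothetical service image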

Docker communication between apps in separate containers

I have been looking everywhere for this answer. To me it seems like an obvious question; however, the answer has eluded me.
My current setup is: I have Redis, MongoDB and two API servers on the same bridge network. The first server serves as a gateway API that does all the auth and exposes certain API calls. The backend API is the one that handles all the DB interactions and data munging. If I hit the backend (inner) API alone, I am able to see the contents (this API would not be exposed in a real production environment). However, if I make the same request from within the gateway API, I am not able to hit the backend (inner) API that is also part of the bridge network I created.
Below is a diagram of the container interactions.
I still use legacy linking, but I'm a little bit familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" inside of the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers will be able to resolve that address/port because they 'think' it's a remote machine.
Here's a way to test what I'm thinking. In your host's shell, run the bridge inspect command shown here. Copy the IP address from Containers.<inner-api-hash>.IPV4. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and then use curl or wget to see if you can hit that IP address you copied.
If my thinking is correct, you will see that you must use your inner-API node's Docker assigned IP address from the other containers. Amongst other options, you can start containers with a static IP address as shown here.
This is starting to escape the scope of my knowledge, but you can also configure container DNS (see the Docker docs page "Configure container DNS").
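The test described above would look roughly like this; the network and container names and the address are illustrative:

docker network inspect my-bridge         # note the inner API container's IPv4 address
docker exec -it gateway-api /bin/bash    # open a shell inside the gateway container
curl http://172.18.0.3:8099/             # hit the inner API on its Docker-assigned IP and the port it listens on inside its container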
