Segregated mDNS domains from one multi-homed host

We're attempting to enable a number of mDNS-advertised services on our campus-wide wireless network, most notably AirPlay. In our case, the AirServer hosts would sit on our wired network, so we need to advertise the services manually, either with DNS-SD or mDNS, on the wireless side. We've gotten that working using static service advertisements in avahi, and it's pretty slick, but we have a scaling problem.
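For reference, a static advertisement of this kind lives as an XML file under /etc/avahi/services/; a sketch for one AirPlay-style entry (the name, port, and TXT record here are illustrative, not taken from the original setup) might look like:

```xml
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- Name shown to clients in the AirPlay picker -->
  <name replace-wildcards="yes">Room 101 AirServer</name>
  <service>
    <!-- _airplay._tcp is the AirPlay service type; port is illustrative -->
    <type>_airplay._tcp</type>
    <port>7000</port>
    <txt-record>deviceid=00:11:22:33:44:55</txt-record>
  </service>
</service-group>
```

One such file per advertised host is what makes the 150-host case unwieldy.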
We have potentially 150 AirServer hosts in a variety of classrooms around the
campus. If we were to enable all of them, the list to choose from on iPads
would be outrageously large (to say nothing of students thoroughly enjoying
taking over an AirServer from across campus when a faculty member forgets to
change the password).
What we would like to do is segregate our wireless network on a single-VLAN-per-building basis to form 27 mDNS segments, and then run avahi to advertise the services in each segment, preferably from a single, multi-homed host with access to all of the segments.
I was hoping that avahi-daemon would take a parameter in avahi-daemon.conf pointing to a unique services directory, so that I could have multiple config files, each with a different allow-interfaces clause and a pointer to a different services directory, but that doesn't appear to be a configurable option.
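For context, the per-instance interface restriction itself is configurable; each daemon's config could look something like the fragment below (the interface name is illustrative). The missing piece is a matching per-instance services directory:

```ini
# avahi-daemon.conf fragment for one building's segment
[server]
allow-interfaces=vlan101
use-ipv4=yes
use-ipv6=no
```

Running one daemon per such file is essentially what the chroot idea below would require.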
I was thinking of chroot jailing multiple copies of avahi, but that seems
really kludgy.
Am I missing some more obvious strategy to handle this without creating 27
separate hosts?
Thanks much!
JD

It is possible to achieve what you want if you build your own application for publishing the services on the interfaces you want. This method is from the GNUstep "base" framework, class GSAvahiNetServices (usable on Linux), and it is based on the Avahi API.
- (id) initWithDomain: (NSString*)domain
                 type: (NSString*)type
                 name: (NSString*)name
                 port: (NSInteger)port
         avahiIfIndex: (AvahiIfIndex)anIfIndex
        avahiProtocol: (AvahiProtocol)aProtocol
As you can see, it is possible to specify the index of the network interface you want the service to be published on. You can also limit the protocol (IPv4 or IPv6). If you want one service to be available on more than one interface, just publish it on each interface.
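On the Avahi side, anIfIndex is just the OS interface index (AVAHI_IF_UNSPEC, i.e. -1, means "all interfaces"). One way to look an index up by name, sketched in Python:

```python
import socket

# Avahi's AvahiIfIndex is the OS interface index; it can be looked up by
# interface name. "lo" is the Linux loopback device, used here purely as
# an example -- in the scenario above it would be a per-building VLAN
# interface such as "vlan101".
idx = socket.if_nametoindex("lo")
print(idx)  # a positive integer (typically 1 for loopback on Linux)
```

Passing the looked-up index when publishing restricts the advertisement to that one segment.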

Related

microservices & service discovery with random ports

My question is related to microservices & service discovery of a service which is spread between several hosts.
The setup is as follows:
2 docker hosts (host A & host B)
a Consul server (service discovery)
Let’s say that I have 2 services:
service A
service B
Service B is deployed 10 times (with random ports): 5 times on host A and 5 times on host B.
When service A communicates with service B, it sends a request to, for example, serviceB.example.com (hard-coded).
In order to get an IP and a port, service A should query the Consul server for an SRV record.
It will get 10 ip:port pairs, for which the client should apply some load-balancing logic.
Is there a simpler way to handle this without my developing a client-side resolver (+LB) library?
Is there anything like that already implemented somewhere?
Am I doing it all wrong?
There are a few options:
Load-balance on the client, as you suggest, for which you'll need to find a ready-built service-discovery library that works with SRV records and handles load balancing and circuit breaking. Another answer suggested Netflix's Ribbon, which I have not used and which will only be interesting if you are on the JVM. Note that if you are building your own, you might find it simpler to use Consul's HTTP API for discovering services rather than DNS SRV records. That way you can also "watch" for changes rather than caching the list and letting it go stale.
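A minimal sketch of that client-side approach, assuming Consul's /v1/health/service/&lt;name&gt;?passing endpoint; the HTTP call is replaced here with a canned sample response so the selection logic is self-contained, and the service name and addresses are hypothetical:

```python
import json
import random

def pick_instance(health_json):
    """Choose one healthy instance at random (naive client-side LB).

    `health_json` is the body Consul returns from
    /v1/health/service/<name>?passing; each entry carries the node
    address and the service's (possibly random) port.
    """
    entries = json.loads(health_json)
    if not entries:
        raise RuntimeError("no healthy instances")
    e = random.choice(entries)
    # Service.Address may be empty, in which case the node address applies.
    addr = e["Service"].get("Address") or e["Node"]["Address"]
    return addr, e["Service"]["Port"]

# Hypothetical response for "serviceB" deployed twice with random ports:
sample = json.dumps([
    {"Node": {"Address": "10.0.0.1"},
     "Service": {"Address": "", "Port": 31001}},
    {"Node": {"Address": "10.0.0.2"},
     "Service": {"Address": "10.0.0.2", "Port": 31002}},
])
print(pick_instance(sample))
```

In a real client you would re-fetch (or blocking-"watch") the health endpoint rather than call it once.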
If you don't want to reinvent that particular wheel, another popular and simple option is to use an HAProxy instance as the load balancer. You can integrate it with Consul via consul-template, which will automatically watch for new/failed instances of your services and update the LB config. HAProxy then provides robust load balancing and health checking with a lot of options (HTTP/TCP, different balancing algorithms, etc.). One possible setup is to have a local HAProxy instance on each Docker host and a fixed port assigned statically to each logical service (you can store it in Consul KV), so you connect to localhost:1234 for service A, for example, and localhost:2345 for service B. A local instance means you don't pay for an extra round trip to a load-balancer instance and then to the actual service instance, but this might not be an issue for you.
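If you go the consul-template route, the rendered backend might come from a template fragment like this (the service and backend names are illustrative):

```
# haproxy.cfg.ctmpl fragment -- consul-template re-renders this and
# reloads HAProxy whenever instances of "serviceB" come and go.
backend service_b
    balance roundrobin{{ range service "serviceB" }}
    server {{ .Node }}_{{ .Port }} {{ .Address }}:{{ .Port }} check{{ end }}
```

Each registered instance becomes one `server` line with its discovered address and port.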
I suggest you check out Kontena. It will solve this kind of problem out of the box. Every service will have an internal DNS name that you can use in communication between services. Kontena also has a built-in load balancer that is very easy to use, making it very easy to create and scale microservices.
There are also lots of built-in features that help with developing containerized applications, such as a private image registry, VPN access to running services, secrets management, stateful services, etc.
Kontena is an open-source project, and the code is available on GitHub.
If you are looking for a minimal setup, you can wrap the values you receive from Consul via Ribbon, Netflix's client-side load balancer.
You will find it as a module for Spring Cloud.
I didn't find an up-to-date standalone example, only this link to chrisgray's dropwizard-consul implementation that is using it in a Dropwizard context. But it might serve as a starting point for you.

Is there a way to disconnect or sandbox an instance network interface

I am looking for how I can take an existing instance and either change its network "connection" to a sandboxed network (which is easy enough to create, since each project supports up to 5 networks), or start the instance with no network interface at all and just use console access. Alternatively, what is the recommended process for doing forensic investigation into an instance that is suspected of running processes or services that should not be communicating with other instances in the project or with any external address? Thanks in advance.
You can leave instances without a public IP address. Instances created this way will not be accessible by machines outside your project.
Have a look at the documentation concerning IP addresses.
You may also need to set up a NAT gateway so that instances can communicate with outside machines.
You can use forwarding rules to discard packets from/to an instance, in combination with routing.

Should I register my multiplayer game's port with the IANA?

I have a piece of multiplayer game software which is approaching maturity and will hopefully be in a public testing phase soon. For informal private tests, I've been using a port number that I'm fond of, which falls in the User Port range, 1024-49151. I'm wondering if it will behoove me to register a port with the IANA (in this case, I can't use my current port because it's already used by a very obscure service).
I'm a bit puzzled at the fact that we are told not to utilize User Ports without registering them, and yet most major multiplayer games (e.g. Call of Duty, Team Fortress 2, Minecraft) use numbers in this range with no registration. Are games not considered to be a "significant" use of this range, warranting registration? Should I avoid this issue altogether and pick a number from the Dynamic (Ephemeral) range, 49152-65535? I just wonder why most games avoid this upper range if it obviates the need for IANA registration (fear of collision with a temporary port?). Or needn't I worry about registration at all? I'm just trying to be a responsible netizen as I prepare to release my first networking application. Thanks.
If a specific port must be opened on each individual client *, you need to register a port in the User Ports range, because of RFC 6335, section 8.1.2:
[...] application software MUST NOT assume that a specific port number in the Dynamic Ports range will always be available for communication at all times [...]
On your server (if any) you can use any port, without registration. However, I'd recommend using a port in the Dynamic Ports range there as well. Your clients could then fetch a list of servers and their current port numbers from some kind of master server (for example via HTTP on port 80 or HTTPS on port 443). That way you 1. eliminate misused User Ports and 2. can change your actual server ports at any time.
*: If your players are behind NAT, client-side ports have to be forwarded in the NAT settings; that will make your game hard to play for inexperienced users. It is probably a better idea to redirect all traffic through your server(s)...
If you use a server somewhere and you really need a client-side protocol, you can circumvent the issue of registering with the IANA by opening a random dynamic port instead and notifying the server of that port; the server then notifies all clients wanting to connect. That way you don't need any User Port at all, and thus no registration. But this makes it even harder for users behind NAT.
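The "random dynamic port" step is just binding to port 0 and reading back what the OS assigned; in Python, for example:

```python
import socket

# Bind to port 0 and let the OS pick a free ephemeral port; the client
# would then report this port to the master server out of band.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 0))
port = sock.getsockname()[1]
print(port)  # an OS-chosen port; no IANA registration needed
sock.close()
```

The exact range the OS draws from varies by platform, which is precisely why the port must be communicated rather than assumed.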

Network protocol for surviving client IP address/network changes, among other problems

Persistent connection to a mobile device is difficult. Signal conditions can change rapidly, and connectivity types can also change. For instance, I may want to stream audio to my phone as I leave my apartment (WiFi), take a bus (WiMax/LTE), transfer to the subway (intermittent CDMA, sometimes roaming on another carrier), and walk to work (WiMax/LTE and back to WiFi). On this 15-minute trip alone I use at least 4 different IP addresses/networks, and experience all sorts of connectivity issues along the way. However, there is rarely a total loss of connectivity to the Internet, and the times that the signal condition makes connectivity problematic only happen for small periods of time.
I'm looking for a protocol that allows roaming from network to network and is very tolerant of harsh network conditions, while maintaining virtual end-to-end connectivity. This protocol would enable connections between a (usually) mobile device and some sort of proxy server which would relay regular TCP/UDP connections on behalf of the mobile device, over this tolerant protocol.
This protocol would sit around layer 3, and maybe even enable creation of virtual network interfaces that are tunneled through it. Perhaps there is a VPN or SOCKS proxy solution that already meets these needs.
Does such a protocol already exist?
If not, I'm probably going to come up with one, but would rather piggy-back off of existing efforts first.
There are many efforts within the internetworking community to address precisely these "network mobility" concerns.
In particular, Mobile IP (and its IPv6 successor, Mobile IPv6) is a broad term for efforts to make IP addresses themselves portable across networks; however, I doubt these technologies have reached sufficient maturity/deployment for production use today.
To undertake such mobility without support from the network requires a means for the host to announce its new address to you in an authenticated manner; this is what the Host Identity Protocol is designed for, but it is still at the "experimental" stage of the RFC process. From the abstract of RFC 5201:
HIP allows consenting hosts to securely establish and maintain shared
IP-layer state, allowing separation of the identifier and locator
roles of IP addresses, thereby enabling continuity of communications
across IP address changes.
There are several open-source implementations that are known to interoperate. Without claiming that this is a complete list, nor vouching for any of them (they're just a few picked off a Google search for "Host Identity Protocol implementations"), there are:
OpenHIP for multiple operating systems;
HIPL for Linux;
cutehip for Java;
HIP for inter.net for *BSD/Linux.

Point to point network connection through firewalls

I would like to setup a network connection (RTP or UDP) between two computers at different locations, each of which is behind a NAT modem/firewall. I do not want any modification of the firewalls.
My working assumption is that I need a bot somewhere that both computers can reach (eg a shell account on an internet server). Each computer connects out to the bot and the bot allows the two computers to update and query status and to exchange data.
This is ok as far as it goes, but it means that all data travels via the bot. Is there a way I can connect the two computers without the bot, or failing that, allow the bot to drop out of the data exchange once a connection has been setup? My feeling is that there is no way to do this, but my TCP/IP is a bit rusty...
If you assume nothing about the NAT/firewall, you are correct.
Hole punching, for example, will not work with overloaded NAT (PAT) as far as I know, because the source port is randomized by the NAT device, and the mapping matches on both the destination public address and the translated source port that was chosen.
UPnP may work, but again, you have to assume that it exists and is enabled on the NAT device.
As I see it, you have only two options if you want to be generic:
1. Configure the NAT.
2. Use a proxy (the bot you mentioned).
Skype, for example, uses the second, but does so in a distributed manner by using every Skype client as a potential proxy (probably only if the client detects that it is not behind a NAT or not limited by one).
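For illustration, the proxy option can be sketched end to end with plain UDP sockets on localhost (addresses and the message are illustrative; a real NAT only ever sees the outbound packets each peer sends to the relay):

```python
import socket

# Sketch of the proxy/"bot" option: both peers send an outbound datagram
# to the relay (which creates a NAT mapping), and the relay then forwards
# each peer's data to the other.
relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
relay.bind(("127.0.0.1", 0))
relay.settimeout(2)
relay_addr = relay.getsockname()

peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for p in (peer_a, peer_b):
    p.settimeout(2)
    p.sendto(b"REGISTER", relay_addr)  # outbound => traverses NAT

# The relay learns each peer's (translated) address from registration.
peers = set()
for _ in range(2):
    _, addr = relay.recvfrom(1024)
    peers.add(addr)

# Peer A sends application data; the relay forwards it to the other peer.
peer_a.sendto(b"hello via relay", relay_addr)
data, src = relay.recvfrom(1024)
relay.sendto(data, (peers - {src}).pop())
msg, _ = peer_b.recvfrom(1024)
print(msg.decode())
```

The cost, as the question notes, is that all data transits the relay; dropping the relay out mid-connection is exactly what hole punching would require, and that fails on the NATs described above.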
