Multiple secrets per IP for radius client - freeradius

I have various routers which connect to my RADIUS system. Their IPs can change, as they sit all over the net behind NAT networks.
Each router has a unique secret, and all of them are known in a local DB.
Now I'm looking to set RADIUS up so that every router can authenticate with its own secret from whatever IP it happens to have, but FreeRADIUS won't let me configure it like this:
client 0.0.0.0/0 {
        shortname = test1
        secret = AA
}
client 0.0.0.0/0 {
        shortname = test2
        secret = AB
}
Is there a way to either disable the FreeRADIUS IP check (and match on the secret only), or to give a single client (IP range) more than one secret?
Reconfiguring all the routers to use the same secret is unfortunately not possible, as some have already been deployed with very limited network access.
I'm using:
freeradius: FreeRADIUS Version 2.2.8, for host x86_64-pc-linux-gnu, built on Apr 5 2016 at 13:40:43
Copyright (C) 1999-2015 The FreeRADIUS server project and contributors.

Why what you want doesn't work with RADIUS/FreeRADIUS alone
FreeRADIUS does the packet-to-client matching before the packet is decoded. Decoding the packet before performing the matching would make DoS attacks against the server easier, as spurious requests would cause the server to burn more CPU time.
Ignoring the secret isn't an option either. The secret is used by the access point to decode protected attributes like the MPPE key attributes for WPA2 Enterprise.
For walled gardens with PAP, the shared secret is used to encrypt the cleartext password provided by the user, so if you don't know the shared secret you can't recover the plaintext value for validation.
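To make that concrete, here is a toy sketch (mine, not FreeRADIUS code) of how a server recovers a PAP User-Password per RFC 2865 section 5.2; the shared secret is an input at every step, which is why it can't simply be ignored:

    import hashlib

    def pap_decrypt(ciphertext, secret, request_authenticator):
        # RFC 2865 5.2: each 16-byte block is XORed with MD5(secret + previous block),
        # where the "previous block" for the first round is the Request Authenticator.
        out, prev = b"", request_authenticator
        for i in range(0, len(ciphertext), 16):
            block = ciphertext[i:i + 16]
            pad = hashlib.md5(secret + prev).digest()
            out += bytes(c ^ p for c, p in zip(block, pad))
            prev = block
        return out.rstrip(b"\x00")   # strip the null padding to get the password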
In v4.0.x the plan is to send packets from unknown IP addresses down to the worker threads for processing. At this point the worker would have full knowledge of all attributes, and could bind a secret to the IP address using that additional information.
It still wouldn't let you map incoming packets using RADIUS attributes, but you're unlikely to see a conflict where two APs swap their WAN IPs... Apart from possibly in a CGN environment with a small public pool.
Available options
Use RADSEC (DTLS/TLS). Clients are validated using a certificate, and the shared secret is fixed.
Establish a tunnel (L2TP/IPSEC/PPP) between the AP and your RADIUS server. Use that to provide consistent local IP addresses.
Run the APs dual stack, hopefully bypassing the NAT. Some ISPs have begun offering IPv6 connectivity with prefix delegation. In these cases, all devices behind the CPE get publicly accessible IPv6 addresses. The prefix may change occasionally, so you'd need to use some sort of 'phone home' system, where the AP could inform your servers of its public address.
Develop your own software, which snoops on incoming RADIUS packets, and checks to see if the IP matches one in your client database. If it doesn't, it tries all the shared secrets to see if any can be used to validate the packet contents, and then updates the IP binding for that client. This might work best with your set of restrictions.
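A minimal sketch of that last idea in Python (not production code; names like KNOWN_SECRETS are placeholders for your DB). It validates Accounting-Request packets against every candidate secret using the RFC 2866 Request Authenticator; for Access-Request you would check the Message-Authenticator HMAC instead:

    import hashlib
    import socket

    KNOWN_SECRETS = {b"AA", b"AB"}   # would come from your local DB
    bindings = {}                    # source IP -> shared secret currently bound

    def matching_secret(packet):
        """Return the secret whose digest matches the Request Authenticator, if any."""
        if len(packet) < 20 or packet[0] != 4:       # 4 = Accounting-Request
            return None
        length = int.from_bytes(packet[2:4], "big")
        req_auth, attrs = packet[4:20], packet[20:length]
        for secret in KNOWN_SECRETS:
            # RFC 2866: MD5(Code + Identifier + Length + 16 zero octets + attrs + secret)
            if hashlib.md5(packet[0:4] + b"\x00" * 16 + attrs + secret).digest() == req_auth:
                return secret
        return None

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 1813))                     # RADIUS accounting port
    while True:
        data, (src_ip, _port) = sock.recvfrom(4096)
        secret = matching_secret(data)
        if secret and bindings.get(src_ip) != secret:
            bindings[src_ip] = secret
            print("re-binding", src_ip)              # update your client DB / clients.conf here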
I think there are also some patches out there for v2.x.x which allow dynamic clients to be created using decoded attributes, but v2.x.x was EOL'd a while back, and they're not officially supported.

Related

How to redirect 127.0.0.1 to the docker machine IP

I'm setting up a docker-compose environment for a web application which responds to a wildcard subdomain. In development, we simply use the great lvh.me domain (which resolves to 127.0.0.1) and it works for all subdomains without any extra configuration, e.g.:
app.lvh.me:3000
app2.lvh.me:3000
app-watherever.lvh.me:3000
My question is how to set up a custom domain (let's say app.local) and all its subdomains to resolve to the docker-machine ip.
Note: I don't want to use /etc/hosts, as that would require adding each subdomain to the file individually.
You can use one of your domains (or buy a cheap one) with a DNS hosting service.
If your docker machine's IP is dynamic, you'll need a hosting service that supports Dynamic DNS; if it's static, a regular DNS service with an A record pointing to that IP is enough.
This isn't actually particularly related to Docker.
Be aware that doing this with a '.local' domain suffix will prevent you from getting a TLS certificate from a publicly-trusted authority, and will force you to modify your environment to trust a local certificate authority, which then issues certificates for your docker-machine's hosted domain. If you have an IT department, I strongly recommend you speak to them; if they deny the request, talk to your manager and have them push for it.
The basic steps are as follows:
Set up a local DNS server that you can point your resolver to.
Configure it to forward queries that it can't itself answer to your current DNS servers.
Set up an "app.local" zone that contains a few records. (In the SOA record, your email address is written with a dot instead of the @ symbol, and with a trailing dot. All full DNS names written here must have a trailing dot. Also, swap '172.19.20.201' for whatever your docker-machine IP is):
app.local.                  60  IN  SOA  yourmachinename.app.local. your.email.address. (
                                2019102100 ; serial number: YYYYMMDDss where ss is sequence
                                3600       ; refresh interval, not important here
                                1800       ; retry interval, not important here
                                3600       ; record expiration, not important here
                                60         ; minimum record time-to-live
                                )
app.local.                  60  IN  NS   yourmachinename.app.local. ; the zone won't load without an NS record
yourmachinename.app.local.  60  IN  A    172.19.20.201
app.local.                  60  IN  A    172.19.20.201
*.app.local.                60  IN  A    172.19.20.201
Change your OS resolver to point to that local DNS server. (On Windows this operation requires elevation; on systems using the standard resolver it's configured in /etc/resolv.conf.)
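To tie the first three steps together, a BIND configuration might look roughly like this (a sketch only; the forwarder address and file path are assumptions, not part of the original answer):

    // named.conf (sketch): forward anything we can't answer ourselves,
    // and serve the app.local zone shown above
    options {
            forwarders { 192.168.1.1; };    // your current/upstream DNS server(s)
            forward only;
    };

    zone "app.local" {
            type master;
            file "/etc/bind/db.app.local";  // the records from the step above
    };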
If you need to do this on more than just your own machine, talk to your IT department. They are the ones with the power (though not necessarily the willingness, unless your manager pressures them into it) to set this zone up for all the machines which use their DNS server. Business justifications ('we need it to behave like our main app, but on a different host and domain name') trump ideological correctness (though '.local' as a TLD is reserved in the DNS RFCs), though they cannot trump protocol mandates.
(If you need a TLS certificate for this development -- and you almost certainly do! -- you should also talk to your IT department. They can get a free wildcard certificate from Let's Encrypt by verifying control of the DNS zone. Learning how that's done can take between 15 and 30 minutes, and setting up the environment can take 15 minutes to 6 hours depending on the DNS provider in use.)

Routing to same instance of Backend container that serviced initial request

We have a multiservice architecture consisting of an HAProxy front end (we can change this to another proxy if required), a MongoDB database, and multiple instances of a backend app running under Docker Swarm.
Once an initial request is routed to an instance (container) of the backend app, we would like all future requests from mobile clients to be routed to the same instance. The backend app uses TCP sockets to communicate with a VoIP PBX.
Ideally we would like to control the number of instances of the backend app using the replicas key in the docker-compose file. However, if a container died and was recreated, we would require mobile clients to continue routing to the same container. The reason for this is that each container holds state info.
Is this possible with Docker swarm? We are thinking each instance of the backend app when created gets an identifier which is then used to do some sort of path based routing.
HAProxy has what you need. This article explains it all.
As a conclusion of the article, you may choose from two solutions: IP source affinity to server, and application layer persistence. The latter is stronger/better than the former, but it requires cookies.
Here are some extracts from the article:
IP source affinity to server
An easy way to maintain affinity between a user and a server is to use the user's IP address: this is called Source IP affinity.
There are a lot of issues doing that and I'm not going to detail them right now (TODO++: another article to write).
The only thing you have to know is that source IP affinity is the last method to use when you want to "stick" a user to a server.
Well, it's true that it will solve our issue as long as the user uses a single IP address and never changes it during the session.
Application layer persistence
Since a web application server has to identify each user individually, to avoid serving content from one user to another, we may use this information, or at least try to reproduce the same behavior in the load-balancer, to maintain persistence between a user and a server.
The information we’ll use is the Session Cookie, either set by the load-balancer itself or using one set up by the application server.
What is the difference between Persistence and Affinity
Affinity: this is when we use information from a layer below the application layer to maintain a client request to a single server
Persistence: this is when we use application layer information to stick a client to a single server
Sticky session: a sticky session is a session maintained by persistence
The main advantage of persistence over affinity is that it's much more accurate, but sometimes persistence is not doable, so we must rely on affinity.
Using persistence, we mean that we’re 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
Affinity configuration in HAProxy / Aloha load-balancer
The configuration below shows how to do affinity within HAProxy, based on client IP information:
frontend ft_web
        bind 0.0.0.0:80
        default_backend bk_web

backend bk_web
        balance source
        hash-type consistent # optional
        server s1 192.168.10.11:80 check
        server s2 192.168.10.21:80 check
Session cookie setup by the Load-Balancer
The configuration below shows how to configure HAProxy / Aloha load balancer to inject a cookie in the client browser:
frontend ft_web
        bind 0.0.0.0:80
        default_backend bk_web

backend bk_web
        balance roundrobin
        cookie SERVERID insert indirect nocache
        server s1 192.168.10.11:80 check cookie s1
        server s2 192.168.10.21:80 check cookie s2

In the MQTT protocol, how does a client identify a server?

I have read that the overhead is low. What does that really mean compared to HTTP? Does it not use the IP address of the server to which a client tries to connect? If not, how does a client connect to a server?
Low overhead means that for a given message size there is very little extra information sent. It has nothing to do with broker discovery.
E.g. for an HTTP message there is a relatively large amount of HTTP header data sent before any of the message body is transmitted.
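As a rough illustration of that point (my own comparison, not from the original answer), here is a minimal QoS 0 MQTT PUBLISH built by hand next to a minimal HTTP POST carrying the same 4-byte payload:

    payload = b"21.5"
    topic = b"t"
    # MQTT PUBLISH, QoS 0: 1 byte type/flags + 1 byte remaining length
    # + 2-byte topic length + topic + payload (no packet id at QoS 0)
    mqtt_publish = (bytes([0x30, 2 + len(topic) + len(payload)])
                    + len(topic).to_bytes(2, "big") + topic + payload)
    http_post = (b"POST /t HTTP/1.1\r\n"
                 b"Host: example.com\r\n"
                 b"Content-Type: text/plain\r\n"
                 b"Content-Length: 4\r\n"
                 b"\r\n" + payload)
    print(len(mqtt_publish), "bytes of MQTT vs", len(http_post), "bytes of HTTP")  # 9 vs 88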
The client will connect to the broker via its IP address. This can either be known in advance, looked up from a host name via DNS, or looked up via a TXT record in the DNS for a given domain. You can see examples of broker discovery on the mqtt.org site.

View plain response (from HTTPS) in Wireshark

I couldn't find an exact answer.
In similar topics, people say that without the private key you can't view an HTTPS response, but I'm surprised: why is a key needed at all? For example, when the browser requests https://example.com, it can read and display its HTML output.
I want the same in Wireshark (one of my programs reads the response from https://example.com and I want to view just that page's HTML output). However, I can't understand why the private key is needed for this simple task.
If you didn't need to know the private key, an attacker wouldn't need it either – then any HTTPS traffic including login information, credit card numbers, photos, etc could be read by anybody that is on the same network as you (somebody listening to wi-fi traffic), or anywhere between you and the server (ISPs). This would be a disaster.
HTTPS (or more specifically TLS) was created for this purpose – to be able to communicate with remote parties securely without having complete trust in every single node on the way to the remote party. It relies on public-key cryptography, which makes it so that it is easy to encrypt messages with the public key, but extremely difficult (or practically impossible) to reverse the encryption without knowing the private key.
A browser which communicates with a server via HTTPS creates a link based on keys exchanged securely. Only the server and the browser know these keys, and so only the server and the browser can send and receive messages to each other.
Wireshark, even if it is running on your computer, is not running as a part of your browser and hence does not know the keys that the server and the browser agreed on. So it is impossible for it to read the traffic.
It may be somewhat surprising to know that even if somebody (Wireshark) can read all the data your browser exchanges with a server, it will not know the keys that the browser and server agreed on.
Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical channel, such as paper key lists transported by a trusted courier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher.
Diffie-Hellman key exchange, Wikipedia
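A toy version of that exchange in Python (tiny numbers purely for illustration; real TLS uses very large primes or elliptic curves):

    p, g = 23, 5                # public parameters both sides agree on
    a, b = 6, 15                # private values, never sent on the wire
    A = pow(g, a, p)            # browser sends A = g^a mod p  -> 8
    B = pow(g, b, p)            # server sends  B = g^b mod p  -> 19
    assert pow(B, a, p) == pow(A, b, p) == 2   # both ends derive the same secret
    # Wireshark can see p, g, A and B, but recovering the shared secret from them
    # means solving the discrete logarithm problem, which is infeasible at real sizes.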

Is there a difference between ftp.mysite.com and using an IP directly?

Let's say I use DNS to point ftp.mysite.com at my site's IP, and I want to give clients the credentials to use the FTP site. Can I give them the URL (ftp.mysite.com), or should I give them the IP directly (even though the URL points to that IP)?
Am I risking compatibility issues of some sort?
Do not use an IP address, always use a domain name. A domain name is less likely to change and carries more information than an IP address.
While a domain name is indeed just an alias to an IP address, a single IP address can be used for multiple domains. This is common with virtual hostings.
In this case, an IP address may not carry enough information. This is more common with HTTP, where the domain name, which is otherwise lost in domain-to-IP resolution, is provided to the HTTP server using the Host: HTTP header.
The FTP protocol has a similar mechanism, the HOST command. But as that command was introduced relatively recently, it is actually quite rare that an FTP server relies on it. Even on shared hosting, the domain is usually included in the FTP username, so that even FTP clients that do not (yet) support the HOST command can be directed to the right account.
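For example, with Python's ftplib (the hostname and account are made-up examples; real shared hosts vary), the shared-hosting convention looks like this:

    from ftplib import FTP

    ftp = FTP("ftp.mysite.com")               # the name is resolved to the IP via DNS
    ftp.login("user@mysite.com", "password")  # domain embedded in the username, so the
                                              # server knows which hosted site is meant
    print(ftp.nlst())                         # list the remote directory
    ftp.quit()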
See also Do the SSH or FTP protocols tell the server to which domain I am trying to connect?
There is no difference; you can give either your IP or your domain name. Once people have the domain, they can get your IP very easily.
The domain can be the better choice in case the IP is going to change.
Most FTP servers listen on port 21 (SFTP, which runs over SSH, uses port 22).
ftp.mysite.com simply resolves to the server's IP address; the client then connects to that address on the appropriate port.
So there is no practical difference between giving out the name and the IP, apart from which one the client types in.
