RADIUS is ignoring request to authentication address (freeradius-server in Docker) - freeradius

I am using the freeradius-server Docker image.
I execute the following command:
radtest bob test 127.0.0.1 0 testing123
Output of radtest:
Sent Access-Request Id 69 from 0.0.0.0:59466 to 127.0.0.1:1812 length 73
User-Name = "bob"
User-Password = "test"
NAS-IP-Address = 172.17.0.2
NAS-Port = 0
Message-Authenticator = 0x00
Cleartext-Password = "test"
Sent Access-Request Id 69 from 0.0.0.0:59466 to 127.0.0.1:1812 length 73
User-Name = "bob"
User-Password = "test"
NAS-IP-Address = 172.17.0.2
NAS-Port = 0
Message-Authenticator = 0x00
Cleartext-Password = "test"
Sent Access-Request Id 69 from 0.0.0.0:59466 to 127.0.0.1:1812 length 73
User-Name = "bob"
User-Password = "test"
NAS-IP-Address = 172.17.0.2
NAS-Port = 0
Message-Authenticator = 0x00
Cleartext-Password = "test"
(0) No reply from server for ID 69 socket 3
The client entry in /etc/raddb/clients.conf:
client dockernet {
ipaddr = 172.17.0.0/16
secret = testing123
}
The entry in /etc/raddb/mods-config/files/authorize:
bob Cleartext-Password := "test"
Output of the server:
Ignoring request to auth address * port 1812 bound to server default from unknown client 127.0.0.1 port 59466 proto udp
Ready to process requests
Ignoring request to auth address * port 1812 bound to server default from unknown client 127.0.0.1 port 59466 proto udp
Ready to process requests
Ignoring request to auth address * port 1812 bound to server default from unknown client 127.0.0.1 port 59466 proto udp
Ready to process requests
Can anyone help me? What did I do wrong?
Thank you in advance for your help.
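Note: the debug output shows the request arriving from 127.0.0.1, while clients.conf only defines the 172.17.0.0/16 network, so the server treats 127.0.0.1 as an unknown client. A minimal clients.conf entry that would match the loopback source address might look like this (a sketch only, following the syntax of the existing dockernet block; adjust the secret to whatever radtest is given):
client localhost_test {
ipaddr = 127.0.0.1
secret = testing123
}
clients.conf is read at startup, so the server has to be restarted (or rerun in debug mode with radiusd -X) after adding the client.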

Related

freeradius docker: "Ignoring request to auth address" when built through Jenkins

I am using the FreeRADIUS Docker image for AAA server authentication. For integration tests I am using docker-compose, which contains FreeRADIUS and other services as well. When I build, it creates the containers, tests the authentication, and then stops the containers.
From one Docker container I am sending a request to the FreeRADIUS container for authentication. This works fine on my local machine, but when I build through Jenkins I get:
Ignoring request to auth address * port 1812 bound to server default from unknown client 192.168.96.1 port 36096 proto udp
Below is my clients.conf file:
client dockernet {
ipaddr = x.x.0.0
secret = testing123
netmask = 24
shortname = dockernet
require_message_authenticator = no
}
client jenkins {
ipaddr = 192.168.0.0
secret = testing123
netmask = 24
shortname = jenkins
require_message_authenticator = no
}
Your client subnet/netmask definition is incorrect.
192.168.0.0/24 will match addresses in the subnet 192.168.0.x (192.168.0.0 to 192.168.0.255), but the request is coming from 192.168.96.1.
Either change the jenkins client definition to 192.168.96.0 (leaving the netmask as 24), or use netmask = 16 which will include all addresses from 192.168.0.0 to 192.168.255.255.
I would recommend limiting the range to the exact IP, or as small a range as possible, and therefore suggest the former.
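For example, the jenkins client block could be adjusted like this (a sketch based on the suggestion above, keeping the other settings from the original):
client jenkins {
ipaddr = 192.168.96.0
secret = testing123
netmask = 24
shortname = jenkins
require_message_authenticator = no
}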

Unable to ping docker host but the world is reachable

I am able to ping the rest of the world from the container, but not the host on which the container is running. I am sure someone has encountered this issue before.
See the details below:
Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-30-generic x86_64)
Container is using the "bridge" network
Docker version 18.06.1-ce, build e68fc7a
IP address for eth0: 135.25.87.162
IP address for eth1: 192.168.122.55
IP address for eth2: 135.21.171.209
IP address for docker0: 172.17.42.1
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 135.21.248.1 0.0.0.0 UG 0 0 0 eth1
135.21.171.192 * 255.255.255.192 U 0 0 0 eth2
135.21.248.0 * 255.255.255.0 U 0 0 0 eth1
135.25.87.128 * 255.255.255.192 U 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.122.0 * 255.255.255.192 U 0 0 0 eth1
# ping commands from the container
# ping google.com
PING google.com (64.233.177.113): 56 data bytes
64 bytes from 64.233.177.113: icmp_seq=0 ttl=29 time=51.827 ms
64 bytes from 64.233.177.113: icmp_seq=1 ttl=29 time=50.184 ms
64 bytes from 64.233.177.113: icmp_seq=2 ttl=29 time=50.991 ms
^C--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 50.184/51.001/51.827/0.671 ms
# ping 135.25.87.162
PING 135.25.87.162 (135.25.87.162): 56 data bytes
^C--- 135.25.87.162 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
root@9ed17e4c2ee3:/opt/app/tomcat#

Which IPv4 address is used while running the "ping hostname" command

If I have a hostname that has several IPv4 addresses assigned,
which IPv4 address will be used by the ping request to resolve the hostname [for example, when running "ping Some-Pc"]?
Run the command 'route' in Linux and you will see the routing tables. Based on the destination address and the routing table you should be able to determine the interface being used to send the ICMP messages and thus the src IP address.
For example, given this routing table in Linux:
[mynode]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.56.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
192.168.124.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
If you send a ping to address 10.0.2.45, it will use enp0s3 and the corresponding IP address as src address.
If you send a ping to address 172.17.0.0, it will be sent from NIC docker0 with the corresponding src IP address.
With ifconfig in Linux (ipconfig in Windows) you can see the IP address assigned to each interface.
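As a quick check of which source address the kernel will pick for a given destination, iproute2's "ip route get" can be used (a sketch; the destination addresses are chosen from the subnets in the example table above, and the exact output format varies by distribution):
ip route get 10.0.2.45
# prints the selected interface (enp0s3 here) and a "src" field with the chosen source IP
ip route get 172.17.0.5
# would show dev docker0 and the source address assigned to docker0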

Docker Container to Container communication with IPv6 only

I am running two VMs on OpenStack Mirantis. For simplicity, let's call them host-1 and host-2. I am unable to communicate either from container to container across hosts or from container to the public Internet. On each host I have installed Docker 1.12.3 and run the following:
tee Dockerfile <<-'EOF'
FROM centos
RUN yum -y install net-tools bind-utils iputils*
EOF
Then I build the image:
docker build -t crazy:3 .
On host-1:
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
On host-2:
dockerd --ipv6 --fixed-cidr-v6="2001:1b76:2400:e2::2/64" &
docker run -i -t --entrypoint /bin/bash crazy:3
ping6 -c3 google.com
ifconfig
Host-1 output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:2 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 18 bytes 1663 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53 bytes 4604 (4.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Host-2 output:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 2001:1b76:2400:e2:0:242:ac11:3 prefixlen 64 scopeid 0x0<global>
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 8 bytes 808 (808.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 508 (508.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then again, on host-1:
ping6 2001:1b76:2400:e2:0:242:ac11:3
On host-2:
ping6 2001:1b76:2400:e2:0:242:ac11:2
Both give the same output, i.e.:
PING 2001:1b76:2400:e2:0:242:ac11:3(2001:1b76:2400:e2:0:242:ac11:3) 56 data bytes
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=1 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=2 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=3 Destination unreachable: Address unreachable
From 2001:1b76:2400:e2:0:242:ac11:2 icmp_seq=4 Destination unreachable: Address unreachable
Both hosts' ip route output is the same, i.e.:
2001:1b76:2400:e2:f816:3eff:fe69:c2f2 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256 expires 28133sec
2001:1b76:2400:e2::/64 dev docker0 proto kernel metric 256
2001:1b76:2400:e2::/64 dev docker0 metric 1024
fe80::/64 dev eth0 proto kernel metric 256
fe80::/64 dev docker0 proto kernel metric 256
Both containers' ip route output is the same, i.e.:
2001:1b76:2400:e2::1 dev eth0 metric 0
cache
2001:1b76:2400:e2::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via 2001:1b76:2400:e2::1 dev eth0 metric 1024
Both hosts' IP forwarding settings are the same, i.e.:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
Both containers' IP forwarding settings are the same, i.e.:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 0

Responses from kubernetes containers getting lost

I have installed Kubernetes on OpenStack. The setup has one master and one node on CoreOS.
I have a pod hosting a SIP application on UDP port 5060, and I have created a NodePort service on 5060.
The spec:
"spec": {
"ports": [
{
"port": 5061,
"protocol": "UDP",
"targetPort": 5060,
"nodeport": 5060,
"name": "sipu"
}
],
"selector": {
"app": "opensips"
},
"type": "NodePort"
}
IPs:
Public IP of node: 192.168.144.29
Private IP of node: 10.0.1.215
IP of the container: 10.244.2.4
docker0 interface: 10.244.2.1
Now, the problem.
I send a SIP request to the application and expect a 200 OK response, which I am not getting.
To trace this, I installed tcpdump on the container and on the node.
On the container I can see both the request and the response packets captured, while on the node itself I just see the request packet. I don't understand where the response packet is getting lost.
Below is the tcpdump from the container:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1514 bytes
06:12:20.391171 IP (tos 0x0, ttl 64, id 13372, offset 0, flags [DF], proto UDP (17), length 1010)
10.244.2.1.50867 > 10.244.2.4.5060: [bad udp cksum 0x1ddc -> 0xe738!] SIP, length: 982
PUBLISH sip:service-1@opensipstest.org SIP/2.0
Via: SIP/2.0/UDP 192.168.144.10:5060;branch=z9hG4bK-5672-1-0
Max-Forwards: 20
From: service <sip:service-1@opensipstest.org>;tag=1
To: sut <sip:service-1@opensipstest.org>
Call-ID: 1-5672@192.168.144.10
CSeq: 1 PUBLISH
Expires: 3600
Event: presence
Content-Length: 607
User-Agent: Sipp v1.1-TLS, version 20061124
<?xml version="1.0"?>
<deleted presence xml to reduce size>
06:12:20.401126 IP (tos 0x10, ttl 64, id 11888, offset 0, flags [DF], proto UDP (17), length 427)
10.244.2.4.5060 > 10.244.2.1.5060: [bad udp cksum 0x1b95 -> 0xeddc!] SIP, length: 399
SIP/2.0 200 OK
Via: SIP/2.0/UDP 192.168.144.10:5060;received=10.244.2.1;branch=z9hG4bK-5672-1-0
From: service <sip:service-1@opensipstest.org>;tag=1
To: sut <sip:service-1@opensipstest.org>;tag=332ff20b76febdd3c55f313f3efc6bb8-ca08
Call-ID: 1-5672@192.168.144.10
CSeq: 1 PUBLISH
Expires: 3600
SIP-ETag: a.1450478491.39.2.0
Server: OpenSIPS (1.8.4-notls (x86_64/linux))
Content-Length: 0
06:12:20.401364 IP (tos 0x0, ttl 64, id 13374, offset 0, flags [DF], proto UDP (17), length 427)
10.244.2.1.58836 > 10.244.2.4.5060: [bad udp cksum 0x1b95 -> 0x1bcc!] SIP, length: 399
SIP/2.0 200 OK
Via: SIP/2.0/UDP 192.168.144.10:5060;received=10.244.2.1;branch=z9hG4bK-5672-1-0
From: service <sip:service-1@opensipstest.org>;tag=1
To: sut <sip:service-1@opensipstest.org>;tag=332ff20b76febdd3c55f313f3efc6bb8-ca08
Call-ID: 1-5672@192.168.144.10
CSeq: 1 PUBLISH
Expires: 3600
SIP-ETag: a.1450478491.39.2.0
Server: OpenSIPS (1.8.4-notls (x86_64/linux))
Content-Length: 0
And the tcpdump from the node:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1514 bytes
06:12:20.390772 IP (tos 0x0, ttl 127, id 20196, offset 0, flags [none], proto UDP (17), length 1010)
192.168.144.10.5060 > 10.0.1.215.5060: [udp sum ok] SIP, length: 982
PUBLISH sip:service-1@opensipstest.org SIP/2.0
Via: SIP/2.0/UDP 192.168.144.10:5060;branch=z9hG4bK-5672-1-0
Max-Forwards: 20
From: service <sip:service-1@opensipstest.org>;tag=1
To: sut <sip:service-1@opensipstest.org>
Call-ID: 1-5672@192.168.144.10
CSeq: 1 PUBLISH
Expires: 3600
Event: presence
Content-Length: 607
User-Agent: Sipp v1.1-TLS, version 20061124
<?xml version="1.0"?>
<presence xmlns="urn:ietf:params:xml:ns:pidf" />
NodePort rules from iptables:
Chain KUBE-NODEPORT-CONTAINER (1 references)
pkts bytes target prot opt in out source destination
12 8622 REDIRECT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/opensips:sipu */ udp dpt:5060 redir ports 40482
3 95 REDIRECT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/my-udp-service: */ udp dpt:6000 redir ports 47497
Chain KUBE-NODEPORT-HOST (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/opensips:sipu */ udp dpt:5060 to:10.0.1.215:40482
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/my-udp-service: */ udp dpt:6000 to:10.0.1.215:47497
I am happy to share more info if required. I have tried to dig into this as far as I can, but I don't know what to look for anymore, so I am requesting some help here.
EDIT
I did the same test on TCP. On TCP, it works as expected.
Thanks
5060 is outside the default Service Node Port Range:
http://kubernetes.github.io/docs/user-guide/services/#type-nodeport
Creation of the service should have failed.
You can try a different port, or create a cluster using a different range, by specifying --service-node-port-range on kube-apiserver.
https://github.com/kubernetes/kubernetes/blob/43792754d89feb80edd846ba3b40297f2a105e14/cmd/kube-apiserver/app/options/options.go#L232
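For illustration, a sketch of how the range could be widened when starting the API server (the flag is the one linked above; the exact value, and how the flag is passed, depend on how the cluster was deployed; the default range is 30000-32767):
kube-apiserver --service-node-port-range=1024-32767
# other kube-apiserver flags omitted
Alternatively, pick a nodePort inside the default range (e.g. 30060) and keep the API server defaults.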
You should also check whether the response can be seen from other nodes. There are issues with "hairpin mode" when communicating within the same node.
Also, when reporting problems, please specify the Kubernetes release.
