FreeRADIUS Docker: "Ignoring request to auth address" when building through Jenkins

I am using the FreeRADIUS Docker image for AAA server authentication. For integration tests I am using docker-compose, which contains FreeRADIUS and other services as well. The build creates the containers, tests the authentication, and then stops the containers.
From one Docker container I am sending a request to the FreeRADIUS container for authentication. This works fine on my local machine, but when I build through Jenkins, I get:
Ignoring request to auth address * port 1812 bound to server default from unknown client 192.168.96.1 port 36096 proto udp
Below is my client.conf file:
client dockernet {
    ipaddr = x.x.0.0
    secret = testing123
    netmask = 24
    shortname = dockernet
    require_message_authenticator = no
}
client jenkins {
    ipaddr = 192.168.0.0
    secret = testing123
    netmask = 24
    shortname = jenkins
    require_message_authenticator = no
}

Your client subnet/netmask definition is incorrect.
192.168.0.0/24 will match addresses in the subnet 192.168.0.x (192.168.0.0 to 192.168.0.255), but the request is coming from 192.168.96.1.
Either change the jenkins client definition to 192.168.96.0 (leaving the netmask as 24), or use netmask = 16 which will include all addresses from 192.168.0.0 to 192.168.255.255.
I would recommend limiting the range to the exact IP, or as small a range as possible, so I suggest the former.
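A minimal corrected definition would look like this (a sketch based on the request's source address shown in the log; keep your own secret):

client jenkins {
    ipaddr = 192.168.96.0
    netmask = 24
    secret = testing123
    shortname = jenkins
    require_message_authenticator = no
}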

Related

Why doesn't cURL work from inside a Docker container?

I have 2 services in Docker (each service has its own docker-compose.yml, nginx + php-fpm).
Service #1 is on port 48801.
Service #2 is on port 48802.
My server IP is 99.99.99.44 (CentOS 8).
I make a cURL request (via PHP) from inside Service #1 to Service #2 (i.e. to 99.99.99.44:48802), but I get the following error:
Failed to connect to 99.99.99.44 port 48802 after 1017 ms: Host is unreachable
There is a problem with my server. I need help (or a direction).
Some info:
On another server these services work fine.
A request from inside a container to port 80 of this server works fine.
A request from the host (not from inside a container) to custom port 48802 works fine.
All services are available from a browser (via the custom ports).
SELinux is disabled.
Firewalld is disabled.
My ip route result:
default via 99.99.99.1 dev eno1 proto static metric 100
99.99.99.1 dev eno1 proto static scope link metric 100
172.18.0.0/16 dev br-2f405adcc89d proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-19c596fe7618 proto kernel scope link src 172.19.0.1
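Since the containers sit on the 172.18.0.0/16 and 172.19.0.0/16 bridges shown above, one starting point is to check whether hairpin traffic from those bridges back to the host's public IP is being dropped in the filter or nat tables (an assumption about the cause, not a confirmed diagnosis):

iptables -L FORWARD -n -v
iptables -t nat -L POSTROUTING -n -v

If the FORWARD chain's policy is DROP and Docker's own rules are missing (which can happen after a firewall restart), restarting the Docker daemon usually reinstates them.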

Strongswan IPSEC IKE with Docker Network Subnet

I would like to set up strongSwan on my Docker host in order to allow containers on the leftsubnet, which is a Docker network subnet, to communicate with my rightsubnet through the IPsec tunnel.
10.0.10.0/24, which is my leftsubnet on the Docker host, was created using:
docker network create --subnet 10.0.10.0/24
IPsec IKE configuration on the Docker host:
conn VPN-DOCKERHOST-REMOTE
    authby=secret               # this specifies how the connection is authenticated
    auto=start                  # start the connection by default
    type=tunnel                 # the type of connection
    left=1.1.1.1                # this is the public IP address of server MAESTRIA
    leftsubnet=10.0.10.0/24     # this is the subnet/private IP of server MAESTRIA
    right=2.2.2.2               # this is the public IP address of server RESAMUT/remote server
    rightsubnet=10.1.1.0/24     # this is the subnet/private IP of server RESAMUT
    ike=aes128-sha256-modp3072  # Internet Key Exchange, type of encryption
    keyexchange=ikev2           # Internet Key Exchange version
    ikelifetime=28800s          # time before re-authentication of keys
    esp=aes128-sha256           # Encapsulating Security Payload suite of protocols
IPsec IKE is up between my Docker host and the remote server, but I can't ping from my containers to the remote subnet.
I think traffic from my containers that matches the remote subnet is routed outside of the tunnel because of iptables or something like that, but I can't figure out the problem.
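A common cause in this setup (an assumption, since the question doesn't show the nat table): Docker adds a MASQUERADE rule for each user-defined network, so packets leaving 10.0.10.0/24 are SNATed to the host address before the IPsec policy is evaluated and no longer match leftsubnet. A sketch of an exemption rule that skips SNAT for tunnel-bound traffic:

iptables -t nat -I POSTROUTING 1 -s 10.0.10.0/24 -d 10.1.1.0/24 -j ACCEPT

Checking iptables -t nat -L POSTROUTING -n -v before and after would confirm whether the MASQUERADE rule was catching this traffic.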

Change Nomad bind port

In HashiCorp Nomad, is it possible to specify the bind port for a Server and Client (e.g., from 464{6,7,8} to 600{6,7,8})?
The addresses stanza does not allow port specification, and neither does the -bind switch.
The advertise stanza does not change the port on which Nomad binds.
The -bind switch allows specifying an IP only:
> nomad agent -server -data-dir=/tmp/nomad -bind=0.0.0.0
==> No configuration files loaded
==> Starting Nomad agent...
==> Nomad agent configuration:
Advertise Addrs: HTTP: <HOSTIP>:4646; RPC: <HOSTIP>:4647; Serf: <HOSTIP>:4648
Bind Addrs: HTTP: 0.0.0.0:4646; RPC: 0.0.0.0:4647; Serf: 0.0.0.0:4648
Attempting to specify a port errors out:
> nomad agent -server -data-dir=/tmp/nomad -bind=0.0.0.0:6000
==> Failed to parse HTTP advertise address (, 0.0.0.0:6000, 4646, false): Error resolving bind address "0.0.0.0:6000": lookup 0.0.0.0:6000: no such host
Use the ports stanza to select ports for the agent, whether in server or client mode.
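For example, with the port numbers from the question (a sketch; this goes in one of the agent's HCL config files):

ports {
  http = 6006
  rpc  = 6007
  serf = 6008
}

The agent then binds HTTP, RPC, and Serf to those ports on the configured bind_addr.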
You should look at the HashiCorp website for how to launch Nomad with server.hcl, client.hcl, and nomad.hcl files, for example:
echo 'server {
  enabled = true
  bootstrap_expect = 1
}' > /etc/nomad.d/server.hcl
echo 'region = "global"
datacenter = "dc1"
data_dir = "/opt/nomad"
bind_addr = "1.2.3.4" # put your address here
advertise {
  # Defaults to the first private IP address.
  http = "1.2.3.4"
  rpc  = "1.2.3.4"
  serf = "1.2.3.4:5648" # non-default ports may be specified
}
' > /etc/nomad.d/nomad.hcl
and run this:
nomad agent -config /etc/nomad.d
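To verify the agent came up with the expected ports (a quick check; the subcommand name is from recent Nomad versions):

nomad server members

If you moved the HTTP port, point the CLI at it first, e.g. export NOMAD_ADDR=http://127.0.0.1:6006.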

How should I set up Traefik on ECS?

In short
I've managed to run Traefik locally and on AWS ECS, but now I'm wondering how I should set up some sort of load balancing to make my two services with random IPs available to the public.
My current setup on ECS
[Internet]
|
[Load balancer on port 443 + ALB Security group on 443]
|
[Target group on port 443 + Security group from *any* port]
|
[cluster]
|
[service1 container ports "0:5000"]
While this works, I'd now like to add another container, e.g. service2, also with random ports, e.g. 0:8000. And that's why I need something like Traefik.
What I did
Here's the TOML file:
[api]
address = ":8080"
[ecs]
clusters = ["my-cluster"]
watch = true
domain = "mydomain.com"
region = "eu-central-1"
accessKeyID = "AKIA..."
secretAccessKey = "..."
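Assuming Traefik 1.x, which this ECS provider syntax matches, the file would then be loaded with:

traefik --configFile=traefik.toml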
Also, I've added the host entries in /etc/hosts:
127.0.0.1 service1.mydomain.com
127.0.0.1 service2.mydomain.com
And the relevant labels on the containers, and I can curl service1.mydomain.com/status and get a 200.
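For reference, the labels would look something like this on each container or task definition (a sketch following Traefik 1.x label conventions; the hostnames are the ones from /etc/hosts above):

traefik.enable=true
traefik.frontend.rule=Host:service1.mydomain.com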
Now my last bit is just the following question:
How should I publish all this to the internet? AWS ALB? AWS Network LB? Network bridge/host/other?
AWS ALB vs AWS Network LB depends on who you want to handle SSL.
If you have a wildcard certificate and all your services are subdomains of the same domain, ALB may be a good choice.
If you want to use Let's Encrypt with Traefik, Network LB may be a better choice.
In both cases your setup will look something like this:
[Internet]
|
[LB]
|
[Target group]
|
[Traefik]
| |
[service1] [service2]
In both cases, the easiest way to get this is to make the Traefik ECS service auto-register to the target group.
This can be done at service creation (network configuration section) and cannot be done later. Link to documentation
[Screenshot: the network configuration section of the service creation console]

Dataflow worker unable to connect to Kafka through Cloud VPN

I have issues connecting a KafkaIO source to brokers available only through a Cloud VPN tunnel.
The tunnel is set up to allow traffic from a specific subnetwork (secure), and routes are set up and working for Compute Engine instances in that subnetwork.
Executing the pipeline with the DirectRunner, KafkaIO is able to connect to the brokers, whether through the VPN on a standard Compute Engine instance in the secure subnetwork or from a local machine with SSH tunnels set up by sshuttle.
Running the pipeline with the DataflowRunner, connections to the brokers fail with:
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
The pipeline is executed within the secure subnetwork.
Connecting to the Compute Engine instance spawned by the job, the following routes are visible:
jgrabber#REDACTED-harness-REDACTED ~ $ ip r
default via 10.74.252.1 dev eth0 proto dhcp src 10.74.252.3 metric 1024
default via 10.74.252.1 dev eth0 proto dhcp metric 1024
10.74.252.1 dev eth0 proto dhcp scope link src 10.74.252.3 metric 1024
10.74.252.1 dev eth0 proto dhcp metric 1024
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
The IPv4 addresses of the brokers are within a 172.17.0.0/16 (remote) network. The VPN is configured with a remote network range of 172.16.0.0/12.
Could the remote 172.17.0.0/16 network be shadowed by the virtual network that Docker sets up and uses?
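The routing table above suggests exactly that: 172.17.0.0/16 is claimed by the docker0 bridge on the worker, so packets for the brokers are delivered to the local bridge instead of the VPN route. On a host where you control the Docker daemon, one way to avoid the conflict is to move the bridge to a non-overlapping range in /etc/docker/daemon.json (a sketch; the replacement range is an assumption, and Dataflow workers may not expose this setting directly):

{
  "bip": "192.168.200.1/24"
}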
