nftables rule for queueing UDP packets to a userspace libnetfilter_queue-based program - netfilter

I have an iptables rule for queueing output packets to a userspace program. The iptables rule works fine, and I see the packets being forwarded to the userspace program for processing/mangling.
iptables -A OUTPUT -p udp --dport 8080 -j NFQUEUE --queue-num 0
Since we are planning to use nftables going forward, I used iptables-translate to convert this rule to an nft rule, as below.
iptables-translate -A OUTPUT -p udp --dport 8080 -j NFQUEUE --queue-num 0
nft add rule ip filter OUTPUT udp dport 8080 counter queue num 0
If I try to add this rule to nftables, I see the following error:
sudo nft add rule ip filter OUTPUT udp dport 8080 counter queue num 0
Error: Could not process rule: No such file or directory
add rule ip filter OUTPUT udp dport 8080 counter queue num 0
^^^^^^
So I did the following so that the queue rule could be added to nftables.
nft add table ip filter
nft add chain ip filter OUTPUT
nft add rule ip filter OUTPUT udp dport 8080 counter queue num 0
Now I see the rule added to nftables:
nft list ruleset
table ip filter {
    chain OUTPUT {
        udp dport 8080 counter packets 0 bytes 0 queue num 0
    }
}
With the above rule in nftables, I don't see packets being forwarded to my userspace program that uses the libnetfilter_queue library.
What am I missing in the nftables queue rule?
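One thing that stands out (an observation, not a confirmed fix): the chain above was created with a plain nft add chain, so it has no hook specification; that makes it a regular chain, which no packets traverse unless another chain jumps to it. A base chain attached to the output hook would be created along these lines:

nft add table ip filter
nft add chain ip filter OUTPUT '{ type filter hook output priority 0; }'
nft add rule ip filter OUTPUT udp dport 8080 counter queue num 0

With the hook in place, nft list ruleset should show a type filter hook output priority 0; line inside the chain, and outgoing UDP packets to port 8080 should reach queue 0.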

Related

I can only access Docker containers from localhost

I have Docker installed on a VPS, but unfortunately I can't access the containers from other machines.
I also have WireGuard running on 10.210.1.1 on the VPS. With docker run -d --name apache-server -p 8080:80 httpd I created the Apache2 container.
I can access http://10.210.1.1:8080 from localhost with curl, but not from other machines. Services installed directly on the host (bare metal) are reachable at that IP, so the problem should be with Docker.
Maybe it is also due to my nftables config:
define pub_iface = eth0
define wg_iface = wg0
define wg_port = 51821
table inet basic-filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state { established, related } accept
        iif lo accept
        ip protocol icmp accept
        ip6 nexthdr ipv6-icmp accept
        meta l4proto ipv6-icmp accept
        iif $pub_iface tcp dport 51829 accept
        iif $pub_iface udp dport $wg_port accept
        iifname $wg_iface accept
        ct state invalid drop
        reject
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state { established, related } accept
        iifname $wg_iface oifname $wg_iface accept
        iifname $wg_iface oifname $pub_iface accept
        ct state invalid drop
        reject with icmpx type host-unreachable
    }
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        iifname $wg_iface oifname $pub_iface masquerade
    }
}
I don't know, but surely I have a mistake somewhere.
Update: the following entry was missing from the forward chain:
iifname "wg0" oifname "docker0" accept
Sorry for the noise.
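For context, with that entry added, the forward chain would look something like this (a sketch based on the config above):

chain forward {
    type filter hook forward priority 0; policy drop;
    ct state { established, related } accept
    iifname $wg_iface oifname $wg_iface accept
    iifname $wg_iface oifname $pub_iface accept
    iifname $wg_iface oifname "docker0" accept
    ct state invalid drop
    reject with icmpx type host-unreachable
}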

Cannot access Grafana (Docker image) using port 3000 from non-local server

I'm working on an Ubuntu server with Docker, on which I ran the Grafana Docker image using the following commands (based on the documentation: https://grafana.com/docs/grafana/latest/installation/docker):
docker volume create grafana-storage
docker run -d -p 3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana-oss
I can access my Grafana instance successfully using localhost:3000, but I cannot access it from any external machine at x.x.x.x:3000. Not even locally, unless I use localhost.
Nevertheless, I can reach x.x.x.x on port 80 (which is being used by another process) from an external device.
I proceeded to check port configuration details:
netstat -an | grep "3000"
tcp 0 0 yy.yy.yy.yy:53596 yy.yy.yy.yy:3000 ESTABLISHED
tcp 0 0 yy.yy.yy.yy:53716 yy.yy.yy.yy:3000 ESTABLISHED
tcp 0 0 yy.yy.yy.yy:53700 yy.yy.yy.yy:3000 ESTABLISHED
tcp6 0 0 :::3000 :::* LISTEN
tcp6 0 0 ::1:3000 ::1:50602 ESTABLISHED
tcp6 0 0 ::1:50706 ::1:3000 ESTABLISHED
tcp6 0 0 ::1:3000 ::1:50706 ESTABLISHED
tcp6 0 0 ::1:50602 ::1:3000 ESTABLISHED
iptables -S | grep "3000"
-A INPUT -p tcp -m tcp --dport 3000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3000 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 3000 -j ACCEPT
-A DOCKER -d yy.yy.yy.yy/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 3000 -j ACCEPT
iptables -L INPUT -nvx
Chain INPUT (policy ACCEPT 659 packets, 98721 bytes)
pkts bytes target prot opt in out source destination
136 8160 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3000
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3000
I noted that the firewall was inactive, so I enabled it and added a rule:
sudo ufw enable
sudo ufw allow 3000/tcp comment 'grafana-port'
sudo ufw reload
sudo ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] 3000/tcp ALLOW IN Anywhere # grafana-port
[ 2] 3000/tcp (v6) ALLOW IN Anywhere (v6) # grafana-port
Applying this change did not make x.x.x.x:3000 work. However, port 80 still works normally, as I mentioned. Maybe I am missing something else.
More info: I tried setting domain = x.x.x.x and serve_from_subpath = true in the grafana.ini, defaults.ini, and custom.ini files. It did not work.
Do you have any advice that might help solve this issue? Thank you in advance!
Have you tried the following?
docker run -p x.x.x.x:3000:3000 ...
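For reference, applied to the run command from the question, that suggestion would look something like this (with x.x.x.x standing in for the server address, as above):

docker run -d -p x.x.x.x:3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana-oss

Binding to a concrete address publishes the port on that specific IPv4 address instead of the tcp6 :::3000 wildcard shown in the netstat output.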
The issue was solved by leaving the firewall inactive and connecting to the server's router at tplinkwifi.net. There I added this configuration under Advanced > NAT Forwarding > Port Forwarding:
Device IP Address: 192.168.x.x
External Port: 3000
Internal Port: 3000
Protocol: TCP
Then I restarted the Docker container and got access from the non-local server.
Thank you for your suggestions.

Iptables rules with dockerized web application - can't block incoming traffic

I'm hosting a dockerized web application bound to port 8081 on my remote server.
I want to block that web application for external IPs, as I already did with port 8080, which hosts a plain Jenkins server.
Here's what I've tried:
iptables -A INPUT -d <my-server-ip> -p tcp --dport 8081 -j DROP
As I did with port 8080.
Here is the output of iptables -nv -L INPUT:
Chain INPUT (policy ACCEPT 2836 packets, 590K bytes)
pkts bytes target prot opt in out source destination
495 23676 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8080
0 0 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8081
Could it possibly have something to do with the DOCKER chain in iptables?
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
9 568 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 <container-eth1-addr> tcp dpt:8080
Are there more specific rules I need to add?
Aren't my server's INPUT rules supposed to be applied before those listed in the DOCKER chain?
UPDATE - SOLVED
Thanks to larsks's comments I found the solution.
The goal here was to block TCP traffic on port 8081, bound to a Docker container, while still being able to use SSH tunneling as a "poor man's" VPN (so not publishing the port was not an option).
I just had to add this rule:
iptables -I DOCKER-USER 1 -d <container-eth-ip> ! -s 127.0.0.1 -p tcp --dport 8080 -j DROP
Note that by the time packets traverse DOCKER-USER they have already been DNATed, so the match is on the container address and the container's internal port rather than the published host port.
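To verify that the rule is actually matching, the packet counters can be checked the same way as for the INPUT chain above:

iptables -nv -L DOCKER-USER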

Docker swarm mode routing mesh not working as expected

I tried to create services in Docker swarm mode by following this document.
I created two nodes in the swarm:
Then I created and deployed the service; I used jwilder/whoami here instead of the nginx from the document:
docker service create --name my-web --publish published=8888,target=8000 --replicas 2 jwilder/whoami
Seems like they started successfully:
As the document said:
When you access port 8080 on any node, Docker routes your request to
an active container.
So, in my opinion, I should be able to access the my-web service from any of the nodes; however, I found that only one node works:
What's going on?
This can be caused by ports being blocked between the nodes. The swarm mesh networking uses the "ingress" network to connect the published port to a VIP for the service. That ingress network is an overlay network implemented with vxlan. For that you need:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Reference: https://docs.docker.com/network/overlay/
It's possible for these ports to be blocked at many levels, including iptables, firewalls on the routers, and I've even seen VMware block this with their NSX tool, which also implements vxlan.
For iptables, I typically use the following commands:
iptables -A INPUT -p tcp -m tcp --dport 2376 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 2377 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 4789 -j ACCEPT
iptables -A INPUT -p 50 -j ACCEPT
The above will differ if you use firewalld or need to change firewall rules on the network routers.
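For firewalld, a rough equivalent of the above would be something along these lines (a sketch; zone handling may need adjusting for your setup):

firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload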

Docker run cannot publish port range despite netstat indicates that ports are available

I am trying to run a Docker image from inside Google Cloud Shell (i.e. on a courtesy Google Compute Engine instance) as follows:
docker run -d -p 20000-30000:10000-20000 -it <image-id> bash -c bash
Prior to this step, netstat -tuapn reported the following:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8998 0.0.0.0:* LISTEN 249/python
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13080 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13081 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34490 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13082 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13083 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13084 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34490 127.0.0.1:48161 ESTABLISHED -
tcp 0 252 172.17.0.2:22 173.194.92.34:49424 ESTABLISHED -
tcp 0 0 127.0.0.1:48161 127.0.0.1:34490 ESTABLISHED 15784/python
tcp6 0 0 :::22 :::* LISTEN -
So it looks to me as if all the ports between 20000 and 30000 are available, but the run is nevertheless terminated with the following error message:
Error response from daemon: Cannot start container :
failed to create endpoint on network bridge: Timed out
proxy starting the userland proxy
What's going on here? How can I obtain more diagnostic information and ultimately solve the problem (i.e. get my Docker image to run with the whole port range available)?
Opening up ports in a range doesn't currently scale well in Docker. The above will result in one docker-proxy process being spawned per port, about 10,000 in this case, plus all the file descriptors needed to support those processes and a long list of firewall rules. At some point, you'll hit a resource limit on either file descriptors or processes. See issue 11185 on GitHub for more details.
The only workaround when running on a host you control is to not allocate the ports and instead manually update the firewall rules; I'm not sure that's even an option with GCE. The best solution is to redesign your requirements to keep the port range small. The last option is to bypass the bridge network entirely with --net=host and run on the host network, where there are no proxies and no per-port firewall rules. The latter removes any network isolation you have in the container, so it tends to be recommended against.
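Applied to the command from the question, the host-network workaround would look something like this (note that -p is dropped, since published ports have no meaning on the host network):

docker run -d --net=host -it <image-id> bash -c bash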
