I can only access Docker containers from localhost - docker

I have Docker installed on a VPS, but unfortunately I can't access the containers from other machines.
I also have WireGuard running on the VPS at 10.210.1.1. I created the Apache container with docker run -d --name apache-server -p 8080:80 httpd.
I can access http://10.210.1.1:8080 from localhost with curl, but not from other machines. Services installed directly on the host (bare metal) are reachable at that IP from other machines, so the problem seems to be Docker-related.
Maybe it is due to my nftables config:
define pub_iface = eth0
define wg_iface = wg0
define wg_port = 51821

table inet basic-filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state { established, related } accept
        iif lo accept
        ip protocol icmp accept
        ip6 nexthdr ipv6-icmp accept
        meta l4proto ipv6-icmp accept
        iif $pub_iface tcp dport 51829 accept
        iif $pub_iface udp dport $wg_port accept
        iifname $wg_iface accept
        ct state invalid drop
        reject
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state { established, related } accept
        iifname $wg_iface oifname $wg_iface accept
        iifname $wg_iface oifname $pub_iface accept
        ct state invalid drop
        reject with icmpx type host-unreachable
    }

    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        iifname $wg_iface oifname $pub_iface masquerade
    }
}
I don't know for sure, but I must have a mistake somewhere.

It turned out that the following entry was missing from the forward chain:
iifname "wg0" oifname "docker0" accept
Sorry for the noise.
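For context, with that rule added the forward chain looks roughly like this (assuming Docker's default bridge device is named docker0; a user-defined network would have a different bridge name):
chain forward {
    type filter hook forward priority 0; policy drop;
    ct state { established, related } accept
    iifname $wg_iface oifname $wg_iface accept
    iifname $wg_iface oifname $pub_iface accept
    iifname $wg_iface oifname "docker0" accept
    ct state invalid drop
    reject with icmpx type host-unreachable
}
Return traffic from the container back over WireGuard is already covered by the established/related rule at the top.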

Related

How to prevent docker containers from accessing my local network

I would like to prevent Docker containers connected to a bridge network from accessing my local network, to add extra security since they will be accessible from outside (in case a container is compromised). I saw that I should probably use ebtables or the physdev module of iptables, but I can't create a rule that works. Thanks to anyone who can help.
After some research, and in case anyone is interested: it is possible to use ebtables.
# Allow DNS queries
ebtables -A INPUT -p IPV4 --ip-protocol TCP --ip-destination-port 53 --ip-destination 192.168.1.1 --ip-source 172.18.0.0/16 -j ACCEPT
ebtables -A INPUT -p IPV4 --ip-protocol UDP --ip-destination-port 53 --ip-destination 192.168.1.1 --ip-source 172.18.0.0/16 -j ACCEPT
# Drop all other packets
ebtables -A INPUT -p IPV4 --ip-destination 192.168.1.0/24 --ip-source 172.18.0.0/16 -j DROP
Do not forget to replace the 172.18.0.0/16 subnet with the one on which your containers are connected.
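If you are not sure which subnet a given Docker network uses, it can usually be read from the network's IPAM config, for example (the network name "bridge" is just an example, substitute your own):
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'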
I was stumbling through this myself and found that one solution was to insert (-I) a new rule into the DOCKER-USER chain.
Please see this answer: https://stackoverflow.com/a/73994723/20189349
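As a rough sketch of that DOCKER-USER approach (the subnets are assumptions; adjust them to your container network and LAN):
# drop traffic from the container subnet to the local network;
# -I puts the rule at the top of DOCKER-USER, which Docker consults before its own forwarding rules
iptables -I DOCKER-USER -s 172.18.0.0/16 -d 192.168.1.0/24 -j DROP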

How to grant internet access to application servers through load balancer

I have set up an environment in Jelastic including a load balancer (I tested both Apache and Nginx with the same results) with a public IP, and an application server running the Univention UCS DC Master Docker image (I have also tried a plain Ubuntu 20.04 install).
The application server has a private IP address and is correctly reachable from the internet, and I can SSH into both the load balancer and the app server.
The one thing I can't seem to achieve is to have the app server access the internet (outbound traffic).
I have tried setting up the network on the app server and tried a few Nginx load-balancing configurations, but to be honest I've never used a load balancer before and I suspect that configuring load balancing will not resolve my issue (I might be wrong).
Of course my intention is to learn load balancing, but if someone could just point me in the right direction I would be very grateful.
Question: what needs to be configured in Jelastic or on the servers so that the machines behind the load balancer can access the internet?
Thank you for your time.
Cristiano
I was able to resolve the issue by simply detaching and re-attaching the public IP address to the server, so it was not a setup problem; something in Jelastic just got stuck.
Thanks all!
Edit: Actually, to effectively resolve the issue I have to detach the public IP address from the univention/ucs Docker image, attach it to another node in the environment (e.g. an Ubuntu server I have), then attach the public IP back to the univention Docker image. I can't really figure out why, but it works for me.
To have the machines access the internet you should add a route on them that uses your load balancer as the gateway, like this:
Destination   Gateway   Genmask
0.0.0.0       <LB IP>   0.0.0.0
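On a Linux app server that could be a default route via the load balancer's private address, for example (the address below is a placeholder for your LB's internal IP):
ip route add default via 10.0.0.10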
Your VMs' firewalls should not block ports 80 and 443 for in/out traffic; using iptables:
sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT
On your load balancer you should masquerade outgoing traffic (change the source IP) and forward incoming traffic to your VMs' subnet using the LB interface connected to that subnet:
sudo iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
sudo iptables -A FORWARD -p tcp --dport 80 -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -p tcp --dport 443 -i eth0 -o eth1 -j ACCEPT
You should also enable IP forwarding on your load balancer:
echo 1 > /proc/sys/net/ipv4/ip_forward
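If you want that to survive a reboot, the usual approach is a sysctl drop-in (the file name is arbitrary):
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system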

iptables: Access from local machine to docker container is not possible

I have an issue with my iptables setup. My goal is to reach the HTTPS-based webserver inside a Docker container from the server machine itself.
The setup is the following:
The server is connected to the internet via eth0 and serves HTTPS on port 443.
Any users from the outside (internet) reach the server via the ip address 1.2.3.4.
It is connected to the internal network via eth1 and serves dhcp, dns and some more services.
Any users from the inside (intranet) reach the server via the ip address 10.0.0.1.
The Docker container is connected via docker1 on the server. The latter has the IP address 10.8.0.2 inside the Docker network.
The Docker container serves the webserver on port 1443, but iptables forwards (DNATs) requests on port 443 to its address 10.8.0.1 and destination port 1443.
What is working:
The webserver is perfectly reachable from the internet and the intranet.
The webserver can be reached from the server itself using the address 10.8.0.1:1443.
What is not working:
Any client working directly on the server cannot reach the Docker webserver using https://example.com:443. Using https://10.8.0.1:8443 would connect, but fails due to a certificate error; skipping the certificate check is not an acceptable workaround.
Excerpt of the iptables configuration:
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i docker1 -o docker1 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.1.0.1:1443
iptables -A FORWARD -o docker1 -p tcp --dport 1443 -j ACCEPT
iptables -A INPUT -i docker1 -j DROP
iptables -A FORWARD -i docker1 -j DROP
Due to this somewhat complicated setup I can no longer work out which iptables rules and chains need to be adjusted to make this work, so I am asking for your help.
Brainstorming about the issue using a simplified model and my understanding of the iptables chains, the path of the packets might/should look like this:
The origin is a local application (wget).
The packets go through the OUTPUT chain.
The packets go through the POSTROUTING chain.
Magic happens...
The packets arrive again in the PREROUTING chain.
The packets might go through INPUT again.
The packets might arrive at the target application (webserver).
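One detail about that path worth keeping in mind: packets generated on the host itself never hit the nat PREROUTING chain; they go through the nat OUTPUT chain instead, so a DNAT rule that only exists in PREROUTING will not rewrite local connections to https://example.com:443. A rough sketch of a companion rule for local traffic, reusing the addresses from the excerpt above (verify they match your actual container address):
# apply the same redirect to locally generated traffic
iptables -t nat -A OUTPUT -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.1.0.1:1443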

Docker-Swarm: Join a docker-swarm from another subnet

I have 4 virtual machines in the same subnet which are part of a Docker swarm.
Now I want to connect another node (virtual machine), which is located in a different country (not in the same subnet).
I am an IP noob and it is hard for me to set up an overlay network in Docker that can handle this connection.
Which aspects do I need to keep in mind when setting up this kind of Docker swarm?
You need the following ports open between your swarm nodes:
2377/tcp: Swarm mode API
7946/tcp and 7946/udp: Overlay networking control plane
4789/udp: Overlay networking data plane
protocol 50 (ESP) for IPsec (the secure option) of overlay networking
The following iptables commands can be used for this (you may want to limit the source host to only your other docker swarm nodes):
iptables -A INPUT -p tcp -m tcp --dport 2377 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7946 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 4789 -j ACCEPT
iptables -A INPUT -p 50 -j ACCEPT
This needs to be configured on all of your swarm nodes if they have a restrictive host firewall, and on the network firewalls protecting your subnets.
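Beyond the firewall, a node joining from another subnet usually has to advertise an address the other nodes can actually reach; a rough sketch (token and addresses are placeholders):
# on the manager (original subnet)
docker swarm init --advertise-addr <manager-public-ip>
# on the remote node (other subnet/country)
docker swarm join --token <worker-token> --advertise-addr <node-public-ip> <manager-public-ip>:2377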

RoR: Securing a Closed API

I have two Rails apps on separate virtual servers, but in the same facility. Both apps can communicate via local IP addresses.
This is a two-part question:
1) How do I check where a request originates and limit requests to only those coming from that location?
2) Do you think this would be secure enough?
My gut is telling me this isn't secure enough because of IP spoofing, but I'm thinking OAuth or similar is a little too heavyweight for my needs. Though, maybe not.
This is the first time I've approached something like this and I'm looking for anyone who can push me in the right direction.
Thanks.
Depending on who's hosting you, the local network (to which your local addresses belong) could be a private network only accessible to your instances or, more likely, it would be shared with other virtual machines that do not belong to you. You would not be open to direct external attacks, but any compromised virtual machine sharing the same local network as you can be a springboard for attack, so your concerns are absolutely valid.
Answering, in order, your two concerns:
Configure iptables for the local interfaces to only accept requests coming on specific ports from specific local IPs (read a tutorial for a better understanding of iptables configuration.) All other virtual machines on the local network should not be able to probe you, although they might be able to intercept your traffic (addressed below.)
No; you should use SSL over all intra-node connections. This will protect you in two ways: first, it will protect you from spoofing (an attacker will be rejected if he does not have your certificate, even if he bypasses iptables by spoofing his address, or because your iptables config gets overwritten by an admin), and second, it will protect your data from prying eyes (e.g. an attacker will not be able to snoop traffic for your passwords). Some applications (e.g. most database engines, net-snmpd set up in v3 mode, etc.) support SSL natively. Alternatively, establish and use SSH tunnels, or use stunnel.
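As a concrete example of the tunnel option, an SSH tunnel protecting a MySQL connection between two nodes might look like this (host name and local port are illustrative):
# forward local port 3307 to MySQL on the db host, over ssh
ssh -N -L 3307:127.0.0.1:3306 deploy@db1.internal
# the app then connects to 127.0.0.1:3307 instead of db1 directly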
Sample base iptables configuration allowing basic services (HTTP, SSH etc.) on the public (internet) interface, as well as allowing www1 and www2 to connect to this node's MySQL on port 3306 on the eth0 interface (www1 and www2 are defined in /etc/hosts so they resolve to the appropriate IP addresses.):
# * raw
#
# Allows internal traffic without loading conntrack
# -A PREROUTING -i lo -d 127.0.0.0/8 -j NOTRACK
*filter
# Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
# Accepts all established inbound connections (TCP, UDP, ICMP incl. "network unreachable" etc.)
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allows all outbound traffic
# You can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
# Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Allows SSH connections
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
# Allow ping
-A INPUT -p icmp -m icmp --icmp-type echo-request -j ACCEPT
# Allows MySQL within our cluster ONLY (these must come before the final REJECT rules)
-A INPUT -p tcp -s www1 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p udp -s www1 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p tcp -s www2 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p udp -s www2 -i eth0 --dport 3306 -j ACCEPT
# log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level debug
# Reject all other inbound - default deny unless explicitly allowed policy
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
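A file in this iptables-save format is normally applied with iptables-restore, for example (the path is just an assumption):
iptables-restore < /etc/iptables/rules.v4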
This doesn't really sound like a Rails question; it's more of a question about web architecture. I'm assuming that both machines are accessible to the outside world via HTTP. If that's the case, you may want to consider putting a firewall in front of both machines to create a local network that the two machines are on.
Once you've done that, you should be able to configure the firewall to disallow requests based on any criteria you specify. Given that this is a Rails application I'm going to assume that the API is a set of resources. If that's the case you could configure your firewall to filter requests to the private API.
This way, the machines on the local network can communicate freely as their requests to one another aren't going through the firewall.
