I don't have much Linux experience, but I've run into a problem with firewalld.
My friend and I recently used certbot to get an SSL certificate for a Linode box.
The plan is for nginx to serve a Next.js application on subdomain.domain.com and a RESTful API, running via Docker, on subdomain.domain.com/api.
The HTTP configuration worked well, but under HTTPS firewalld does not allow external connections on port 443 for reasons I can't work out.
I reached this conclusion because everything works nicely again as soon as I run sudo systemctl stop firewalld.
My expectation is that, after adding the common services (including https) to the docker zone with the firewall-cmd CLI, HTTPS traffic should be allowed without having to disable the firewall.
Steps to reproduce are:
sudo systemctl start firewalld
sudo firewall-cmd --get-active-zones
docker
interfaces: docker0
sudo firewall-cmd --zone=docker --add-service=http --permanent
sudo firewall-cmd --zone=docker --add-service=https --permanent
sudo firewall-cmd --zone=docker --add-service=dns --permanent
sudo firewall-cmd --zone=docker --add-service=dhcpv6-client --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --get-active-zones
docker
interfaces: docker0
sudo firewall-cmd --zone=docker --list-services
dhcpv6-client dns http https
Still getting a timeout error on subdomain.domain.com.
sudo firewall-cmd --zone=docker --add-port=443/tcp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --zone=docker --list-ports
443/tcp
Still getting a timeout error on subdomain.domain.com.
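One thing I have not ruled out (just a guess on my part, assuming external traffic arrives on a public-facing interface such as eth0 rather than on docker0): the docker zone only covers the bridge interface, so the zone bound to the external interface may also need the https service. Something like:
sudo firewall-cmd --get-zone-of-interface=eth0
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --reload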
The suboptimal workaround is to disable firewall protection entirely with sudo systemctl stop firewalld.
sudo lsof -i :443 shows four nginx processes.
The OS is openSUSE but I can't recall if it's LEAP or Tumbleweed. All packages up-to-date.
I used the website https://www.yougetsignal.com/tools/open-ports/ to diagnose the problem.
I'm sure that I forgot a lot of important details but I will amend with edits later if requested.
Cheers.
In the end I think the server just needed to be restarted; we have accomplished our aims now.
My friend noticed that the node16 process was 'running away' and using 99% of the CPU, which is apparently a common problem.
Related
I have a CentOS 7 server which was running happily for 600+ days until it was rebooted recently, after which incoming web requests were receiving HTTP 523 (Origin Is Unreachable) error codes (via Cloudflare, if that makes a difference) unless I stopped the firewalld service. Things run fine without firewalld, but I'd rather not leave it disabled!
I've tried stopping docker and firewalld and restarting them in various sequences, but the same 523 error occurs unless I stop firewalld.
/var/log/firewalld contains a few warnings that might help:
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i br-8acb606a3b50 -o br-8acb606a3b50 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER' failed: iptables v1.4.21: Couldn't load target 'DOCKER':No such file or directory
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?)
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.
I've found seemingly conflicting advice around the place regarding any manual configuration/commands required:
1. firewall-cmd --permanent --zone=trusted --add-interface=docker0 on a CentOS forum
2. firewall-cmd --zone=trusted --remove-interface=docker0 --permanent in the official Docker docs -- surely that's the opposite of the above?
3. a bunch of manual firewall-cmd commands on a Docker GitHub issue -- surely all of that isn't required?
4. this one looks promising -- nmcli, NetworkManager and firewall-cmd --permanent --zone=trusted --change-interface=docker0
I don't fully understand where the br-8acb606a3b50 interface comes from, or whether I need to do anything to configure it as well as docker0 if I use a solution like 4. above. It was all working fine automatically for years until the reboot!
Are some magic firewalld incantations now required (and why?!) or is there some way I can get the system to get back into the correct auto/default configuration it was in prior to rebooting?
$ docker -v
Docker version 20.10.5, build 55c4c88
$ firewall-cmd --version
0.6.3
$ firewall-cmd --get-zones
block dmz docker drop external home internal public trusted work
To recap the chat investigation: this particular problem wasn't related to Docker and containers. The problem was that firewalld had no rules for NGINX, which runs on the host as a proxy for the containers. The solution was to add permanent firewalld rules for HTTP and HTTPS traffic:
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
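A quick way to confirm the rules took effect after the reload (assuming public is the zone bound to the external interface, as it was here):
sudo firewall-cmd --zone=public --list-services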
Warning messages like this one:
WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i br-8acb606a3b50 -o br-8acb606a3b50 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?)
... can appear during normal operation, when Docker attempts to delete a rule without checking its existence first. In other words, containers can be running smoothly even when there are warnings like this.
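For illustration, the same failure can be reproduced by hand: iptables -D deletes a rule by its exact specification and complains with 'Bad rule' when no matching rule exists, while iptables -C performs the same match without deleting anything. The interface name below is simply the one from the log above:
sudo iptables -C FORWARD -i docker0 -o docker0 -j DROP   # exits non-zero if no such rule exists
sudo iptables -D FORWARD -i docker0 -o docker0 -j DROP   # prints 'Bad rule ...' if no such rule exists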
I had similar problems with Podman, and for me I had to upgrade from Debian 9 to Debian 10 to fix it, because of the way firewalld handles iptables vs. nftables.
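If upgrading isn't an option, it may also be worth checking which backend firewalld is using; on builds new enough to support nftables (roughly 0.6.0 and later) the setting lives in /etc/firewalld/firewalld.conf:
grep FirewallBackend /etc/firewalld/firewalld.conf
# FirewallBackend=nftables   (or iptables)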
I'm exploring creating a gateway that can start and stop Docker containers on a RHEL 7 system upon request. I've made changes to my /usr/lib/systemd/system/docker.service so that dockerd also listens on a TCP socket, with the following:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:23751 --containerd=/run/containerd/containerd.sock
I'm unable to connect to dockerd to get the status of the containers unless I disable the firewall. But if I disable the firewall, I can't start containers.
Caused by: com.amihaiemil.docker.UnexpectedResponseException: Expected status 204 but got 500 when calling
http://192.168.1.70:23751/v1.35/containers/e3f0f09269a699ec27bbac8a5027d1383ae15cf64b5e6b649e76be1297cc2535/start.
Response body was {"message":"driver failed programming external connectivity on endpoint hello-service
(eef135f889322f1899800f19612404e9d8b1f39c7866f31ca5059562aa501bf6):
(iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 34570 -j DNAT --to-destination 192.168.10.40:8080 ! -i br-4982fe847356: iptables: No chain/target/match by that name.\n (exit status 1))"}
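From what I've read, this particular error usually means the DOCKER chain in the nat table was flushed when firewalld reloaded, and restarting the Docker daemon should recreate it (noting it here in case it's relevant; I haven't confirmed it resolves my setup):
sudo systemctl restart docker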
I realize there are consequences of running an open TCP port for dockerd. Before I get everything secured, I would like to get an idea of how a gateway might do something like this.
Does anyone else have experience doing something like this?
After much trial and error, I found out that firewalld is blocking that port.
To enable the port, do the following.
sudo firewall-cmd --zone=public --add-port=2375/tcp
Please note that doing this opens a large security vulnerability, as the commenter above pointed out. In my case it was done behind a firewall where no outside connection can reach inside my network. It is still a bad idea, but here it is only being used to explore some concepts and is turned off when not in use. Please consider the security implications before doing this.
Also, the firewall will not keep this configuration across reloads or reboots unless you add the --permanent argument.
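For example (using the same port and zone as above), you can either run the command twice, once with and once without --permanent, or make the permanent change and then reload:
sudo firewall-cmd --zone=public --add-port=2375/tcp --permanent
sudo firewall-cmd --reload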
I'm trying to open a random port, so that a docker container can bind to it.
sudo firewall-cmd --permanent --add-port=27000/tcp # success
sudo firewall-cmd --reload # success
sudo netstat -tunap | grep -i listen # port 27000 doesn't show up
sudo lsof -i :27000 # No process is using port 27000
curl -v hostname:27000 # Failed connect to; connection refused
How can I open a port and make it listen?
Thank you
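Edit: my working assumption (which may be wrong) is that firewall-cmd only permits traffic through the firewall and never listens on a port itself, so nothing will show up in netstat until some process binds the port. A throwaway listener should confirm whether the firewall side is actually open, for example:
nc -l 27000              # syntax varies between nc variants; ncat accepts this form
curl -v hostname:27000   # from another machine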
I'm running a virtual machine on GCE with CentOS 7. I've configured the machine with two network interfaces. When doing so, the user is required to enter the following commands to configure eth1 (every interface except eth0 requires this approach). On my machine, eth1's gateway is 10.140.0.1.
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I have used the above with success, but the configuration is not persistent. I know it's possible to do so, but I first need to fully understand what the above is actually doing (breaking my problem into smaller parts).
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
This command seems to be telling eth1 at 10.140.0.2 to broadcast on the same internal IP. It's also setting MTU to 1430, which is strange because the other interfaces are set to 1460. Is this command really needed?
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
From what I read, this command is appending "1 rt1" to the file rt_tables. If this is run once, does it need to be run each time the network comes up? Seems like it only needs to be run once.
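As a concrete check (the exact contents vary slightly by distribution), the file keeps its entries across reboots, which is why the append only needs to happen once:
cat /etc/iproute2/rt_tables
# 255   local
# 254   main
# 253   default
# 0     unspec
# 1     rt1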
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I know these commands add non-persistent rules and routes to the network configuration. Once I know the answers to the above, I will come back to the approach of making this persistent.
Referring to your question on the Google Groups thread, as I mentioned in that post:
IP routes and IP rules need to be made persistent to avoid losing them after a VM reboot or a network service restart. Depending on the operating system, the configuration files required to make the routes persistent can differ. There is a Stack Exchange thread for CentOS 7 mentioning the files "/etc/sysconfig/network-scripts/route-ethX" and "/etc/sysconfig/network-scripts/rule-ethX" to keep the IP routes and rules persistent, and the CentOS documentation covers persistent static routes.
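As an illustrative sketch (the file names and addresses below assume the eth1 configuration from the question; on CentOS 7 each line of these files is passed as arguments to ip route add / ip rule add), the persistent equivalents of the commands above could look like this, with the rt1 entry in /etc/iproute2/rt_tables still required:
# /etc/sysconfig/network-scripts/route-eth1
10.140.0.1 src 10.140.0.2 dev eth1
default via 10.140.0.1 dev eth1 table rt1
# /etc/sysconfig/network-scripts/rule-eth1
from 10.140.0.2/20 table rt1
to 10.140.0.2/20 table rt1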
I have VirtualBox with Ubuntu 14.10 server, Jenkins, and Apache installed. When I access the IP of this VirtualBox machine, the Apache homepage loads correctly. But when I try to access Jenkins via x.x.x.x:8080 (the IP of my VirtualBox machine), it won't load; I only get a connection timeout error.
I tried configuring a different port (8081 and 6060), but that doesn't work. I also added port forwarding in VirtualBox, but that doesn't work either...
Does anyone have suggestions for how I can access Jenkins running inside a virtual machine?
Depending on whether or not you need the box to be accessible by machines other than your host, you need a bridged or host-only network interface; see https://www.virtualbox.org/manual/ch06.html
I've just done a full install of Nginx, Java, and Jenkins:
sudo apt-get install nginx
sudo apt-get install openjdk-7-jdk
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
on a fresh Ubuntu VirtualBox instance where the first interface is Host-only and the second is NAT:
Here is my /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback
# Host-only interface
auto eth0
iface eth0 inet static
address 192.168.56.20
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
# NAT interface
auto eth1
iface eth1 inet dhcp
I can reach Jenkins from my host on 192.168.56.20:8080 with no port forwarding necessary. You must have something unrelated to Jenkins going on, possibly firewall related. Try setting Jenkins back to 8080, removing your port forwarding, and checking for firewall rules that could be getting in the way.
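If a firewall is the culprit, a quick check on Ubuntu (assuming the default tooling; adjust if you use something else) would be:
sudo ufw status
sudo iptables -L -n
If ufw turns out to be active, allowing the Jenkins port should be enough:
sudo ufw allow 8080/tcp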