I have a clean install of OpenStack Pike on Ubuntu 16.04 Server with an OvS bridge. When using iptables_hybrid as the firewall driver, I have no problem sending SCTP packets to VMs from the external network. However, when using the native openvswitch firewall driver, SCTP packets never arrive at the VM, while TCP/UDP works fine.
I have tried adding SCTP rules to the security groups, and I have also created ports with port security disabled, but nothing helped.
Neutron is configured with DVR and redundant DHCP; otherwise it's a pretty standard configuration based on the install guide for OvS with self-service networks. I can provide log and config files if needed.
Any ideas what might be causing this? iptables has a huge performance impact on the network, and I would like to switch back to the openvswitch firewall.
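For reference, a sketch of the two pieces mentioned above, assuming the default file locations from the install guide and a hypothetical security group name:

# the OVS agent's firewall driver setting that is being switched between the two drivers
grep -A3 '^\[securitygroup\]' /etc/neutron/plugins/ml2/openvswitch_agent.ini

# what "adding SCTP to the security group" looks like from the CLI
# (the group name "default" is only an example)
openstack security group rule create --ingress --protocol sctp \
  --remote-ip 0.0.0.0/0 default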
I have a Pterodactyl installation on my node.
I am aware that Pterodactyl runs using Docker, so to protect my backend IP from being exposed when connecting to the servers, I am using a GRE tunnel from X4B.net.
After installing the script provided by X4B, I got this message:
Also Note: This script does not adjust the configuration of your applications. You should ensure your applications are bound to 0.0.0.0 or the appropriate tunnel IP.
At first I was confused; I tried connecting to my server but nothing worked, so I was thinking it was due to Docker not being bound to 0.0.0.0.
As for the network layout, I was provided with:
10.16.1.200/30 Network,
10.16.1.201 Unified Gateway,
10.16.1.202 Bound via NAT to 103.249.70.63,
10.16.1.203 Broadcast
So if I host a Minecraft server, what IP address would I use?
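A sketch of how the pieces fit together, assuming the default Minecraft port 25565 (the port is an assumption; the addresses are the ones from the layout above):

# check which address the game server is actually listening on inside the node
ss -tlnp | grep 25565

# the server itself should listen on 0.0.0.0 (or the tunnel IP 10.16.1.202);
# players then connect to the public address the tunnel NATs to it, i.e. 103.249.70.63:25565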
I have a legacy system. It contains a number of servers running on Linux and a number of GUI clients running on Windows. All the components (servers and clients) are on the same network and communicate with each other directly. They are identified by IP and port number.
For development purposes, I now run the servers in containers using Compose on a Linux host. The servers communicate with each other within the Docker network without any issues. However, I have trouble making the clients work with the servers. Port mapping doesn't work here, since a client needs to talk to many servers on different (or the same) ports. What I am asking is whether it is possible to treat the Windows clients as part of the Docker network. I read about tools such as Weave Net, etc., but haven't found anything useful. Any suggestions?
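One hedged sketch of what this could look like with a macvlan network, so the server containers get addresses on the same LAN the Windows clients are on (subnet, interface and image names below are hypothetical):

# create a macvlan network on the LAN the clients already use
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  --ip-range=192.168.1.192/27 \
  -o parent=eth0 legacy_net

# give each server a fixed LAN address the Windows clients can reach directly by IP:port
docker run -d --name server1 --network legacy_net --ip 192.168.1.201 legacy-server-image

One caveat with macvlan is that the Docker host itself cannot reach these containers directly without an extra macvlan interface on the host, which may or may not matter here.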
I would like to install a DHCP server in a container to provide the devices connected to the host system (some Raspberry Pis and network switches) with IP addresses.
I start the container with the "--net=host" flag in order to listen for broadcast traffic. It works as expected: all devices get their IP addresses from the DHCP server.
However, the "--net=host" option represents an increased security risk. Do you know if there is a better way to achieve the same? I could use either Docker or Podman on my system.
If there is no other option, how could I restrict the container's view of the network so that it can only see the specific interface to which all the devices are connected?
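One possible sketch is a macvlan network bound to just that interface, so the DHCP container receives broadcast traffic on it without host networking (interface name, subnet and image are placeholders):

# a network tied to the physical interface the Pis and switches are connected to
docker network create -d macvlan \
  --subnet=192.168.50.0/24 --gateway=192.168.50.1 \
  -o parent=eth1 lab_net

# the container only sees lab_net, not the host's other interfaces
docker run -d --name dhcp --network lab_net --ip 192.168.50.2 some/dhcpd-image

Podman has an equivalent macvlan network driver, so the same idea should carry over there.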
I have a MongoDB Docker container and I only want it to be accessible from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, it seems that Docker still exposes it to the public.
I am using Linux CentOS 7 and docker-compose for setting up Docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after Docker has started, because Docker recreates its chains at startup and would otherwise wipe it.
Another thing to do is to use a non-default port for mongod; modify docker-compose.yml accordingly (remember to add --port=XXX to the command directive).
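A minimal sketch of that change in docker-compose.yml, assuming 27117 as the non-default port and additionally publishing it on localhost only (service name, image and port are examples):

version: "3"
services:
  mongo:
    image: mongo
    command: mongod --port 27117
    ports:
      - "127.0.0.1:27117:27117"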
For better security, I suggest putting your server behind an external firewall.
If you have your application in one container and MongoDB in another container, what you need to do is connect them together using a network that is set to be internal.
See Documentation:
Internal
By default, Docker also connects a bridge network to it to provide external connectivity. If you want to create an externally isolated overlay network, you can set this option to true.
See also this question
Here's the tutorial on networking (not including internal but good for understanding)
You may also limit traffic to MongoDB by configuring the Linux iptables firewall for MongoDB.
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
More reading on Wikipedia.
You may connect a container to more than one network, so typically an application container is connected to both the outside-world (external) network and the internal network. The application communicates with the database over the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (the internet).
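A minimal sketch of that layout with the plain Docker CLI (network, container and image names are examples):

# the database network is internal, so it gets no route to the outside world
docker network create --internal backend
docker network create frontend

docker run -d --name mongo --network backend mongo
docker run -d --name app --network frontend -p 80:8080 my-app-image
docker network connect backend app

The app reaches the database as "mongo" through the embedded DNS of the backend network, while only the app's published port is reachable from outside.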
I found a post that may help; posting it here for people who need it in the future.
For security, we need both the hardware firewall and the OS firewall enabled and configured properly. I found that the firewall protection was ineffective for ports opened by a Docker container listening on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation is:
A server with CentOS 7.9 and Docker version 20.10.17 installed
A Docker container running with port 3000 opened on 0.0.0.0
The firewalld service had been started with the command systemctl start firewalld
Only port 22 should be accessible from outside the server, as configured in the firewall.
It was expected that no one else could access port 3000 on that server, but the test result was the opposite: port 3000 was reachable from any other server. Thanks to the blog post, I now have my server protected by the firewall.
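For reference, two general ways to close that hole (interface name and port mapping are examples, and this is the generic technique rather than a summary of the linked post):

# drop outside traffic to the published port in the DOCKER-USER chain, which Docker
# consults for forwarded traffic; match on the original destination port because
# Docker DNATs before the FORWARD chain is reached
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 3000 -j DROP

# or avoid exposing it at all by publishing the port on loopback only
docker run -d -p 127.0.0.1:3000:3000 my-app-image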
I want to use telnet to check whether the firewall is open from Solace to a server.
The telnet command is not available in Solace.
How do you do a telnet from a Solace appliance?
The answer depends on which VRF you want to check. There are two VRFs on the Solace appliance: management and msg-backbone.
The msg-backbone VRF is mainly for the data plane where all messages get brokered. This is on specialized hardware, and there is no support for the regular Linux networking stack. Telnet naturally is not supported. You can, however, test ping connectivity through the CLI: ping msg-backbone:<ip-address>.
The management VRF, as the name suggests, is for management traffic. This runs on a typical Ethernet NIC and uses the Linux networking stack. The SolOS base is a CentOS 7 image, so the answer can be found in https://serverfault.com/questions/788934/check-if-remote-host-port-is-open-cant-use-gnu-netcat-nor-nmap-rhel-7.
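For convenience, one of the approaches discussed there relies on bash's built-in /dev/tcp pseudo-device; a sketch, with host and port as placeholders:

# the exit status tells you whether the TCP connection succeeded
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/192.0.2.10/8080' \
  && echo "port open" || echo "port closed or filtered"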