Port 5432 is closed on Google Compute Engine - ruby-on-rails

Currently I need to establish a remote connection to my server (Ubuntu 16.04 LTS).
I installed PostgreSQL and made the following settings:
/etc/postgresql/9.5/main/postgresql.conf:
listen_addresses='*'
/etc/postgresql/9.5/main/pg_hba.conf:
host all all 0.0.0.0/0 md5
If I run this command: netstat -anpt | grep LISTEN
it shows that the port is listening,
but when I try to establish the connection I get an error, and a port-checking tool tells me that the port is closed.

Allowing connections in the PostgreSQL server configuration alone is not enough; you also need to add a firewall rule in Google Compute Engine:
Firewall rules control incoming or outgoing traffic to an instance. By default, incoming traffic from outside your network is blocked.
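For example, such a rule can be created with gcloud (a sketch; the rule name allow-postgres is illustrative, and you should restrict --source-ranges to your client's IP rather than 0.0.0.0/0 where possible):
gcloud compute firewall-rules create allow-postgres \
    --allow=tcp:5432 \
    --source-ranges=0.0.0.0/0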

Related

Alternative to Cloudflare tunnel if I can't open port 7844 and no root access on hosting server

I have a linux server hosting an app that I want to expose using my namecheap domain name.
The network that the Linux server is behind seems to be blocking port 7844; Docker error:
"ERR Serve tunnel error error="DialContext error: dial tcp xxx:7844: i/o timeout" connIndex=0 ip=xxx
ERR Unable to establish connection with Cloudflare edge error="DialContext error: dial tcp xxx:7844: i/o timeout" connIndex=0 ip=xxx.33
"
It works fine on machines on another network, both Linux and Windows, so it looks to be this network, which I can't port forward on.
I found SirTunnel: https://github.com/anderspitman/SirTunnel but this requires sudo on my SiteGround server, which isn't possible.
Are there any free alternatives I can use, or a way I can use Cloudflare through a different port?
Thanks

Joining a Docker swarm

I have 2 VMs.
On the first I run:
docker swarm join-token manager
On the second I run the command that the first one outputs, i.e.
docker swarm join --token SWMTKN-1-0wyjx6pp0go18oz9c62cda7d3v5fvrwwb444o33x56kxhzjda8-9uxcepj9pbhggtecds324a06u 192.168.65.3:2377
However, this outputs:
Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.65.3:2377: connect: connection refused"
Any idea what's going wrong?
If it helps I'm spinning up these VMs using Vagrant.
Just add the port to the firewall on the master side:
firewall-cmd --add-port=2377/tcp --permanent
firewall-cmd --reload
Then try docker swarm join again on the second VM (the node side).
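To verify the rule took effect, you can list the open ports on the master (a quick check, assuming firewalld is the active firewall):
firewall-cmd --list-ports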
I was facing a similar issue and spent a couple of hours figuring out the root cause; I'm sharing it for those who may hit the same thing.
Environment:
Oracle Cloud + AWS EC2 (2 + 2)
OS: Ubuntu 20.04.2
Docker version: 20.10.8
3 dynamic public IPs + 1 elastic IP
Issues
1. Created two instances on Oracle Cloud at the beginning.
2. Instance A (manager): docker swarm init --advertise-addr succeeded.
3. Instance B (worker): docker swarm join as a worker succeeded.
4. When I tried to promote B to manager, I encountered the error:
Unable to connect to remote host: No route to host
5. Mesh routing was not working properly.
Investigation
1. Suspected it was related to the network/firewall/security group/security list.
2. SSHed to server B (the worker) and ran telnet <manager ip> 2377, which failed with the same error:
Unable to connect to remote host: No route to host
3. Logged in to the Oracle console and added ingress rules under the security list for all the relevant ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
4. Tried again, but telnet still failed with the same error.
5. Checked the OS-level firewall and disabled it:
ufw disable
6. Tried again, but got the same result.
7. Suspecting something wrong with Oracle Cloud, I decided to install the same versions of the OS and Docker on AWS.
8. Added a security group allowing all the relevant ports/protocols and disabled ufw.
9. Tested with AWS instances C (leader/manager) and D (worker). It worked, D could be promoted to manager, and mesh routing also worked.
10. This confirmed the issue was with the Oracle Cloud instances.
11. Tried to join the Oracle instance (A) to C as a worker. It joined, but still could not be promoted to manager.
12. Used journalctl -f to investigate the logs and confirmed there were socket timeouts from A/B (the Oracle instances) to the AWS instance (C).
13. Looked at A/B again and found iptables rules blocking the requests.
14. Removed all the rules set up in iptables:
# reset the default policies and flush all rules
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
Root Cause
It was caused by a firewall, either at the cloud level (security group/WAF/ACL/security list) or at the OS level (e.g. ufw/iptables rules).
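A quick way to check for such blocking rules before flushing everything (a sketch; the ports are the standard swarm ports listed above):
# list all rules with counters and look for the swarm ports
iptables -L -n -v | grep -E '2377|7946|4789'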
I had already run firewall-cmd --add-port=2377/tcp --permanent and firewall-cmd --reload on the master side and was still getting the same error.
I ran telnet <master ip> 2377 on the worker node and then rebooted the master.
After that it worked fine.
It looks like your Docker swarm manager leader is not listening on port 2377. You can check by running this command on your swarm manager leader VM; if it is working fine, you will get output similar to this:
[root@host1]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
tilzootjbg7n92n4mnof0orf0 * host1     Ready   Active        Leader
Furthermore, you can check the listening ports on the leader swarm manager node. It should have TCP port 2377 (cluster management communications) and TCP/UDP port 7946 (communication among nodes) open:
[root@host1]# netstat -ntulp | grep dockerd
tcp6 0 0 :::2377 :::* LISTEN 2286/dockerd
tcp6 0 0 :::7946 :::* LISTEN 2286/dockerd
udp6 0 0 :::7946 :::* 2286/dockerd
On the second VM, where you are configuring the second swarm manager, you will have to make sure you have connectivity to port 2377 of the leader swarm manager. You can use tools like telnet, wget, or nc to test the connectivity, as shown below:
[root@host2]# telnet <swarm manager leader ip> 2377
Trying 192.168.44.200...
Connected to 192.168.44.200.
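The same check with nc (a sketch; -z scans without sending data and -v reports the result):
nc -zv <swarm manager leader ip> 2377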
In my case I was on Linux and Windows. My Windows Docker private subnet was the same as my local network's address range, so the Docker daemon was looking inside its own network for the master address I was giving it and couldn't find it.
So I did:
1. Go to the Docker Desktop app.
2. Go to Settings.
3. Go to Resources.
4. Go to the Network section and change the Docker subnet address (it needs to be different from your local subnet address).
5. Apply and restart.
6. Run docker swarm join on the worker again.
Note: all these steps are performed on the node where the error appears. Make sure that ports 2377, 7946 and 4789 are open on the master (you can use iptables or ufw); for example:
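A sketch with ufw, assuming it is the active firewall on the master:
ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp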
Hope it works for you.

Port Forwarding for compute engine google cloud platform

I'm trying to open TCP port 28016 and UDP port 28015 for a game server in my Compute Engine VM running Microsoft Windows Server 2016.
I've tried opening the ports inside my server over RDP, going to the Windows Firewall settings and creating new inbound rules for both TCP 28016 and UDP 28015.
I've also set firewall rules for both ports under my Cloud Platform Firewall Rules.
When running my game server application, netstat didn't show either port as being used or listening; they didn't even show up. What did I do wrong?
Edit: they now show up with netstat -a -b, but without LISTENING.
If it doesn't show as LISTENING, it's not a firewall or "port forwarding" issue; rather, the application either isn't running, or is running but isn't configured to listen for connections on that port.
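To confirm what state the ports are in, you can filter the netstat output on the server itself (a sketch; run it in an elevated prompt so -b can resolve the owning process names):
netstat -a -n -b | findstr "28016 28015"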

How to judge a port is open or closed

How can I tell whether a port is open or closed? What is the exact meaning of an open port and a closed port?
My favorite tool to check whether a specific port is open or closed is telnet. You'll find this tool on most operating systems.
The syntax is: telnet <hostname/ip> <port>
This is what it looks like if the port is open:
telnet localhost 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
This is what it looks like if the port is closed:
telnet localhost 9999
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
Based on your use case, you may need to do this from a different machine, just to rule out firewall rules being an issue. For example, just because I am able to telnet to port 3306 locally doesn't mean that other machines are able to access port 3306. They may see it as closed due to firewall rules.
As far as what open/closed ports means, an open port allows data to be sent to a program listening on that port. In the examples above, port 3306 is open. MySQL server is listening on that port. That allows MySQL clients to connect to the MySQL database and issue queries and so on.
There are other tools to check the status of multiple ports. You can Google for Port Scanner along with the OS you are using for additional options.
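One such tool is nmap (a sketch; the port range and host are illustrative, and you should only scan hosts you are authorized to probe):
nmap -p 3300-3310 localhost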
A port that is open is a port to which you can connect (TCP) or send data (UDP). It is open because a process opened it.
There are many different types of ports. Those used on the Internet are TCP and UDP ports.
To see the list of existing connections you can use netstat (available under Unix and MS-Windows). Under Linux, we have the -l (--listen) command line option to limit the list to open ports (i.e. listening ports).
> netstat -n64l
...
tcp 0 0 0.0.0.0:6000 0.0.0.0:* LISTEN
...
udp 0 0 0.0.0.0:53 0.0.0.0:*
...
raw 0 0 0.0.0.0:1 0.0.0.0:* 7
...
In my example, I show a TCP port 6000 opened. This is generally for X11 access (so you can open windows between computers).
The other port, 53, is a UDP port used by the DNS system. Notice that UDP ports are "just open": you can always send packets to them, but you cannot create a client/server connection like you do with TCP/IP. Hence, in this case you do not see the LISTEN state.
The last entry here is "raw". This is a local type of port which only works between processes within one computer. It may be used by processes to send RPC events and such.
Update:
Since then netstat has been somewhat deprecated and you may want to learn about ss instead:
ss -l4n
-- or --
ss -l6n
Unfortunately, at the moment you have to select either -4 or -6 for the corresponding stack (IPv4 or IPv6).
If you're interested in writing C/C++ code or the like, you can read that information from /proc/net/.... For example, the TCP connections are found here:
/proc/net/tcp (IPv4)
/proc/net/tcp6 (IPv6)
Similarly, you'll see UDP files and a Unix file.
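These files are plain text, but the addresses and ports in them are hexadecimal. A minimal peek from the shell (the 0CEA value is just an example, 3306 in hex):
cat /proc/net/tcp | head -3
# local_address is <hex ip>:<hex port>; convert a port by hand:
printf '%d\n' 0x0CEA    # prints 3306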
Programmatically, if you are only checking one port then you can just attempt a connection. If the port is open, then it will connect. You can then close the connection immediately.
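From a shell, that connect-and-close check can be done with bash's built-in /dev/tcp pseudo-device (a sketch; the host and port are illustrative):
timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/3306' && echo open || echo closed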
Finally, there is the kernel's direct socket connection for socket diagnostics, like so:
#include <sys/socket.h>      /* socket() */
#include <linux/netlink.h>   /* AF_NETLINK, NETLINK_SOCK_DIAG */
#include <linux/sock_diag.h> /* diagnostics request/response structures */

/* open a netlink socket for kernel socket diagnostics */
int s = socket(AF_NETLINK,
               SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
               NETLINK_SOCK_DIAG);
The main problem I have with that one is that it does not really send you events when something changes, but you can read the current state in structures, which is safer than attempting to parse files under /proc/....
I have some code handling such a socket in my eventdispatcher library. It still has to poll to get the data, since the kernel does not generate events on its own (a push would be much better, since it only has to happen once, when an event actually occurs).

UDP auto-discovery for peers on the same machine

I'm looking at ZeroMQ Realtime Exchange Protocol (ZRE) as inspiration for building an auto-discovery of peers in a distributed application.
I've built a simple prototype application using UDP in Python following this model. It seems it has the (obvious, in retrospect) limitation that it only works for detecting peers if all peers are on other machines. This is due to the socket bind operation on the discovery port.
Reading up on SO_REUSEADDR and SO_REUSEPORT tells me that I can't exactly do this with the UDP broadcast scheme as described in ZRE.
If you needed to build an auto-discovery mechanism for distributed applications such that multiple application instances (possibly with different versions) can run on the same machine, how would you build it?
You should be able to bind each server instance to a different address. The entire subnet 127.0.0.0/8 should resolve to your localhost, so you can set up - for example - one service listening on 127.0.0.1, another listening on 127.0.0.2, etc. Anything from 127.0.0.1 to 127.255.255.254.
# works as expected
nc -l 127.0.0.100 3000 &
nc -l 127.0.0.101 3000 &
# shows error "nc: Address already in use"
nc -l 127.0.0.1 3000 &
nc -l 127.0.0.1 3000 &
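Since the question is about UDP, note that the same trick works for datagram sockets (a sketch; nc's -u flag switches it to UDP, and the addresses and port are illustrative):
# both listeners coexist because they are bound to distinct loopback addresses
nc -lu 127.0.0.100 3000 &
nc -lu 127.0.0.101 3000 &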
