I am currently using Ubuntu 10.04 for some rails development. It is installed as a guest machine using VirtualBox on a Windows 7 x64 host.
Within Ubuntu, I am trying to tunnel several ports from a remote server directly to the guest OS, to avoid having to download a remote database.
Let's say I want to forward port 5000 on the remote server to port 5000 on the guest OS.
I have set up a forwarder for the port on the Windows side, using VBoxManage.exe. This forwards HostPort 5000 to GuestPort 5000.
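For reference, with the VBoxManage NAT port-forwarding syntax of VirtualBox 4.x and later, such a rule might look like this (the VM name "Ubuntu" and the rule name "db" are placeholders):

VBoxManage modifyvm "Ubuntu" --natpf1 "db,tcp,,5000,,5000"

The empty fields are the host and guest IPs; leaving them blank binds the host side to all interfaces and forwards to the guest's primary address.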
Then, within Ubuntu, I run ssh -L 5000:127.0.0.1:5000 <remote-server>. However, whenever I try to access 127.0.0.1:5000, I receive the message "channel 7: open failed: connect failed: Connection refused".
Am I missing something?
Thanks for the help!
connect failed: Connection refused
This means that you're not able to connect to port 5000 on the remote end.
If you're only using this connection from within your guest, through your SSH tunnel, then you don't need the forward from VBoxManage: that only opens things up so outside computers can connect directly to your guest; it won't help your guest connect to the outside.
Are you sure the server you connect (SSH) to is the same server that runs your database? And is the database running on that server?
Once you've connected (SSH) to the server, you can list what ports are listening for connections, or try to connect to the database with telnet. To list listeners, run:
netstat -lnt
(-l shows listening sockets, -n is numeric output (show IP and port number), and -t restricts to TCP.) You should see a line like
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN
if a service is listening for TCP on port 5000. To try a connection, you can simply do:
telnet 127.0.0.1 5000
If you can't connect with telnet from the server itself, then the database isn't listening for (or allowing) your connection, or it is running on another port or another host.
SSH uses TCP traffic by default, right?
Just to verify, NAT in VirtualBox does have these limitations (per the User Manual):
There are four limitations of NAT mode which users should be aware of:
ICMP protocol limitations: Some frequently used network debugging tools (e.g. ping or traceroute) rely on the ICMP protocol for sending/receiving messages. While ICMP support has been improved with VirtualBox 2.1 (ping should now work), some other tools may not work reliably.
Receiving of UDP broadcasts is not reliable: The guest does not reliably receive broadcasts, since, in order to save resources, it only listens for a certain amount of time after the guest has sent UDP data on a particular port. As a consequence, NetBIOS name resolution based on broadcasts does not always work (but WINS always works). As a workaround, you can use the numeric IP of the desired server in the \\server\share notation.
Protocols such as GRE are unsupported: Protocols other than TCP and UDP are not supported. This means some VPN products (e.g. PPTP from Microsoft) cannot be used. There are other VPN products which simply use TCP and UDP.
Forwarding host ports lower than 1024 impossible: On Unix-based hosts (e.g. Linux, Solaris, Mac OS X) it is not possible to bind to ports below 1024 from applications that are not run by root. As a result, if you try to configure such a port forwarding, the VM will refuse to start.
Try ssh -L 0.0.0.0:5000:127.0.0.1:5000 instead of ssh -L 5000:127.0.0.1:5000. The full syntax is -L [bind_address:]port:host:hostport, and the optional bind_address controls which of your interfaces the local end of the tunnel listens on.
By default that local end is bound to 127.0.0.1, the loopback address, which will cause you grief if you try to reach the forwarded port from a different machine, i.e. your host machine.
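As an alternative sketch, assuming OpenSSH: the -g flag achieves the same thing by allowing remote hosts to connect to locally forwarded ports (user@remote-server is a placeholder):

ssh -g -L 5000:127.0.0.1:5000 user@remote-server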
Related
I have an Azure Container App running that listens on a public TCP port 8000 (via the load balancer) for incoming connections. When connections arrive, I serve them data and everything goes as expected.
My problem is when I stop the server listening on that port. In that case, a client application trying to connect to my public IP address at port 8000 would expect to get an error like 'Could not connect', but this is not happening. What is in fact happening is that the Container Apps environment seems to forward the data to that port no matter what (even if there is no server listening). As such, a client connecting to that port can't tell that the server that should be listening is really stopped (in order to resend the data at a later time).
Example:
Open a TCP client (e.g. PacketSender) and try to send some data to port 6000 on your localhost. You should receive a 'Could not connect' error message.
Now run the following in Docker:
docker run -p 6000:6000 nginxdemos/hello:plain-text
Try again to send some data to port 6000 via a TCP client. This time the data will be sent even though the nginxdemos container doesn't listen on port 6000 (but probably on 80).
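If you'd rather reproduce this without a GUI client, netcat shows the same difference (assuming nc is installed):

nc -vz 127.0.0.1 6000    # before the container runs: "Connection refused"
docker run -d -p 6000:6000 nginxdemos/hello:plain-text
nc -vz 127.0.0.1 6000    # after: the TCP connect succeeds even though nothing inside listens on 6000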
Is there any way I can solve this on the server side and ensure that clients can't connect while the server is stopped? I have devices sending thousands of messages to a Container App, but because they do not expect any kind of ACK, they assume the data has been transmitted (even though it hasn't) and never try to resend it.
Not sure about the Docker example; it probably depends on how Docker on that system implements port forwarding (if Docker's default userland proxy is in play, a docker-proxy process accepts the connection on the host port itself and then relays it, so the TCP handshake can succeed even when nothing is listening inside the container).
In Azure ContainerApps: no, this is not possible. There is always some component listening on the port, even if your application is not running or is restarting, provisioning, scaling, etc. The connection will be buffered until the app starts listening on the port or it times out.
I have a Pterodactyl installation on my node.
I am aware that Pterodactyl runs using Docker, so to protect my backend IP from being exposed when connecting to the servers, I am using a GRE tunnel from X4B.net.
After installing the script provided by X4B I got this message:
Also Note: This script does not adjust the configuration of your applications. You should ensure your applications are bound to 0.0.0.0 or the appropriate tunnel IP.
At first I was confused; I tried connecting to my server and nothing worked, so I suspected it was because Docker was not bound to 0.0.0.0.
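For a Minecraft server specifically, the bind address lives in server.properties; a minimal sketch of what the X4B note asks for (values are illustrative):

# server.properties
# bind to all interfaces; an empty server-ip has the same effect
server-ip=0.0.0.0
server-port=25565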
As for the network layout I was provided with:
10.16.1.200/30 Network,
10.16.1.201 Unified Gateway,
10.16.1.202 Bound via NAT to 103.249.70.63,
10.16.1.203 Broadcast
So If I host a minecraft server what IP address would I use?
I have created a VM instance with a Windows OS (windows-server-2019-dc-v20200211) in Google Cloud. I established an RDP connection and installed Jenkins on the VM, but how can I access it from other networks using the VM's external IP?
Could someone help me with this?
Note: I want to install Jenkins on Windows Server, not on Linux.
I suggest you check the following:
First, make sure the local firewall on your Windows server allows connections on port 8080. Second, the network firewall rules should allow both incoming and outgoing traffic on TCP port 8080; on GCP that means a VPC firewall rule permitting ingress on tcp:8080, as sketched below.
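As a sketch, assuming default setups (the rule names here are placeholders), the two rules could look like:

gcloud compute firewall-rules create allow-jenkins-8080 --direction=INGRESS --allow=tcp:8080

netsh advfirewall firewall add rule name="Jenkins 8080" dir=in action=allow protocol=TCP localport=8080

The first runs anywhere gcloud is configured for your project; the second in an elevated prompt on the Windows VM itself.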
Also check some of these related threads for more help: [1] https://superuser.com/questions/1212645/cannot-expose-jenkins-externally [2] https://apple.stackexchange.com/questions/31376/how-can-i-open-port-8080-of-mac-os-x-lion [3] Jenkins server is not accessible by host name (ip address)
I have a Python 3 application deployed in Google App Engine, flexible environment.
I'm using psycopg2 to connect to a PostgreSQL instance hosted in Google Cloud SQL.
I'm having trouble connecting to PostgreSQL from Google App Engine.
The Cloud SQL proxy seems to initialize OK, but it binds to 0.0.0.0:
Listening on 0.0.0.0:5432 for projectID:us-central1:my-db
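(For context: with the legacy cloud_sql_proxy binary, the bind address comes from the -instances flag, so a log line like that corresponds to an invocation along these lines, with the instance name taken from the log above:)

cloud_sql_proxy -instances=projectID:us-central1:my-db=tcp:0.0.0.0:5432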
Trying to connect to 127.0.0.1 or localhost doesn't work; the connection is refused.
What does work is using 172.17.0.1, Docker's default gateway address from the docker0 adapter (the App Engine flexible environment uses Docker underneath).
Using that IP address to connect to Cloud SQL seems like it would bite me in the ass if someone decides to change it.
Why is this happening?
Is using the docker0 adapter's default IP address a viable long-term solution?
Is there an alternative, other than switching to a socket-based connection instead of the TCP approach?
It sounds like you are running the Cloud SQL proxy on your host machine while your application runs inside a container. The reason the app can't connect to the proxy is that 127.0.0.1 refers to the container's own loopback interface, while the proxy is bound to the host machine's interfaces. 172.17.0.1 is the address the container can use to reach the host's interface.
One alternative is to use host networking (https://docs.docker.com/network/host/), by passing in --network host. This will cause the host's interface to be used for the application.
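A minimal sketch of that option (the image name is hypothetical). With host networking the container shares the host's network namespace, so 127.0.0.1 inside the container is the host's loopback and the proxy becomes reachable:

docker run --network host my-app-image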
I've switched from TCP as the connection method to a Unix socket.
The TCP issue seems to be a bug in the App Engine flexible environment, but it's a beta feature (it lives under beta_settings in app.yaml) and I'm not holding out for Google to fix it.
I also don't want to commit, as a workaround, to an IP address that could change sometime in the future.
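For what it's worth, in the flexible environment the socket ends up under /cloudsql/<instance connection name>, so a quick sanity check from a shell might look like this (the database name and user are placeholders):

psql "host=/cloudsql/projectID:us-central1:my-db dbname=mydb user=postgres"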
I have a server running inside a Docker container, listening on a UDP port, let's say 1234. This port is exposed in the Dockerfile.
I also have an external server helping with NAT traversal; basically it just sends the addresses of the registered server and of a client to each other, and allows a client to connect to a server by the name the server sent during registration.
Now, if I run my container with the -P option, my port gets published as some random port, e.g. 32774. But on the helper server I see my server connecting from port 1234, so the helper can't send a correct address to the client, and the client can't connect at all.
If I run my container explicitly publishing the server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now the helper server sees my server connecting from port 1236, so again it can't send the correct port to the client.
How can this be resolved? My aim is to require as little additional configuration as possible from people who will use my Docker image.
EDIT: So I need either to know my external port number from inside the container, to send it to the discovery server (which, as I understand it, is not possible at the moment), or to make outgoing connections from the container and my port use the same external port as the one configured for incoming connections. Is that possible?
The ports are managed by Docker and the Docker network adapter. When using only -P, the port is exposed Docker-internally and is reachable through Docker linking. When using -p 1234:1234, the port is mapped to a host port and is directly reachable by a client, as well as available for linking.
Start the helper server with a link option, --link <server container name>:server. The helper server can then connect to host "server" on port 1234; the correct IP address is managed by Docker.
Enable Docker to change your iptables configuration, which is the Docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should hand out the host IP, not the Docker container's IP address: the container address only works inside the host where the Docker network adapter is running.
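A sketch of that linking approach with placeholder names: start the game server under a fixed container name, then link the helper to it, so that inside the helper the hostname "server" resolves to the game server's container:

docker run -d --name game-server -p 1234:1234/udp my-server-image
docker run -d --link game-server:server my-helper-image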