I have a Python 3 application deployed in Google App Engine, flexible environment.
I'm using psycopg2 to connect to a PostgreSQL instance hosted in Google Cloud SQL.
I'm having trouble connecting to PostgreSQL from Google App Engine.
The Cloud SQL Proxy seems to initialize fine, but it binds to 0.0.0.0:
Listening on 0.0.0.0:5432 for projectID:us-central1:my-db
Trying to connect on 127.0.0.1 or localhost doesn't work; the connection is refused.
What does work is using Docker's default bridge IP, 172.17.0.1 (from the docker0 adapter; the App Engine flexible environment uses Docker underneath).
Hard-coding that IP address to connect to Cloud SQL seems like it would bite me in the ass if someone decides to change it.
Why is this happening?
Is using the default docker0 adapter's IP address a viable long term solution?
Is there an alternative other than switching from the TCP approach to a socket-based connection?
It sounds like you are running the Cloud SQL Proxy on your host machine, while your application runs inside a container. It can't connect to the proxy because 127.0.0.1 refers to Docker's loopback interface, while the proxy is bound to the host machine's interface. 172.17.0.1 is the address the container can use to reach the host interface.
One alternative is to use host networking (https://docs.docker.com/network/host/) by passing --network host. This causes the host's interface to be used for the application.
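As a sketch, assuming an image name of my-app-image, running the application container with host networking looks like:

```shell
# Run the app container on the host's network stack (image name is an assumption).
# With host networking, 127.0.0.1:5432 inside the container is the host's
# loopback interface, where the Cloud SQL Proxy is listening.
docker run --network host my-app-image
```

Note that --network host only works on Linux hosts; it is ignored by Docker Desktop for Mac and Windows.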
I've switched from TCP to a Unix socket as the connection method.
The TCP issue seems to be a bug in the App Engine flexible environment. But it's a beta feature (it lives under beta_settings in app.yaml) and I'm not holding out for Google to fix it.
I also don't want to commit to an IP address that could be changed sometime in the future as a workaround.
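For reference, the Unix-socket setup looks roughly like this (a sketch; the instance connection name is the one from the proxy log above, and the /cloudsql path is the standard mount App Engine provides):

```yaml
# app.yaml -- beta feature: tells App Engine flex to expose the Cloud SQL socket
beta_settings:
  cloud_sql_instances: projectID:us-central1:my-db
```

The application then connects with psycopg2 by passing host=/cloudsql/projectID:us-central1:my-db instead of a TCP host and port.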
I have a Pterodactyl installation on my node.
I am aware that Pterodactyl runs using Docker, so to keep my backend IP from being exposed when connecting to the servers, I am using a GRE tunnel from X4B.net.
After installing the script I was provided by X4B, I got this message:
Also Note: This script does not adjust the configuration of your applications. You should ensure your applications are bound to 0.0.0.0 or the appropriate tunnel IP.
At first I was confused; I tried connecting to my server but nothing worked, so I was thinking it was due to Docker not being bound to 0.0.0.0.
As for the network layout, I was provided with:
10.16.1.200/30 Network,
10.16.1.201 Unified Gateway,
10.16.1.202 Bound via NAT to 103.249.70.63,
10.16.1.203 Broadcast
So if I host a Minecraft server, what IP address would I use?
So I've been trying to host my own Minecraft server for a while now and I hit a snag.
I have Proxmox (192.168.2.100) running an Ubuntu Server VM (192.168.2.101), which has a Docker container running my Minecraft server. I can connect to the server locally just fine using the Ubuntu VM's IP address and the Minecraft port, but when I try to port-forward the server, I can't connect to it. I checked whether my port was exposed, and it is, so I know it's not that.
This is the container that I'm using
Okay, so after a long time looking at configurations, and with some help from the Proxmox forum, it turns out that my modem doesn't support NAT reflection, which means any attempt to access my server through my public IP from inside the same network wouldn't work at all. I used my phone's mobile network to test whether I could access the server from outside my local network, and it worked just fine!
I have a MongoDB Docker container that I only want to be accessible from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, Docker still seems to expose it to the public.
I am using Linux CentOS 7
and docker-compose for setting up Docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after Docker starts up, because Docker rewrites its iptables chains on startup.
Another thing to do is to use a non-default port for mongod: modify docker-compose.yml (remember to add --port=XXX to the command directive).
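A minimal docker-compose.yml sketch of that non-default-port idea (service name and image tag are assumptions):

```yaml
services:
  mongo:
    image: mongo:4.4
    # run mongod on a non-default port, as suggested above
    command: ["mongod", "--port=27018"]
```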
For better security, I suggest putting your server behind an external firewall.
If you have your application in one container and MongoDB in another, what you need to do is connect them together using a network that is set to be internal.
See the documentation:

Internal
By default, Docker also connects a bridge network to it to provide external connectivity. If you want to create an externally isolated overlay network, you can set this option to true.
See also this question
Here's the tutorial on networking (it doesn't cover internal, but it is good for understanding).
You may also limit traffic on MongoDb by Configuring Linux iptables Firewall for MongoDB
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
Read more on Wikipedia.
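As a quick sanity check, the three ranges above are exactly the RFC 1918 private blocks; a small Python sketch using the standard ipaddress module can verify whether a given address falls inside them:

```python
import ipaddress

# The three RFC 1918 private ranges listed above, in CIDR form.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),      # 10.0.0.0 - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 192.168.0.0 - 192.168.255.255
]

def is_private(addr: str) -> bool:
    """Return True if addr falls in one of the private ranges above."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_private("172.17.0.1"))   # True: Docker's default bridge is private
print(is_private("8.8.8.8"))      # False: a public address
```

This is also why Docker's default docker0 address (172.17.0.1) never conflicts with public internet addresses.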
You may connect a container to more than one network, so typically an application container is connected to both the outside-world (external) network and the internal network. The application communicates with the database over the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (the internet).
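A docker-compose sketch of that layout (service, image, and network names are assumptions): the app joins both networks, the database only the internal one:

```yaml
services:
  app:
    image: my-app              # assumed image name
    networks: [frontend, backend]
    ports:
      - "8080:8080"            # reachable from outside
  mongo:
    image: mongo:4.4
    networks: [backend]        # internal only; no published ports
networks:
  frontend: {}
  backend:
    internal: true             # no external connectivity on this network
```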
I found a post that may help; I'm posting it here for people who need it in the future.
For security, we need both the hardware firewall and the OS firewall enabled and configured properly. I found that firewall protection is ineffective for ports opened by a Docker container listening on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation was:
A server with CentOS 7.9 and Docker version 20.10.17 installed
A Docker container running with port 3000 opened on 0.0.0.0
The firewalld service started with the command systemctl start firewalld
Only port 22 allowed access from outside the server, per the firewall configuration
It was expected that no one else could access port 3000 on that server, but the testing result was the opposite: port 3000 was successfully accessed from other servers. Thanks to the blog post, I now have my server protected by the firewall.
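One simple mitigation, assuming a docker-compose setup like the one implied above (service and image names are assumptions), is to publish the port on the loopback interface only, so Docker's iptables rules never expose it to the outside:

```yaml
services:
  web:
    image: my-web-app             # assumed image name
    ports:
      - "127.0.0.1:3000:3000"     # bound to loopback only, not 0.0.0.0
```

The service stays reachable from the host itself while remaining invisible to other machines, regardless of the firewalld configuration.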
I am trying to connect my BACnet client, which has been containerized, to the BACnet server running on the host machine. I am using Docker for Windows on Windows 10 (the host machine) with Linux containers.
I have tried the following:
a. Publishing port 47808 for the client container with the run command.
b. Running the container with network=host, to access services of localhost.
c. Tried specifying the gateway IP as the server's IP address with run command.
d. Running the container in the same subnet as my server
e. Running the container with the host IP specified and the ports published.
My BACnet server, taken from https://sourceforge.net/projects/bacnet/, always connects to the DockerNAT address, 10.0.75.1. Any idea why this happens? The server application is not a container but an executable file.
Server IP: 10.0.75.1 (DockerNAT)
Client container running on host machine.
From a quick google:

For Windows containers this component is not used and containers and their ports are only accessible via the NATed IP address.
With respect to BACnet, this is going to put you in a world of hurt. You will have to use a BACnet BBMD with NAT support in your container to achieve this, and your BACnet client will have to register as a BACnet foreign device. The BACnet stack at SourceForge does seem to have some NAT support (the code seems to be there, but I have never tested it in its original form).
So what you are seeing is 'expected', but your solution is going to require that you become much more familiar with BACnet BBMDs than you ever want to be. Read the BACnet specification carefully. Good luck.
I am having the same problem as mentioned here: Cannot access kubernetes service via outside network. I have tried the solution mentioned using Ingress, but without any success.
My pods are up and running, along with my service.
I can curl any of the endpoints successfully from within a pod, but not able to curl from the host.
When I am using Ingress, the address field is blank, and trying to curl the hostname gives Could not resolve host.
I am using Kubernetes on Docker Edge, on a MacBook Pro.
How do I curl the service endpoint from the host?
First of all, please note that Kubernetes on macOS runs a separate virtual machine in which both the Docker containers and Kubernetes itself run. It is important to understand that you can therefore have problems connecting from macOS to some Kubernetes resources: TCP connections are not realized the same way they are in a cloud environment. It depends on how the internetworking between macOS and the VM running the Kubernetes stack is configured (NAT, bridge, or host-only connection).
I suppose that you chose a NodePort Service. In this kind of configuration, you need to know both the IP address of a node and the port on which Kubernetes is listening for incoming connections. Ingress, in this case, analyses the HTTP Host header to determine the route for the traffic; it's similar to a Service of type: NodePort in that you need to call the proper address, and it is not obvious that the service is listening on a well-known port. In fact, it is a bit tricky: it may not be easy to connect from macOS to a type: NodePort service without knowing where Kubernetes created the listening socket, and without being sure that macOS actually has routes to that TCP port on the VM.