How to access localhost service with http proxy on docker network - docker

So I have a setup with a react service running in a docker-compose service and on a network in that compose. For that react service I use the http-proxy-middleware to be able to just use relative endpoints (/api/... instead of localhost:xxxx/api/...) both in development and in production but also because one of the libraries that I depend on requires it (for the same reason).
I also have a python flask backend that I want to run on the localhost network to be able to avoid restarting the entire docker-compose on every change.
Currently, the proxy (as expected, I suppose) gives an "ECONNREFUSED" error when used, as it cannot connect to the backend.
Does anyone have an idea of how I could get the proxy to be able to access the backend without having to run the backend in the docker-compose?
Thanks in advance, Vidar

So I finally got it working, with help from @Hikash, by pointing my frontend proxy at the host through the docker0 bridge IP, which I get from ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+'.
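As a sketch of a more portable variant of the same idea (the service name and Docker 20.10+ `host-gateway` support are my assumptions, not part of the original setup), the compose file can map a hostname to the host's gateway IP so the proxy doesn't hard-code the bridge address:

```yaml
# docker-compose.yml (sketch; service name is illustrative)
services:
  frontend:
    build: .
    extra_hosts:
      # Maps host.docker.internal to the host's bridge IP (Docker 20.10+)
      - "host.docker.internal:host-gateway"
    # The http-proxy-middleware target would then be
    # http://host.docker.internal:<backend-port>
```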

Related

How to determine forwarded-allow-ips for uvicorn server from docker network

I use Traefik as a proxy and also have a React frontend and FastAPI as the backend. My backend is publicly exposed. The problem is that the Uvicorn server redirects you from a non-trailing-slash URL to the URL with a slash (https://api.mydomain.com/posts -> http://api.mydomain.com/posts/), but it doesn't preserve SSL. So I'm getting errors in the frontend about CORS and mixed content. Based on the topic FastAPI redirection for trailing slash returns non-ssl link I added --forwarded-allow-ips="*" to the Uvicorn server and now the SSL redirection works, but as I understand it, that's not secure. I tried --forwarded-allow-ips="mydomain.com" but it doesn't work, and I have no idea why, as "mydomain.com" is the IP of the server and therefore the IP of my proxy. I assume that's because my API gets the proxy's IP from the Docker network; I don't know how to solve this.
If you're positive that the Uvicorn/Gunicorn server is only ever accessible via an internal IP (is completely firewalled from the external network such that all connections come from a trusted proxy [ref]), then it would be okay to use --forwarded-allow-ips="*".
However, I believe the answer to your original question would be to inspect your Docker container(s) and grab their IPs. Since traefik is routing/proxying requests, you probably just need to grab the traefik container ID and run an inspect:
~$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED        STATUS        PORTS                                                                      NAMES
ids_here       traefik:v2.3   "/entrypoint.sh --pr…"   1 second ago   Up 1 second   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   traefik
~$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ids_here
172.18.0.1
Try whitelisting that IP.
From what I've read, it is not possible to whitelist a subnet with --forwarded-allow-ips="x". If your server restarts, your containers' internal IPs are likely to change. A hack-ish fix for this is to force the container start order using depends_on: the depends_on container will start first and get a lower/first IP assignment.
However, it is possible in Docker like this [ref]:
labels:
- "traefik.http.middlewares.testIPwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
- "traefik.http.middlewares.testIPwhitelist.ipwhitelist.ipstrategy.depth=2"
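For those labels to take effect, the middleware still has to be attached to a router. A minimal sketch of how that could look (the service name `api` and its router name are my own illustration, not from the question):

```yaml
# Sketch only -- service and router names are illustrative
services:
  api:
    image: myorg/api:latest
    labels:
      - "traefik.http.middlewares.testIPwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
      - "traefik.http.middlewares.testIPwhitelist.ipwhitelist.ipstrategy.depth=2"
      # Attach the whitelist middleware to the router serving this service
      - "traefik.http.routers.api.middlewares=testIPwhitelist@docker"
```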

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another using pure docker. Specifically using the --net=container:something option (trimmed the example):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run a VPN client in one container, then all containers in that pod will have network access via the VPN.
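As a sketch of that idea (all names and images below are illustrative assumptions, not from the question), a two-container pod could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
    # VPN sidecar: typically needs NET_ADMIN to create the tunnel device
    - name: vpn
      image: my-registry/kerio-vpn-client:latest
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
    # Application container: shares the pod's network namespace,
    # so its outbound traffic can use the tunnel the sidecar establishes
    - name: app
      image: my-registry/dotnet-app:latest
```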
Based on your comment I think I can advise you two methods.
Private GKE Cluster with CloudNAT
In this setup, you should use a private GKE cluster with Cloud NAT for external communication. You would need to use a manual external IP.
This scenario uses a specific external IP for the VPN connection, but it requires your customer to whitelist access for this IP.
Site to site VPN using CloudVPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stackoverflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the django container are in the same pod. My limited understanding is that it would be enough to run VPN in the background in the nginx container and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

How to make a Docker container's service accessible via the container's IP address?

I'm a bit confused. I'm trying to run both an HTTP server listening on port 8080 and an SSH server listening on port 22 inside a Docker container. I managed to accomplish the latter but, strangely, not the former.
Here is what I want to achieve and how I tried it:
I want to access services running inside a Docker container using the IP address assigned to the container:
ssh user@172.17.0.2
curl http://172.17.0.2:8080
Note: I know this is not how you would configure a real web server but I want the container to mimic an embedded device which runs both services and which I don't have available all the time. (So it's really just a local non-production thing with no security requirements).
I didn't expect integrating the SSH server to be easy, but to my surprise I just installed and started it and had to do nothing else to be able to connect to the machine via ssh (no EXPOSE 22 or --publish).
Now I wanted to access the container via HTTP on port 8080 and fiddled with --publish and EXPOSE but only managed to make the HTTP server available through localhost/127.0.0.1 on the host. So now I can access it via
curl http://127.0.0.1:8080/
but I want to access both services via the same IP address which is NOT localhost (e.g. the address the container got randomly assigned is totally OK for me).
Unfortunately
curl http://172.17.0.2:8080/
waits until it times out every time I tried it.
I tried docker run together with -p 8080, -p 127.0.0.1:8080:8080, -p 172.17.0.2:8080:8080 and many more combinations, with and without EXPOSE 8080 in the Dockerfile, but without success.
Why can I access the container via port 22 without having exposed anything?
And how do I make it accessible via the container's IP address?
Update: looks like I'm experiencing exactly what's described here.

Can't connect to JHipster UAA server using Curl

I am running a docker-compose'd architecture with a registry, gateway (8080), uaa (9999), and 2 microservices (8081 and 8082) and I can see the Swagger API in the gateway app via dropdown selection. I can login to the gateway with admin and user. I've also modified to the code to accept an owner, agent, and monitor role. I can login just fine.
In a terminal I tried Baeldung's curl command (Blog posting) to get a token from the uaa server directly for testing APIs.
[~]$ curl -X POST --data "username=user&password=user&grant_type=password&scope=openid" http://localhost:9999/oauth/token
curl: (7) Failed to connect to localhost port 9999: Connection refused
I opened Kitematic and the uaa server is localhost (host) and 9999 (port) in the docker container log.
Can someone help me figure out why Curl is not working for me?
thanks,
David
This issue is almost certainly related to the network properties of the stack that you are deploying.
If you are issuing the curl command from the host machine to http://localhost:9999, then you need to make sure that the UAA server is mapping its port to the host.
Does your UAA service have this in the docker-compose.yml?
ports:
- "9999:9999"
If not, you need to add it in order to test it from the host.
By default, docker-compose will create a bridge network for your stack, where your containers can talk to each other and resolve each other by container name. But from the host, you will not be able to reach the containers unless you explicitly map their exposed ports to ports on the host.
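Putting the two access paths together in a sketch (the service names here are illustrative, not taken from the actual JHipster stack):

```yaml
services:
  uaa:
    image: myorg/uaa:latest
    ports:
      - "9999:9999"   # host:container -- makes curl http://localhost:9999 work from the host
  gateway:
    image: myorg/gateway:latest
    # Inside the compose network, other services reach the UAA
    # by service name rather than localhost, e.g. http://uaa:9999
```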

Docker communication between apps in separate containers

I have been looking everywhere for this answer. To me it seems like an obvious question, however, the answer has eluded me.
My current setup is, I have redis, mongodb and two api servers on the same bridge network. The first server serves as a gateway api that does all the auth, and exposes certain api calls. The backend api is the one that handles all the db interactions and data munging. If I hit the backend (inner) api alone, I am able to see the contents (this api would not be exposed in real production environment). However, if I make the same request from within the gateway api, I am not able to hit the backend (inner) api that is also part of the bridged network I created.
Below is a diagram of the container interactions.
I still use legacy linking, but I'm somewhat familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" from inside the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers will be able to resolve that address/port because they 'think' it's a remote machine.
Here's a way to test what I'm thinking. In your host's shell, run the bridge inspect command shown here. Copy the IP address from Containers.<inner-api-hash>.IPV4. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and then use curl or wget to see if you can hit that IP address you copied.
If my thinking is correct, you will see that you must use your inner-API node's Docker assigned IP address from the other containers. Amongst other options, you can start containers with a static IP address as shown here.
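For the static-IP option mentioned above, a hedged docker-compose sketch (the network name, subnet, and addresses are illustrative assumptions):

```yaml
services:
  inner-api:
    image: myorg/inner-api:latest
    networks:
      apinet:
        ipv4_address: 172.20.0.10   # fixed address other containers can rely on

networks:
  apinet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
```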
This is starting to escape the scope of my knowledge, but you can also configure a container DNS. Configure container DNS.
