How to set up docker with no_proxy settings

I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the docker containers to proxy requests through a squid proxy with access to the additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following docker proxy configuration (note the noProxy settings):
~/.docker/config.json
...
"proxies": {
"default": {
"httpProxy": "http://172.30.245.96:3128",
"httpsProxy": "http://172.30.245.96:3128",
"noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
}
}
...
With the above setup, portal requests do go directly to gateway through the browser using http://192.168.0.15/foo, but when gateway makes requests to auth using http://auth:3001/bar, they do not go directly to auth; instead they go through the proxy, which is what I am trying to avoid.
I can see the auth request is sent through the proxy from the squid error page:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the docker containers to respect the noProxy setting using docker service names like auth? It appears to me that the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, which is why it fails. Thanks
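For anyone debugging the same setup: one way to confirm which proxy variables Docker actually injected into a container is to inspect its environment. A diagnostic sketch, assuming the compose service is named gateway:
# List proxy-related environment variables inside the gateway container
docker compose exec gateway env | grep -i proxy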

Your Docker configuration seems fine, but your host doesn't know how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth to your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts on Windows).
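For illustration, assuming the auth container's port is published on the local machine, an entry like this would let the host resolve the name (the loopback address is an assumption; use whatever address the container is actually reachable on):
# /etc/hosts
127.0.0.1   auth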
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.

Related

Get access to telegram api from docker compose through proxy

We have a server where we run different services in docker containers. The server is on the corporate network and does not have Internet access; we use a proxy to reach the Internet. We build the images locally, where we do have Internet access through the proxy, push them to our repository, and then docker pull them onto the server.
The question is: how can I launch two new services using docker compose so that they have Internet access through the proxy?
Simply connecting to the proxy from the application code did not help; it failed to connect.
Passing the connection string to the proxy as an environment variable on startup didn't help either.
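For reference, the standard pattern is to set the proxy variables per service in the compose file. A minimal sketch, with placeholder service name, image, and proxy URL; note that some runtimes only honor one of the lowercase/uppercase spellings, so setting both is a common fix:
version: '3'
services:
  my-service:
    image: my-image:latest                        # placeholder image
    environment:
      - HTTP_PROXY=http://proxy.example.com:3128  # placeholder proxy URL
      - HTTPS_PROXY=http://proxy.example.com:3128
      - http_proxy=http://proxy.example.com:3128
      - https_proxy=http://proxy.example.com:3128
      - NO_PROXY=localhost,127.0.0.1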

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to route one container's traffic through another using plain docker, specifically the --net=container:something option (example trimmed):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
All containers in a pod share the same network namespace, so if you run the VPN client in one container, all containers in that pod will have network access via the VPN.
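A minimal two-container pod sketch illustrating this; the pod and image names are placeholders, and the VPN container will typically need the NET_ADMIN capability to create its tunnel device:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn              # placeholder name
spec:
  containers:
  - name: vpn
    image: my-kerio-vpn:latest    # placeholder VPN client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]        # usually required to manage tun devices
  - name: app
    image: my-dotnet-app:latest   # placeholder .NET Core app image
Because both containers share the pod's network namespace, traffic from app follows whatever routes the vpn container establishes.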
Based on your comment, I can suggest two methods.
Private GKE cluster with Cloud NAT
In this setup, you would use a private GKE cluster with Cloud NAT for external communication, and you would need to use a manually assigned external IP.
This scenario uses a specific external IP for the VPN connection, but your customer would need to whitelist access for that IP.
Site to site VPN using CloudVPN
You can configure your VPN to forward packets to your cluster. For details, check these other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx, and I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the django container are in the same pod. My limited understanding is that it should be enough to run the VPN in the background in the nginx container, and that nginx should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

how to deal with changing ips of docker compose containers?

I've set up an app where a nodejs backend has to communicate with a rasa chatbot backend through a react frontend. All services run through the same docker-compose. Being a docker beginner, there are some things I'm not sure about:
communication between host and container is done using the container's ip
browser opening the local react server running on localhost:3000 or 172.22.0.1:3000
browser sending a request to the express backend on localhost:4000 or 172.22.0.2:4000
however, communication between two docker containers is done using the container's name:
rasa server communicating with the rasa custom action server through http://action_server:5055/webhooks
rasa custom action server communicating with the express backend through http://backend_name:4000/users/
My problem is that when I need to contact the rasa backend from my react frontend, I need to use the rasa docker container's IP, which (sometimes) changes when docker-compose is re-initialized. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the react frontend.
Is there a way to avoid the changing IP altogether and use the container name (or an alias/link) instead? Alternatively, what would be a way to automate updating the container's IP in the react frontend (are environment variables updated via a script an option?)
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described further in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
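For example, with a published port like the following (values are illustrative), other containers on the compose network would call http://rasa:5005, while the browser would call http://<host>:5005:
services:
  rasa:
    image: rasa/rasa      # illustrative image
    ports:
      - "5005:5005"       # host port : container port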
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
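A sketch of such a reverse-proxy configuration; the upstream service names and ports are assumptions based on the question:
# default.conf for the nginx proxy container (sketch)
server {
    listen 8080;

    location /api/ {
        proxy_pass http://backend_name:4000/;   # express backend, name from the question
    }
    location /rasa/ {
        proxy_pass http://rasa:5005/;           # rasa server; name and port are assumptions
    }
    location / {
        proxy_pass http://frontend:3000/;       # react app; service name assumed
    }
}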

Using a proxy like squid on linux host with docker containers and bridged network

TLDR
Does anyone have a clue how I have to configure squid, or docker, or ..., so that my docker containers can access internet through my (squid) proxy AND containers on the same network can access each other by their hostnames?
Long question
Following scenario:
There is a corporate proxy
On my linux host, I installed squid, which is configured to forward requests to the parent (corporate) proxy (as explained here: https://wiki.squid-cache.org/Features/CacheHierarchy#How_do_I_configure_Squid_forward_all_requests_to_another_proxy.3F)
I want to use docker-compose to start 2 services, which both should be able to access internet through the (squid) proxy and access each others http endpoints via hostname
My docker-compose.yml:
version: '3'
services:
  my-backend-service:
    image: "backend-service:latest"
    networks:
      - back-tier
  my-frontend-service:
    image: "frontend-service:latest"
    environment:
      - backend.hostname=my-backend-service
    networks:
      - back-tier
networks:
  back-tier:
When the services do not need to access the internet, e.g. for internal APIs, this setup would be fine, as the frontend service can reach the backend service by hostname.
But the backend service needs to access public APIs on the internet, and therefore it has to use the proxy.
To fix this, I created the following file on my linux host, ~/.docker/config.json:
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://MY_HOSTNAME:3128",
      "httpsProxy": "http://MY_HOSTNAME:3128"
    }
  }
}
Side note: I have to use the hostname of my host machine (MY_HOSTNAME), as localhost or 127.0.0.1 do not work: inside the container, localhost refers to the container itself, so it will not find anything running on port 3128 there.
OK, now my backend service can access APIs on the internet. But my frontend-service can no longer reach the backend service by its hostname 'my-backend-service'!
When I run curl http://my-backend-service:8080 on my-frontend-service, I get an answer from squid saying it is unable to determine the IP address from the host name...
When the services do not need to access the internet, e.g. for internal APIs, this setup would be fine, as the frontend service can reach the backend service by hostname.
But the backend service needs to access public APIs on the internet, and therefore it has to use the proxy.
This should be fairly easy to accomplish. You say your backend service needs to access the internet, so set the proxy configuration inside that container only. The frontend only accesses the backend, so make sure it has no proxy configuration.
Your problem is that you configured the variables on the host system, and they appear to be active inside both containers. If you want to keep one global setup, you can also leave the proxy settings on the host but configure an exception for local services, using the http_proxy, https_proxy and no_proxy variables as described in https://www.xmodulo.com/how-to-configure-http-proxy-exceptions.html.
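A sketch of that per-service setup in the compose file; the proxy address is taken from the question, and this assumes the global proxies entry in ~/.docker/config.json is removed so each service only gets what it declares:
services:
  my-backend-service:
    image: "backend-service:latest"
    environment:
      - http_proxy=http://MY_HOSTNAME:3128
      - https_proxy=http://MY_HOSTNAME:3128
      - no_proxy=localhost,127.0.0.1   # exception list is an assumption; extend as needed
  my-frontend-service:
    image: "frontend-service:latest"
    # no proxy variables: it only talks to the backend by hostname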

JHipster - unable to use a gateway app when deploying everything on the docker host except the gateway itself (mixed Docker and local deployment)

I have several JHipster Spring microservice and gateway projects. I deployed all of them on one host using docker, except the gateway, which I started on another host.
I use Keycloak for OAuth authentication.
Everything works fine when I deploy all of the microservices, databases, and gateways as docker containers on a docker network using docker-compose.
But it doesn't work when I deploy everything on docker except the gateway, i.e. when the gateway resides outside the docker-created network. The motivation is that I want my UI programmer to be able to run the gateway on his own PC and use the microservices deployed on the server host. For ease of UI development, I need to run this lone gateway using gradle bootRun -Pprod.
I used Docker macvlan networking to assign a separate IP to each container on my docker network, so that every container on the host has its own IP address on the physical network and each container is visible to other hosts on the network.
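For context, a macvlan network is created roughly like this; the subnet, gateway, and parent interface are placeholders for the corporate network values:
docker network create -d macvlan \
  --subnet=192.168.10.0/24 \
  --gateway=192.168.10.1 \
  -o parent=eth0 \
  corp_macvlan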
The problem is that in the normal docker deployment (when the gateway is deployed on a docker network on the same host) everything works fine, but in my scenario, after a successful login, every microservice returns a 401 error.
The microservice logs this error:
o.s.s.oauth2.client.OAuth2RestTemplate : Setting request Accept header to [application/json, application/x-jackson-smile, application/cbor, application/*+json]
o.s.s.oauth2.client.OAuth2RestTemplate : GET request for "http://keycloak:9080/auth/realms/jhipster/protocol/openid-connect/userinfo" resulted in 401 (Unauthorized); invoking error handler
n.m.b.s.o.CachedUserInfoTokenServices : Could not fetch user details: class org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException, Unable to obtain a new access token for resource 'null'. The provider manager is not configured to support it.
p.a.OAuth2AuthenticationProcessingFilter : Authentication request failed: error="invalid_token", error_description="token string here"
It says the token is invalid, yet the same mechanism works when everything is deployed on the same host in docker. Is it Keycloak that prevents the token from being validated for external hosts? I personally doubt that, because it didn't prevent me from logging into the gateway successfully, and I just checked Keycloak: it's up, started with -b 0.0.0.0.
Please help me get a gateway up and running with just gradle bootRun -Pprod.
In summary, I could rephrase my question as: I want the UI developer to be able to test his angular/spring-gateway project on his own PC, while the other services are deployed on a powerful server using docker (authentication via Keycloak), and it is not possible to deploy those other services on the UI developer's own PC. How can I do this in JHipster?
Add server.use-forward-headers=true to your config when using the gateway.
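In a JHipster gateway this is a Spring Boot property; a sketch for the production config file (the path follows the standard JHipster layout, and on Spring Boot 2.2+ the replacement property is server.forward-headers-strategy):
# src/main/resources/config/application-prod.yml
server:
  use-forward-headers: true
  # on Spring Boot 2.2+, use instead:
  # forward-headers-strategy: framework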
