I might not fully understand this, but I shall try to explain. My aim is to disable the URLs that are generated for services and container endpoints in Docker Cloud.
It is my understanding that unless you publish a port, only other local containers on the host can access it. However, Docker Cloud creates a service endpoint URL for every port exposed by the Docker containers as defined in their Dockerfile. I do not like the idea that some of my containers, which I expect to be accessible only locally, are exposed publicly via Docker Cloud's URL.
I understand that the service URLs are generated automatically, but they are still publicly accessible.
Can I disable the automatic generation of service endpoints, or have them locked down behind authentication so that only I can log in? (I would prefer the former.)
Any clarity on this subject would be appreciated.
Port publishing was recently enhanced, and it is now up to you whether the ports are published to the internet.
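For illustration, here is a rough stackfile sketch of that distinction, assuming Docker Cloud follows the Compose-style convention where expose keeps a port internal and ports publishes it (the service and image names are made up):

web:
  image: myorg/web          # made-up image
  ports:
    - "80:80"               # published: reachable from the internet via the service endpoint
worker:
  image: myorg/worker       # made-up image
  expose:
    - "8080"                # visible to other services only, no public endpoint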
I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian-based Docker image which successfully connects to a VPN (specifically, Kerio Control VPN) when deployed. Whenever I make a network request from this container, it goes through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to route one container's traffic through another using plain Docker, specifically the --net=container:something option (example trimmed):
docker run \
  --name=jackett \
  --net=container:vpncontainer \
  linuxserver/jackett
However, I have to use Kubernetes for this deployment, so I think it would be good to use a two-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
All containers in a pod share the same network namespace. If you run the VPN client in one container, every other container in the pod will access the network through the VPN.
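A minimal pod sketch of that idea; the names and images (vpn-sidecar, app, example/...) are placeholders, and the VPN client usually needs NET_ADMIN (and access to /dev/net/tun) to build its tunnel:

apiVersion: v1
kind: Pod
metadata:
  name: vpn-app                  # placeholder name
spec:
  containers:
  - name: vpn-sidecar            # runs the Kerio VPN client (placeholder image)
    image: example/kerio-vpn-client
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]       # usually needed to create the tun interface
  - name: app                    # the .NET Core program (placeholder image)
    image: example/dotnet-app
    # both containers share the pod's network namespace, so the app's
    # outbound requests go through the tunnel the sidecar establishes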
Based on your comment, I can suggest two approaches.
Private GKE Cluster with CloudNAT
In this setup you should use a private GKE cluster with CloudNAT for external communication. You would need to use a manually assigned external IP.
This scenario uses a specific external IP for the VPN connection, but your customer will need to whitelist access for that IP.
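A rough gcloud sketch of that setup (the names, region, and CIDR are placeholders, and flags can vary between gcloud versions):

# reserve a static external IP that the customer can whitelist
gcloud compute addresses create vpn-egress-ip --region=europe-west1

# create a private cluster (nodes get no public IPs)
gcloud container clusters create private-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.0.0/28 \
  --region=europe-west1

# route all egress through Cloud NAT using the reserved address
gcloud compute routers create nat-router --network=default --region=europe-west1
gcloud compute routers nats create nat-config \
  --router=nat-router \
  --region=europe-west1 \
  --nat-external-ip-pool=vpn-egress-ip \
  --nat-all-subnet-ip-ranges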
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details you should check these other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files I need nginx to serve. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the django container are in the same pod. My limited understanding is that it would be enough to run VPN in the background in the nginx container and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?
I've set up an app where a Node.js backend has to communicate with a rasa chatbot backend through a React frontend. All services run through the same docker-compose file. Being a Docker beginner, there are some things I'm not sure about:
Communication between the host and a container is done using the container's IP:
the browser opens the local React server running on localhost:3000 or 172.22.0.1:3000
the browser sends a request to the Express backend on localhost:4000 or 172.22.0.2:4000
However, communication between two Docker containers is done using the container's name:
the rasa server communicates with the rasa custom action server through http://action_server:5055/webhooks
the rasa custom action server communicates with the Express backend through http://backend_name:4000/users/
My problem is that when I need to contact the rasa backend from my React frontend, I have to use the rasa container's IP, which (sometimes) changes when docker-compose is brought up again. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link) instead? Or what would be a way to automate updating the container's IP in the React frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described more in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
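For example, a docker-compose sketch along those lines; the service names and ports here are guesses based on your description:

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"        # browser: http://localhost:3000
  backend:
    build: ./backend
    ports:
      - "4000:4000"        # browser: http://localhost:4000; other containers: http://backend:4000
  rasa:
    build: ./rasa
    ports:
      - "5005:5005"        # browser: http://localhost:5005; other containers: http://rasa:5005
  action_server:
    build: ./actions       # only other containers call it (http://action_server:5055), so no ports: needed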
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
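A sketch of that kind of path-based nginx configuration, reusing the guessed service names from the compose sketch above:

server {
    listen 8080;

    # everything else goes to the React dev server / static bundle
    location / {
        proxy_pass http://frontend:3000;
    }

    # path-based routes to the backend services
    location /api/ {
        proxy_pass http://backend:4000/;   # trailing slash strips the /api/ prefix
    }
    location /rasa/ {
        proxy_pass http://rasa:5005/;
    }
}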
We're using Docker for our project. We have a monitoring service (for our native application) which is running on Docker.
Currently there is no user management for this monitoring service. Is there any way we can add user management from Dockerfile?
Please note that I'm not looking for Docker container user management.
In simple words functionality that I'm looking for is:
Add a user and password in the Dockerfile.
When accessing the external IP, the same user and password must be provided to view the running monitoring service.
From the little information about your setup, I would say the authentication should be handled by your monitoring service itself. If it is some kind of web app, you could use simple HTTP basic auth as a first step.
Docker doesn't know what's running in your container, and its networking setup is limited to a simple pass-through between containers or from host ports to containers. It's typical to run programs with many different network protocols in Docker (Web servers, MySQL, PostgreSQL, and Redis all have different wire protocols) and there's no way Docker on its own could inject an authorization step.
Many non-trivial Docker setups include some sort of HTTP reverse proxy container (frequently Nginx, occasionally Apache) that can both serve a Javascript UI's built static files and also route HTTP requests to a backend service. You can add an authentication control there. The mechanics of doing this would be specific to your choice of proxy server.
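For example, with nginx as that proxy, a minimal basic-auth sketch could look like this; monitoring:8080 is a placeholder for wherever your monitoring service actually listens:

# create the credentials file once, e.g. with apache2-utils:
#   htpasswd -c ./htpasswd admin

server {
    listen 80;

    location / {
        auth_basic           "Monitoring";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://monitoring:8080;
    }
}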
Consider that any information you include in the Dockerfile or pass via docker build --build-arg can be easily retrieved by anyone who has your image (by looking at its docker history, or by using docker run to open a debug shell in it). You may need to inject the credentials at run time using a bind mount if you think they're sensitive and you're not storing your image somewhere with strong protections (for example, if anyone can docker pull it from Docker Hub).
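For instance, rather than baking the htpasswd file into the image, you could mount it (and the proxy config) when starting the container; the file and container names here are illustrative:

docker run -d \
  --name auth-proxy \
  -p 80:80 \
  -v "$(pwd)/htpasswd:/etc/nginx/.htpasswd:ro" \
  -v "$(pwd)/monitoring.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx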
I have a web application running in a Docker container on a production server. Now I need to make API requests to this application, so I have two possibilities:
1) Link a domain
2) Make requests directly by IP
I'm using a cloud server for that. In my previous experience I linked the domain to a folder. But now I don't know how to link the domain to a running container on ip_addr:port.
I found this link
https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/
but it's for Docker Enterprise, and using that is not possible at the moment.
To expose a Docker application to the public without using Compose or other orchestration tools like Kubernetes, you can use the docker run -p hostPort:containerPort option to publish your container port. Make sure your application is listening on 0.0.0.0:[container port] inside your container. To access the service externally, you would use the host's IP and the port that the container port has been mapped to.
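For example (the image name and ports are placeholders):

# publish container port 8080 on host port 80
docker run -d --name webapp -p 80:8080 myorg/webapp

# then from anywhere that can reach the host:
curl http://<host-ip>/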
See more here
If you want to link to a domain, you can update your DNS records to point your domain to your host IP address.
Hope this helps!
The best way is to use Kubernetes because it will ease many operations, but docker-compose can also be used.
If you want to simply deploy using Docker, it can be done by mapping a host port to the container port.
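A minimal docker-compose sketch of that hostPort:containerPort mapping (the names are placeholders):

services:
  webapp:
    image: myorg/webapp    # placeholder image
    ports:
      - "80:8080"          # hostPort:containerPort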
I have been looking everywhere for this answer. To me it seems like an obvious question, however, the answer has eluded me.
My current setup is: I have Redis, MongoDB, and two API servers on the same bridge network. The first server acts as a gateway API that does all the auth and exposes certain API calls. The backend API is the one that handles all the DB interactions and data munging. If I hit the backend (inner) API alone, I am able to see the contents (this API would not be exposed in a real production environment). However, if I make the same request from within the gateway API, I am not able to hit the backend (inner) API that is also part of the bridged network I created.
Below is a diagram of the container interactions.
I still use legacy linking, but I'm a little bit familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" inside the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers will be able to resolve that address/port, because inside a container "localhost" refers to the container itself.
Here's a way to test what I'm thinking. In your host's shell, run the bridge inspect command shown here. Copy the IP address from Containers.<inner-api-hash>.IPV4. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and then use curl or wget to see if you can hit that IP address you copied.
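Something like this; the network name, container ID, and port come from your description and may differ on your setup:

# on the host: list containers and their IPs on the default bridge network
docker network inspect bridge \
  --format '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}'

# open a shell in the gateway container and try the inner API's IP directly
docker exec -it <gateway-id> /bin/bash
curl http://<inner-api-ip>:<inner-api-port>/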
If my thinking is correct, you will see that you must use your inner API container's Docker-assigned IP address from the other containers. Among other options, you can start containers with a static IP address as shown here.
This is starting to escape the scope of my knowledge, but you can also configure the container's DNS; see Configure container DNS.