Service IP & Port discovery with Kubernetes for external App - docker

I'm creating an app that will have to communicate with a Kubernetes service via REST APIs. The service hosts a Docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I expose my deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
it creates a service named app-service.
To test this locally, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a URL such as http://172.17.118.68:31970/, which works fine when I enter it in my browser (I get the JSON responses I'm expecting).
However, the IP & port for that service seem to be different every time I start the service up.
Which leads to my question: how is a mobile app supposed to find that new IP:port if it's subject to change? Is the common way to work around this to register that combination with a DNS server (such as Google Cloud's DNS system)?
Or am I missing a step here with setting up Kubernetes public services?

Which leads to my question: how is a mobile app supposed to find that new IP:port if it's subject to change?
minikube is not meant for production use; it is meant only for development. You should create a real Kubernetes cluster and use a LoadBalancer-type Service or an Ingress (for L7 traffic) to expose your service to the outside world. Since you need to expose a backend REST API, an Ingress is a good choice.
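As a minimal sketch (assuming an ingress controller such as ingress-nginx is installed, and using a hypothetical host name api.example.com), an Ingress for the service above might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com      # hypothetical DNS name pointing at the controller
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service  # the service created by kubectl expose
            port:
              number: 8080     # assumes the service port matches the target port

Your mobile app then only ever talks to the stable DNS name, while Kubernetes is free to move pods and reassign internal IPs behind it.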

Related

How to expose a web application on Docker Desktop / Minikube

I deployed a simple .NET Core application consisting of 3 pods using Docker Desktop with Kubernetes.
My goal is to expose the working web application in my host browser:
My React application in the Frontend pod points, in its appsettings.js file, to the Backend ClusterIP Service to fetch data from the backend. When I'm inside the Frontend pod this works normally; fetch works.
I exposed the Frontend pod using a NodePort Service; the application is accessible, but my problem is with failing requests to the Backend pod - the requests point to the Backend ClusterIP Service, and my host machine cannot resolve it.
How should this be done in a single-node environment like Docker Desktop or Minikube?
It should also work if I want to use multiple different backend APIs in my frontend app.
Thanks for your help!
In your case, both Frontend and Backend need to be accessible from outside the cluster. So the Backend service must be of type NodePort or LoadBalancer.
The reason is that the Frontend does not interact with the Backend Service (in the cluster). The Frontend pod is just used by the browser to download your React application. Then the browser runs your React application. And this application - that does not run in your cluster - must consume the Backend Service. That is why the Backend Service must be externally accessible.
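As a rough sketch (the names, labels, and port numbers below are assumptions, not taken from the question), the Backend Service could look like this:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: NodePort
  selector:
    app: backend      # assumed label on the Backend pods
  ports:
  - port: 80          # port used by callers inside the cluster
    targetPort: 8080  # assumed container port
    nodePort: 30080   # port opened on each node; must be in 30000-32767

The React code running in the browser would then call the node's address on port 30080 rather than the ClusterIP name.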
For me, the easiest way was to use minikube with an Ingress controller:
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
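If you follow that guide, the controller ships as a minikube addon and is enabled with a single command:

minikube addons enable ingress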

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian Docker image that successfully connects to a VPN (specifically, Kerio Control VPN) when deployed. Whenever I make a network request from this container, it goes through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another using pure Docker, specifically the --net=container:something option (example trimmed):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources: they run in the same network namespace. If you run the VPN client in one container, then all containers in that pod will have network access via the VPN.
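A minimal sketch of such a pod (the image names are hypothetical, and the capability setting is an assumption about what the VPN client needs):

apiVersion: v1
kind: Pod
metadata:
  name: vpn-and-app
spec:
  containers:
  - name: vpn-client
    image: example/kerio-vpn-client  # hypothetical VPN client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]           # VPN clients typically need this to create tunnel devices
  - name: app
    image: example/dotnet-app        # hypothetical image for the .NET Core program
  # No --net=container:... equivalent is needed here: both containers already
  # share the pod's network namespace, so the app's outbound traffic can use
  # the tunnel that vpn-client establishes.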
Based on your comment, I can suggest two methods.
Private GKE cluster with Cloud NAT
In this setup, you would use a private GKE cluster with Cloud NAT for external communication, and you would need to use a manual externalIP.
This scenario uses a specific externalIP for the VPN connection, but it requires your customer to whitelist access for this IP.
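Sketching the first option with hypothetical names (the commands are standard gcloud, but the resource names and region are placeholders):

# Reserve a static external IP that the customer can whitelist:
gcloud compute addresses create vpn-egress-ip --region=us-central1
# Have Cloud NAT use exactly that IP for the cluster's outbound traffic:
gcloud compute routers nats create nat-config \
  --router=my-router --region=us-central1 \
  --nat-external-ip-pool=vpn-egress-ip --nat-all-subnet-ip-ranges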
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it would be enough to run the VPN in the background in the nginx container, and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser, and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

How to deal with changing IPs of docker-compose containers?

I've set up an app where a Node.js backend has to communicate with a Rasa chatbot backend through a React frontend. All services run through the same docker-compose. Being a Docker beginner, there are some things I'm not sure about:
Communication between the host and a container is done using the container's IP:
the browser opens the local React server on localhost:3000 or 172.22.0.1:3000
the browser sends a request to the Express backend on localhost:4000 or 172.22.0.2:4000
However, communication between two Docker containers is done using the container's name:
the Rasa server communicates with the Rasa custom action server through http://action_server:5055/webhooks
the Rasa custom action server communicates with the Express backend through http://backend_name:4000/users/
My problem is that when I need to contact the Rasa backend from my React frontend, I have to use the Rasa container's IP, which (sometimes) changes when docker-compose is reinitialized. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually paste it into the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link)? Or what would be a way to automate updating the container's IP in the React frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described more in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
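As an illustration (the service names, ports, and build context here are assumptions), a compose file following both patterns might look like this:

version: "3.8"
services:
  backend:             # other containers reach it as http://backend:4000
    build: ./backend   # assumed build context
    ports:
      - "4000:4000"    # the browser reaches it as http://localhost:4000
  rasa:                # other containers reach it as http://rasa:5005
    image: rasa/rasa   # 5005 is Rasa's default server port
    ports:
      - "5005:5005"    # the browser reaches it as http://localhost:5005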
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.

Rancher 2.0 networking in project namespace

Can I ping one workload from another workload by workload name?
I'm accustomed to Rancher 1.0, where if I created a stack with multiple containers, I could ping one container from another by name.
For example: I have an api and a database, and I need the api to communicate with the database. When I click Execute Shell on the api and run "ping database", it does not work.
I put the database connection string in an api environment variable.
And yes, I can create the database, take its IP, and write it to an ENV variable, but that IP changes after each restart.
Is it possible to use some name that isn't generated?
Thanks
EDIT:
Service discovery:
Shell:
As you can see, resolving the database name works; only pinging the database container does not.
To communicate between services you can use the cluster IP or the service name.
Using the service name is easier.
Service discovery adds a DNS entry for each of your services. So if you have api, app, and database, you will have a DNS entry for each of those services.
So within your services, you can refer directly to that DNS name.
Example: To connect via JDBC to a schema named test in your database, you would do something like this:
jdbc:mysql://database/test
see:
https://rancher.com/docs/rancher/v2.x/en/k8s-in-rancher/service-discovery/
If you want to know the clusterIP of your services, you can run this command: kubectl get services --all-namespaces
Edit 1: Adding ClusterIP as a way to communicate with a service.
Kubernetes Service IPs are implemented using iptables rules on the Linux hosts that are part of the cluster. If you examine those rules closely, ONLY the port specified in the Service is forwarded; ICMP is not, which means you cannot ping Service IP addresses by default. But you can still communicate with the Service on the designated port.
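You can see this from any pod in the cluster (the service name and port here are placeholders):

ping app-service               # times out: ICMP is not covered by the Service rules
curl http://app-service:8080/  # works: the designated Service port is forwarded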

Cannot curl Kubernetes service IP from the host

I am having the same problem as mentioned here: Cannot access kubernetes service via outside network. I have tried the solution mentioned using Ingress, but without any success.
My pods are up and running, along with my service.
I can curl any of the endpoints successfully from within a pod, but not able to curl from the host.
When I am using Ingress, the address field is blank, and when I try to curl the hostname, it shows Could not resolve host.
I am using Kubernetes on Docker Edge, on a MacBook Pro.
How do I curl the service endpoint from the host?
First of all, please note that Kubernetes on macOS runs a separate virtual machine in which both the Docker containers and Kubernetes itself run. It is important to understand that you can therefore have problems connecting from macOS to some Kubernetes resources: TCP connections are not realized the same way they are in a cloud environment. It depends on how the internetworking between macOS and the VM running the Kubernetes stack is configured (NAT, bridge, or host-only connection).
I suppose that you chose a NodePort Service. In this kind of configuration, you need to know both the IP address of a node and the port on which Kubernetes listens for incoming connections. An Ingress, in this case, inspects the HTTP Host header to determine the route for the traffic; in that respect it's similar to a Service of type NodePort: you need to call the proper Ingress service, and it is not obvious which port that service is listening on, as it need not be a well-known port. In fact, it is a bit tricky: it may not be easy to connect from macOS to a type: NodePort service without knowing where Kubernetes created the listening socket, and without being sure that macOS actually has a route to that TCP port on the VM.
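In practice, the two simplest ways to reach such a service from the macOS host are to ask minikube for the URL, or to port-forward through the API server (my-service and the ports are placeholders):

minikube service my-service --url            # prints a URL reachable from the host
kubectl port-forward svc/my-service 8080:80  # forwards localhost:8080 to the Service's port 80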
