How to expose web application on Docker Desktop / Minikube - docker

I deployed a simple .NET Core application consisting of 3 pods using Docker Desktop with Kubernetes.
My goal is to expose the working web application in my host browser:
My React application in the Frontend pod points, in its appsettings.js file, to the Backend ClusterIP Service to fetch data from the backend. When I'm inside the Frontend pod this works normally; the fetch succeeds.
I exposed the Frontend pod using a NodePort Service, so the application is accessible, but my problem is that requests to the Backend pod fail: they point to the Backend ClusterIP Service, which my host machine cannot resolve.
How should this be done in a single-node environment like Docker Desktop or Minikube?
It should also work if I want to use multiple different backend APIs in my frontend app.
Thanks for help!

In your case, both the Frontend and the Backend need to be accessible from outside the cluster, so the Backend service must be of type NodePort or LoadBalancer.
The reason is that the Frontend does not interact with the Backend Service (in the cluster). The Frontend pod is just used by the browser to download your React application. Then the browser runs your React application. And this application - that does not run in your cluster - must consume the Backend Service. That is why the Backend Service must be externally accessible.
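To make this concrete, here is a minimal sketch of a NodePort Service for the backend; the service name, the `app: backend` label, and all ports are placeholder assumptions, not taken from the question:

```yaml
# Hypothetical NodePort Service exposing the backend outside the cluster.
# Names, labels, and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    app: backend          # must match the labels on the backend pods
  ports:
  - port: 80              # port the Service serves inside the cluster
    targetPort: 8080      # port the backend container listens on
    nodePort: 30080       # fixed host port (30000-32767) the browser can reach
```

With something like this, the React app's configuration would point at `http://localhost:30080` (on Docker Desktop) instead of the ClusterIP service name.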

For me, the easiest way was to use Minikube with an Ingress Controller:
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/

Related

Cannot access the ASP.NET Core app deployed on Kubernetes

I am new to Docker and Kubernetes, and I am trying to deploy my ASP.NET Core 6.0 web application on Kubernetes with a Docker image. I can see the service running with type: NodePort as in the last line of screenshot 1, but I cannot access this port in my browser at all.
I can also see the Docker container created by Kubernetes Pod running on Docker Desktop Windows application as in screenshot 2, but I don't know how to access my deployed application from the browser. Any suggestion or solution would be appreciated.
It seems you need to expose the service so that it will allow external traffic. To expose the service, use: kubectl expose deployment <deployment> --type=LoadBalancer --port=8080. This will create an external IP.
Check the created external IP using the kubectl get services command.
If it is not visible, wait a few minutes for the service to be exposed, then check again; the external IP will appear.
Now access the service at http://<EXTERNAL_IP>:8080 in the browser.
For more information, refer to this lab on how to deploy an ASP.NET Core app on Kubernetes.
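Roughly the declarative equivalent of that kubectl expose command is a Service manifest like the following; the name and selector label are assumptions for illustration:

```yaml
# Sketch of a LoadBalancer Service, approximately what
# "kubectl expose deployment myapp --type=LoadBalancer --port=8080" creates.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp            # assumed label on the deployment's pods
  ports:
  - port: 8080            # external port
    targetPort: 8080      # container port
```

Applying a manifest like this with kubectl apply -f has the advantage that the service definition can be version-controlled alongside the deployment.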

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another using pure docker. Specifically using the --net=container:something option (trimmed the example):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run a VPN client in one container, then all containers in this pod will have network access via the VPN.
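As a rough sketch of such a two-container pod, something like the following could work; the image names are placeholders, and the NET_ADMIN capability is an assumption (most VPN clients need it, plus access to /dev/net/tun, to create the tunnel interface):

```yaml
# Hypothetical two-container pod: a VPN sidecar plus the application.
# Image names are placeholders; NET_ADMIN is typically required by VPN
# clients to configure the tunnel, but verify against your client.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
  - name: vpn-sidecar
    image: my-kerio-vpn-client   # placeholder for your Debian VPN image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]       # assumed requirement for tunnel setup
  - name: app
    image: my-dotnet-app         # placeholder for the .NET Core program
```

Because both containers share one network namespace, routes installed by the sidecar apply to the app container's traffic as well, which is the Kubernetes analogue of docker's --net=container:vpncontainer.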
Based on your comment I think I can advise you two methods.
Private GKE Cluster with CloudNAT
In this setup, you should use a private GKE cluster with Cloud NAT for external communication. You would need to use a manual external IP.
This scenario uses a specific external IP for the VPN connection, but it requires your customer to whitelist access for this IP.
Site to site VPN using CloudVPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stackoverflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the django container are in the same pod. My limited understanding is that it would be enough to run VPN in the background in the nginx container and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

Service IP & Port discovery with Kubernetes for external App

I'm creating an App that will have to communicate with a Kubernetes service, via REST APIs. The service hosts a docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I create a deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
It then creates a service entitled app-service
To then locally test this, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/ which then when I enter on my browser, works fine (I get the JSON responses i'm expecting).
However, it seems the IP & port for that service are always different whenever I start this service up.
Which leads to my question - how is a mobile App supposed to find that new IP:Port then if it's subject to change? Is the common way to work around this to register that combination via a DNS server (such as Google Cloud's DNS system?)
Or am I missing a step here with setting up Kubernetes public services?
Which leads to my question - how is a mobile App supposed to find that new IP:Port then if it's subject to change?
minikube is not meant for production use; it is only meant for development. You should create a real Kubernetes cluster and use a LoadBalancer-type service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice.
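An Ingress for that REST API might look roughly like the following sketch; the host, service name, and port are placeholder assumptions, and an ingress controller (e.g. ingress-nginx) must already be installed in the cluster:

```yaml
# Hedged sketch of an Ingress routing external HTTP traffic to the
# backend Service. Host, service name, and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com       # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service   # the NodePort service from the question
            port:
              number: 8080
```

Clients then use a stable DNS name instead of the changing NodePort IP:port combination.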

Kubernetes - Deploying Multiple Images into a single Pod

I'm having an issue because the application was originally configured to run with docker-compose.
I managed to port and rewrite the .yaml deployment files to Kubernetes, however, the issue lies within the communication of the pods.
The frontend communicates with the backend to access the services, and I assume that, since they should be on the same network, the frontend calls the services via localhost.
I don't have access to the code, as it is a proprietary application developed by a company and it does not support Kubernetes, so modifying the code is out of the question.
I believe the main reason is that the frontend and backend are running in different pods, with different IPs.
When the frontend tries to call the APIs, it does not find the service, and returns an error.
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Unfortunately, I do not know how to make a yaml file to create both containers within a single pod.
Is it possible to have both frontend and backend containers running on the same pod, or would there be another way to make the containers communicate (maybe a proxy)?
Yes, you just add entries to the containers section of your YAML file. Example:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
  - name: nginx-container
    image: nginx
  - name: debian-container
    image: debian
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Although you have the accepted answer already in place that is tackling example of running more containers in the same pod I'd like to point out few details:
Containers should be in the same pod only if they scale together (not because you want them to communicate over a ClusterIP). Your frontend/backend division doesn't really look like a good candidate for cramming them together.
If you opt for containers in the same pod, they can communicate over localhost: they see each other as two processes running on the same host (except that their file systems are different), so they can use localhost for direct communication and, because of that, cannot both allocate the same port. Using a ClusterIP is like two processes on the same host communicating over an external IP.
The more Kubernetes-philosophy approach here would be to:
Create deployment for backend
Create service for backend (exposing necessary ports)
Create deployment for frontend
Communicate from frontend to backend using backend service name (kube-dns resolves this to cluster ip of backend service) and designated backend ports.
Optionally (for this example), create a service for the frontend for external access or whatever goes outside. Note that here you can allocate the same port as for the backend service, since they do not live in the same pod (host)...
Some of the benefits of this approach: you can isolate the backend better (backend-frontend communication stays within the cluster, not exposed to the outside world), you can schedule the two independently on nodes, you can scale them independently (say you need more backend power but the frontend is handling traffic fine, or vice versa), you can replace either of them independently, etc.
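The steps above might be sketched as the following manifests; all names, labels, images, and ports are placeholder assumptions:

```yaml
# Steps 1-2: a backend Deployment plus a ClusterIP Service in front of it.
# The frontend Deployment (step 3) would follow the same pattern.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: my-backend:latest   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend        # frontend reaches it as http://backend:8080 (step 4)
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
```

kube-dns resolves the service name `backend` to the Service's cluster IP, so the frontend never needs to know individual pod IPs.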

Accessing Kubernetes Web UI (Dashboard)

I have installed Kubernetes with the kubeadm tool, and then followed the documentation to install the Web UI (Dashboard). Kubernetes is installed and running on a single node, which is a tainted master node.
However, I'm not able to access the Web UI at https://<kubernetes-master>/ui. Instead I can access it on https://<kubernetes-master>:6443/ui.
How could I fix this?
The URL you are using to access the dashboard is an endpoint on the API Server. By default, kubeadm deploys the API server on port 6443, and not on 443, which is what you would need to access the dashboard through https without specifying a port in the URL (i.e. https://<kubernetes-master>/ui)
There are various ways you can expose and access the dashboard. These are ordered by increasing complexity:
If this is a dev/test cluster, you could try making kubeadm deploy the API server on port 443 by using the --api-port flag exposed by kubeadm.
Expose the dashboard using a service of type NodePort.
Deploy an ingress controller and define an ingress point for the dashboard.
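For the NodePort option, a sketch of the patched dashboard Service could look like this; the namespace, service name, and labels vary between dashboard versions, so treat all of them as assumptions to verify against your installed manifests:

```yaml
# Hedged sketch: exposing the dashboard via NodePort.
# Namespace, name, labels, and ports depend on your dashboard version.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system           # may be kubernetes-dashboard in newer setups
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard  # assumed label on the dashboard pods
  ports:
  - port: 443
    targetPort: 8443               # assumed dashboard container port
    nodePort: 30443                # fixed port to open in the browser
```

The dashboard would then be reachable at https://<node-ip>:30443 without going through the API server proxy at all.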
