Accessing Kubernetes Web UI (Dashboard) - docker

I have installed Kubernetes with the kubeadm tool and then followed the documentation to install the Web UI (Dashboard). Kubernetes is installed and running on a single-node instance, which is a tainted master node.
However, I'm not able to access the Web UI at https://<kubernetes-master>/ui; instead I can access it at https://<kubernetes-master>:6443/ui.
How could I fix this?

The URL you are using to access the dashboard is an endpoint on the API server. By default, kubeadm deploys the API server on port 6443, not on 443, which is what you would need in order to access the dashboard over HTTPS without specifying a port in the URL (i.e. https://<kubernetes-master>/ui).
There are various ways you can expose and access the dashboard. These are ordered by increasing complexity:
If this is a dev/test cluster, you could try making kubeadm deploy the API server on port 443 by using the --api-port flag exposed by kubeadm.
Expose the dashboard using a service of type NodePort.
Deploy an ingress controller and define an ingress point for the dashboard.
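For the NodePort option, a minimal Service manifest might look like the following sketch. It assumes the standard dashboard deployment in the kube-system namespace with the k8s-app: kubernetes-dashboard label; adjust names, labels, and ports to match your install:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dashboard-nodeport         # hypothetical name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard  # assumed label from the standard dashboard manifest
  ports:
    - port: 443
      targetPort: 8443             # assumed container port; older dashboards serve plain HTTP on 9090
      nodePort: 30443              # must fall in the cluster's NodePort range (default 30000-32767)
```

After applying this, the dashboard should be reachable at https://<node-ip>:30443.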

Cannot access the ASP.NET Core app deployed on Kubernetes

I am new to Docker and Kubernetes, and trying to deploy my ASP.Net Core 6.0 web application on Kubernetes with Docker image. I can see the service running with type: NodePort as in the last line of the screenshot 1, but I cannot access this port on my browser at all.
I can also see the Docker container created by Kubernetes Pod running on Docker Desktop Windows application as in screenshot 2, but I don't know how to access my deployed application from the browser. Any suggestion or solution would be appreciated.
It seems you need to expose the service so that it will allow external traffic. To expose the service, use: kubectl expose deployment <deployment> --type=LoadBalancer --port=8080. This will create an external IP.
Check the created external IP using the kubectl get services command.
If it is not visible, wait a few minutes for the service to be exposed, then check again; the external IP will appear.
Now access the service at http://<EXTERNAL_IP>:8080 in the browser.
For more information, refer to this lab on how to deploy an ASP.NET Core app on Kubernetes.
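The expose command above can also be written declaratively. A sketch, assuming the deployment's pods carry the label app: myapp and the ASP.NET Core container listens on port 80 (the default for .NET 6 images); both names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: myapp          # must match the pod labels of your deployment
  ports:
    - port: 8080        # port exposed on the external IP
      targetPort: 80    # port the ASP.NET Core container actually listens on
```

Note that on Docker Desktop's built-in Kubernetes, a LoadBalancer service is published on localhost, so http://localhost:8080 should also work there.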

Forward TCP connections through docker container

I have a Spring Boot microservice running inside a Docker container (on Kubernetes) which can access unmanaged services (SQL, Elasticsearch, etc.) that are not accessible from my laptop directly, so I'm forced to run commands via kubectl to access them. Is there a possibility to forward TCP connections through Docker containers to enable direct access to those services, something like SSH port forwarding?
For this you have to create a "service without selector" and define Endpoints for your "external" resources.
Kubernetes doc on such services here
Of course, your service can be of type "NodePort", so with the help of your load balancer in front of OCP, you can access the service from outside your cluster and the service will reach your external resource.
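A sketch of such a selector-less Service with manually defined Endpoints; the name, port, and IP address are hypothetical placeholders for one of the unmanaged services:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-elasticsearch   # hypothetical name
spec:
  ports:
    - port: 9200                 # no selector, so Kubernetes will not create Endpoints automatically
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-elasticsearch   # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.0.50            # hypothetical IP of the unmanaged Elasticsearch host
    ports:
      - port: 9200
```

Pods can then reach the external host through the cluster DNS name external-elasticsearch, and the Service can be switched to type NodePort if it also needs to be reachable from outside the cluster.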
Yep, you can use kubectl port-forward to do exactly this. If you'd like to read the documentation it's here.
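A sketch of the port-forward approach against a live cluster; service/elasticsearch and the ports are placeholders:

```shell
# Forward local port 9200 to port 9200 of a Service (or pod) inside the cluster.
kubectl port-forward service/elasticsearch 9200:9200

# In a second terminal, the service is reachable on localhost
# for as long as the port-forward process keeps running:
curl http://localhost:9200/
```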

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another using pure docker. Specifically using the --net=container:something option (trimmed the example):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, then all containers in the pod will have network access via the VPN.
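A sketch of that layout as a two-container pod; both image names are hypothetical, and the VPN container will typically need extra privileges (for example the NET_ADMIN capability, and access to /dev/net/tun) to create its tunnel device:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
    - name: vpn
      image: my-kerio-vpn-image    # hypothetical image running the VPN client
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]       # commonly required by VPN clients
    - name: app
      image: my-dotnet-app         # hypothetical .NET Core image making the HTTP requests
```

Because both containers share the pod's network namespace, requests made by the app container follow whatever routes the vpn container sets up; no equivalent of Docker's --net=container option is needed.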
Based on your comment I think I can advise you two methods.
Private GKE Cluster with CloudNAT
In this setup, you should use a private GKE cluster with Cloud NAT for external communication. You would need to use a manual external IP.
This scenario uses a specific external IP for the VPN connection, but it requires your customer to whitelist access for this IP.
Site to site VPN using CloudVPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stackoverflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it should be enough to run the VPN in the background in the nginx container, and requests should then be routed successfully to the backend over localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

Service IP & Port discovery with Kubernetes for external App

I'm creating an App that will have to communicate with a Kubernetes service, via REST APIs. The service hosts a docker image that's listening on port 8080 and responds with a JSON body.
I noticed that when I expose a deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
It then creates a service entitled app-service
To then locally test this, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/ which then when I enter on my browser, works fine (I get the JSON responses i'm expecting).
However, it seems the IP & port for that service are always different whenever I start this service up.
Which leads to my question - how is a mobile App supposed to find that new IP:Port then if it's subject to change? Is the common way to work around this to register that combination via a DNS server (such as Google Cloud's DNS system?)
Or am I missing a step here with setting up Kubernetes public services?
Which leads to my question - how is a mobile App supposed to find that new IP:Port then if it's subject to change?
minikube is not meant for production use; it is only meant for development purposes. You should create a real Kubernetes cluster and use a LoadBalancer-type Service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice.
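A minimal Ingress for this case might look like the following sketch. The host name is a placeholder, app-service is the Service created by kubectl expose above (assuming its service port is 8080), and an ingress controller (for example ingress-nginx) must already be installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress              # hypothetical name
spec:
  rules:
    - host: api.example.com      # placeholder DNS name pointing at the ingress controller
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service   # the NodePort service from the question
                port:
                  number: 8080
```

The mobile app would then always call https://api.example.com, and the DNS record stays stable even as the service's cluster-internal IP and NodePort change.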

How to expose dgraph-ratel-public without LoadBalancer in Kubernetes

Whenever I expose a Kubernetes Service as a Load Balancer the external-IP is in forever pending state.
So, I am not able to access the Dgraph Ratel UI through my browser.
I needed to expose my Service through NodePort so that I can access it with IP:node-port.
Here I created a NodePort Service for my dgraph-ratel-public service. I can curl IP:node-port and get the result, but I cannot access it in my web browser.
I'm using Kubernetes on Digital Ocean
Kubernetes version v1.12.
Help me with:
Getting the pending external IP, or
Exposing the container publicly, or
What am I missing?
You can't reach private IP addresses over the Internet, so you need to create a Load Balancer in front of your Kubernetes cluster, or some kind of VPN into your cluster.
Kubernetes' default cloud-controller-manager doesn't support DigitalOcean. You can create a load balancer for the Kubernetes cluster nodes manually, or you need to install the additional cloud-controller-manager for the DigitalOcean cloud, as mentioned in the manual:
Clone the git repo:
$ git clone https://github.com/digitalocean/digitalocean-cloud-controller-manager.git
To run digitalocean-cloud-controller-manager, you need a DigitalOcean personal access token. If you are already logged in, you can create one here. Ensure the token you create has both read and write access.
Once you have a personal access token, create a Kubernetes Secret as a way for the cloud controller manager to access your token. (using script, or manually)
Deploy appropriate version of cloud-controller-manager:
$ kubectl apply -f releases/v0.1.10.yml
deployment "digitalocean-cloud-controller-manager" created
NOTE: the deployments in releases/ are meant to serve as an example. They will work in a majority of cases but may not work out of the box for your cluster.
The Cloud Controller Manager's current version is v0.1.10. This means that the project is still under active development and may not be production-ready. The plugin will be bumped to v1.0.0 once the DigitalOcean Kubernetes product is released.
Here you can find examples:
Load balancers
Node Features with Digitalocean Cloud Controller Manager