Kubernetes hello-minikube Tutorial - Can't Connect to Pod - docker

Apologies if this is a really simple question - I am following the hello-minikube tutorial on the Kubernetes site linked below (running on macOS).
Minikube tutorial
I created a deployment on port 8380, as 8080 is already in use:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.100.248.81 <none> 8380/TCP 11s
I also exposed the deployment, but when I try to curl or open the app URL I get connection refused.
Failed to connect to localhost port 8380: Connection refused
Also, if I specify --type=LoadBalancer during the expose step, the connection fails in the same way.
Any help would be much appreciated.

I've recreated all the steps from the tutorial you have mentioned.
Your error occurs only when you do not change the port from 8080 to 8380 in one of the steps in the documentation. After you change it in all three places, it works fine.
What I suggest is checking whether you changed the port in the server.js file, as it is used by the Dockerfile during the build phase:
var www = http.createServer(handleRequest);
www.listen(8080); // -> change to 8380
Then change EXPOSE 8080 to EXPOSE 8380 in the Dockerfile.
And the last place is while running the deployment:
kubectl run hello-node --image=hello-node:v1 --port=8380 --image-pull-policy=Never
I've tested this with --type=LoadBalancer.
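As a minimal sketch (assuming the image was rebuilt after the 8380 change above), exposing the deployment and asking minikube for the service URL avoids guessing which localhost port to use:
kubectl expose deployment hello-node --type=LoadBalancer --port=8380
minikube service hello-node --url
# curl the URL printed by the command above instead of localhost:8380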

Related

Cannot access NodePort service outside Kubernetes cluster

I am on Windows and used Docker Desktop to deploy a local Kubernetes cluster using WSL 2. I tried to deploy a pod and expose it through a NodePort service so I could access it outside the cluster, but it is not working.
Here are the commands to reproduce the scenario:
kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment echoserver --type=NodePort --port=8080
Trying to open NODE_IP:EXPOSED_PORT in the browser or running the netcat command nc NODE_IP EXPOSED_PORT and trying to send a message (from either WSL or Windows) does not work.
NODE_IP is the internal IP of the Docker Desktop K8S node (obtained by seeing the INTERNAL-IP column on the command kubectl get nodes -o wide)
EXPOSED_PORT is the node port exposed by the service (obtained by seeing the field NodePort on command kubectl describe service echoserver)
Opening the URL in the browser should be met with this page. However, you will get a generic error response saying the browser couldn't reach the URL.
Sending a message with the netcat command should be met with a 400 Bad Request response, as it will not be a properly formatted HTTP request. However, you will not get any response at all, or the TCP connection may not even be made in the first place.
Trying to communicate with the service and/or the pod from inside the cluster, for example through another pod, works perfectly.
Using the command kubectl port-forward deployment/echoserver 2311:8080 to port forward the deployment locally and then accessing localhost:2311, either in the browser or through netcat, also works perfectly (in both WSL and Windows).
If you want to access it without using localhost, you should use <windows_host's_IP>:<NodePort>.
So having your deployment and service deployed:
$ kubectl get svc,deploy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/echoserver NodePort 10.105.169.2 <none> 8080:31570/TCP 4m12s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m3s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/echoserver 1/1 1 1 4m19s
You can access it either by using localhost:31570 or <windows_host's_IP>:31570.
In my case, 192.168.0.29 is my Windows host's IP:
curl.exe 192.168.0.29:31570
CLIENT VALUES:
client_address=192.168.65.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.0.29:8080/
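If you need to look the values up, something like this prints the assigned NodePort (a small sketch; the service name echoserver comes from the commands above), and ipconfig on Windows shows the host IP:
kubectl get service echoserver -o jsonpath='{.spec.ports[0].nodePort}'
# prints e.g. 31570; then, from Windows or WSL:
curl.exe http://<windows_host's_IP>:31570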

Container docker & Kubernetes apache tomcat 8.5.56 http status 404

I'm running a .war application on Apache Tomcat 8.5.56 in a Docker container and everything works well, but when I deploy the container on Kubernetes I can't access my application's welcome page; I get this error message:
HTTP Status 404 – Not Found
Type Status Report
Message The requested resource [/SmartClass] is not available
Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.56
Please, does anyone know how to solve it?
For the deployment I have just copied the .war file into
/opt/apache-tomcat/webapps/ and I have copied my server.xml file into /opt/apache-tomcat/conf/
It looks like the problem is related to the connection to the application.
Create a Service object that exposes your Tomcat deployment:
kubectl expose deployment tomcat-example --type=NodePort --name=example-service
Display information about the Service:
kubectl describe services example-service
The output is similar to this:
Name: example-service
Namespace: default
Labels: run=example
Annotations: <none>
Selector: run=example
Type: NodePort
IP: 10.32.0.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30000/TCP
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
Session Affinity: None
Events: <none>
Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 30000.
List the pods that are running the Tomcat application:
kubectl get pods --selector="run=example" --output=wide
The output is similar to this:
NAME READY STATUS ... IP NODE
tomcat-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1
tomcat-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2
Get the public IP address of one of your nodes that is running a Tomcat pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see the node address by running kubectl cluster-info. If you are using Google Compute Engine instances, you can use the gcloud compute instances list command to see the public addresses of your nodes.
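For example, on minikube either of these shows the node address (minikube ip is not mentioned in the steps above, just a convenient extra):
kubectl cluster-info
minikube ip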
On your chosen node, create a firewall rule that allows TCP traffic on your node port. For example, if your Service has a NodePort value of 30000, create a firewall rule that allows TCP traffic on port 30000. Different cloud providers offer different ways of configuring firewall rules.
Use the node address and node port to access your Tomcat application:
curl http://<public-node-ip>:<node-port>
where <public-node-ip> is the public IP address of your node, and <node-port> is the NodePort value for your service.
Please adjust the above command according to the names and values you have actually used.
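As a hedged sketch putting the pieces together (the jsonpath expressions are generic helpers, not from the tutorial, and /SmartClass is the context path from your 404 message):
NODE_PORT=$(kubectl get service example-service -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
curl "http://${NODE_IP}:${NODE_PORT}/SmartClass"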

Kubernetes: Frontend-Pod cannot resolve dns of Backend-Service (using Minikube)

I am learning Kubernetes and I ran into trouble reaching an API in my local Minikube (Docker driver).
I have a pod running an Angular client which tries to reach a backend pod. The frontend pod is exposed by a NodePort Service. The backend pod is exposed to the cluster by a ClusterIP Service.
But when I try to reach the ClusterIP service from the frontend, the DNS name transpile-svc.default.svc.cluster.local cannot be resolved.
(error message screenshot from the client)
The DNS should work properly. I followed https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ and deployed a dnsutils pod from which I can nslookup:
winpty kubectl exec -i -t dnsutils -- nslookup transpile-svc.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: transpile-svc.default.svc.cluster.local
Address: 10.99.196.82
This is the .yaml file for the ClusterIP Service:
apiVersion: v1
kind: Service
metadata:
  name: transpile-svc
  labels:
    app: transpile
spec:
  selector:
    app: transpile
  ports:
  - port: 80
    targetPort: 80
Even if I hardcode the IP into the request from the frontend, I get an empty response.
I verified that the backend pod is working correctly, and when I expose it as a NodePort I can reach the API with my browser.
What am I missing here? I've been stuck on this problem for quite some time now and can't find any solution.
Since your frontend application calls the backend from the browser, i.e. from outside the cluster, you need to expose your backend application to the outside network too.
There are two ways: either expose it directly by changing the transpile-svc service to the LoadBalancer type, or introduce an ingress controller (e.g. the NGINX ingress controller with an Ingress object), which will handle all redirections.
Steps to expose the service as a LoadBalancer in minikube (a short sketch of these steps follows below):
1. Change your transpile-svc service type to LoadBalancer.
2. Run minikube service transpile-svc to expose the service, i.e. an IP will be allocated.
3. Run kubectl get services to get the external IP assigned. Use IP:PORT to call it from the frontend application.
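Something like this, assuming you keep the service name transpile-svc (the printed URL is only an example):
kubectl patch service transpile-svc -p '{"spec":{"type":"LoadBalancer"}}'
minikube service transpile-svc --url
# use the URL this prints (e.g. http://192.168.49.2:30080) in the frontend code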
DNS hostnames of the form *.svc.cluster.local are only resolvable from within the Kubernetes cluster. You should use http://NODEIP:NODEPORT, or the URL provided by minikube service transpile-svc --url, in the frontend JavaScript code, which runs in a browser outside the Kubernetes cluster.
If the frontend pod is nginx, then you can configure the backend service name in the nginx configuration file as described in the docs:
upstream transpile {
    server transpile-svc;
}
server {
    listen 80;
    location / {
        proxy_pass http://transpile;
    }
}

How to make kubernetes pod have access to PostgreSQL Pod

I am trying out local Kubernetes (Docker on Mac) and trying to submit a Spark job. The Spark job connects to a PostgreSQL database and does some calculations.
PostgreSQL is running on my cluster and, since I have published it, I can access it from the host via localhost:5432. However, when the Spark application tries to connect to PostgreSQL, it throws:
Exception in thread "main" org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get service postgresql-published
kubectl describe service spark-store-1588217023181-driver-svc
Name: spark-store-1588217023181-driver-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: spark-app-selector=spark-533ecb8556b6439eb938d487cc77c330,spark-role=driver
Type: ClusterIP
IP: None
Port: driver-rpc-port 7078/TCP
TargetPort: 7078/TCP
Endpoints: <none>
Port: blockmanager 7079/TCP
TargetPort: 7079/TCP
Endpoints: <none>
Session Affinity: None
How can I make my spark job, have access to PostgreSQL service?
localhost is there in EXTERNAL-IP, but the Kubernetes cluster DNS system (CoreDNS) does not know how to resolve it to an IP address. EXTERNAL-IP is supposed to be resolved by an external DNS server, and it's generally meant to be used to connect to Postgres from outside the Kubernetes cluster (i.e. from another system, or from the Kubernetes nodes) and not from inside the cluster (i.e. from another pod).
Postgres should be accessible from the Spark pod via 10.106.15.112:5432 or postgresql-published:5432, because the Kubernetes cluster DNS system knows how to resolve those.
Test the Postgres connectivity
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=<HERE_YOUR_PASSWORD>" --command -- psql --host <HERE_HOSTNAME=SVC_OR_IP> -U <HERE_USERNAME>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORTS
postgresql-published LoadBalancer 10.106.15.112 localhost 5432:31277
This means that the service should be accessible within the cluster at 10.106.15.112:5432 or postgresql-published:5432, and externally at localhost:31277.
Please note that for a Pod, localhost is the Pod itself, so in this case localhost looks ambiguous. However, that is how the expose works.
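So the Spark job's connection string should point at the in-cluster service name rather than localhost; roughly like this (the database name mydb is just a placeholder):
jdbc:postgresql://postgresql-published:5432/mydb
# or, using the ClusterIP from the output above
jdbc:postgresql://10.106.15.112:5432/mydb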

Kubernetes, Flannel and exposing services

I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Hostnode IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-px6uy 1/1 Running 0 4m
[root@kubemaster ~]# kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S) AGE
kubernetes component=apiserver,provider=kubernetes <none> 172.16.0.1 443/TCP 31m
nginx run=nginx run=nginx 172.16.84.166 9000/TCP 3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME LABELS SELECTOR IP(S) PORT(S) AGE
nginx run=nginx run=nginx 9000/TCP 292y
I'm now expecting to be able to get to the nginx container on the host node's IP (192.168.4.57). Have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?
So the issue here was that there's a missing piece of the puzzle when you use nodePort.
I was also making a mistake with the commands.
Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx, and it will show you the NodePort that has been assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx
Selector: run=nginx
Type: NodePort
IP: 172.16.92.8
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32033/TCP
Endpoints: 10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
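If you prefer a fixed value, a minimal sketch of a Service manifest would look like this (the name and selector run=nginx match the output above; 32033 is just the example NodePort from it):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32033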
Yes, you would need to use NodePort.
When you hit the service, the destination port should be equal to the NodePort.
The destination IP for the service should be considered local by the nodes, e.g. you could use the host IP of one of the nodes.
A load balancer helps because it would handle situations where your node went down, but other nodes could still serve the service.
If you're running a cluster on bare metal, or at a provider that does not supply a load balancer, you can also define the port as a hostPort on your pod.
You define your container and its ports:
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
    hostPort: 80
    name: http
This will bind the container to the host's networking and use the port defined.
The two limitations here are obviously:
1) You can only have one of these pods on each host, maximum.
2) The IP is the host IP of the node it binds to.
This is essentially how the cloud provider load balancers work, in a way.
Using the new DaemonSet features, it's possible to define which node the pod will land on and fix the IP. However, that necessarily impairs the high-availability aspect, but at some point there is not much choice, as DNS load balancing will not avoid forwarding to dead nodes.
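A rough sketch of that DaemonSet idea (the name, the edge=true label, and the modern apps/v1 API version are my assumptions, not from the original setup):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-edge
spec:
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      nodeSelector:
        edge: "true"        # only land on nodes labeled edge=true, so the host IP is known
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80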
