Can't access kubernetes nodeport service - docker

I don't think I'm missing anything, but my Angular app doesn't seem to be able to contact the service I exposed through Kubernetes.
Whenever I try to call the exposed NodePort on my localhost, I get a connection refused.
The deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: society-api-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: society-api-gateway-deployment
  template:
    metadata:
      labels:
        app: society-api-gateway-deployment
    spec:
      containers:
        - name: society-api-gateway-deployment
          image: tbusschaert/society-api-gateway:latest
          ports:
            - containerPort: 80
The service file
apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30001
I double-checked: the call doesn't reach my pod, and the call fails with the connection refused error mentioned above.
I'm using minikube and kubectl on my local machine.
I'm out of options; I've tried everything I thought it could be. Thanks in advance.
EDIT 1:
So after following the suggestions, I used the node IP to call the service.
I changed the IP in my Angular project, and now I get a connection timeout instead of a connection refused.
As for the port-forward, I get a permission error.

So, as I thought, the problem was related to minikube not opening up to my localhost.
First of all, I didn't need a NodePort; a LoadBalancer also fit my need, so my API gateway service became a LoadBalancer.
Second, when using minikube, to achieve what I wanted (running Kubernetes on my local machine with my Angular client also on my local machine), you have to create a minikube tunnel, exactly as explained here: https://minikube.sigs.k8s.io/docs/handbook/accessing/#run-tunnel-in-a-separate-terminal
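For reference, a minimal sketch of that change, reusing the names from the manifests above (only the service type differs from the question's file):
apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: LoadBalancer # was NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
    - name: http
      port: 80
      targetPort: 80
Then keep the tunnel running in a separate terminal and take the external IP from kubectl get svc:
minikube tunnel
kubectl get svc society-api-gateway-service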

From the docs, you can see that the template will be like this: <NodeIP>:<NodePort>.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So first, take the NodeIP from the kubectl get node -o wide command.
Then try <NodeIP>:<NodePort>. For example, if the NodeIP is 172.19.0.2, then try 172.19.0.2:30001 with your sub-URL.
Or, another way is port-forwarding. In a terminal, first try port-forwarding with kubectl port-forward svc/society-api-gateway-service 80:80. Then use the URL you tried before, with localhost.
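A minimal sketch of both checks, reusing the service name from the question (the node IP and path are examples):
# 1. take the INTERNAL-IP of the node
kubectl get node -o wide

# 2. call the NodePort directly, e.g. if the node IP is 172.19.0.2
curl http://172.19.0.2:30001/your-sub-url

# 3. or port-forward; binding local port 80 needs elevated privileges, so a high local port avoids that
kubectl port-forward svc/society-api-gateway-service 8080:80
curl http://localhost:8080/your-sub-url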

Related

How can I correctly forward traffic from a container to a NodePort service with Kubernetes?

I am running Minikube on an M1 Mac with the Docker driver. I have a container in a pod serving HTTP on port 7777; according to the documentation, I can use a combination of a NodePort and the minikube service command to expose it to the local machine. My configuration YAML file is pretty simple as well:
apiVersion: v1
kind: Pod
metadata:
  name: door-controls
  labels:
    type: door-controls
spec:
  containers:
    - image: door_controls
      name: door-controls
      imagePullPolicy: Never
      ports:
        - containerPort: 7777
          name: httpz
---
apiVersion: v1
kind: Service
metadata:
  name: door-control-service
spec:
  type: NodePort
  selector:
    type: door-controls
  ports:
    - name: svc-http
      protocol: TCP
      port: 80
      targetPort: httpz
Running this in minikube and then using minikube service exposes the running process on a random port. From a machine inside the network, I can wget the pod IP on port 7777 and get data back, so I know the pod is serving traffic correctly. I can also wget the door-control-service NodePort service from inside the network on port 80 and get traffic back, so I know the door-control-service configuration is working. But no amount of futzing will let me access door-control-service via the NodePort (which is randomly generated in the ~30k range), and the browser launched by minikube service never returns data, so I can't access it from outside the cluster either.
What am I doing wrong? Or more generally, how can I debug this issue? I am new to kubernetes and not sure where in the logs I should be looking for errors in the first place.
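A few generic checks that would narrow this down, assuming the names from the YAML above are used as-is (this is a debugging sketch, not a confirmed fix):
# does the Service actually select the pod? ENDPOINTS should show <pod-ip>:7777
kubectl get endpoints door-control-service

# which nodePort was assigned?
kubectl get svc door-control-service

# with the Docker driver, minikube service opens a tunnel and prints a URL reachable from the host
minikube service door-control-service --url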

Creating a connection string in the cloud for Redis

I have a Redis pod, and I expect connection requests to this pod from different clusters and from applications not running in the cloud.
Since Redis does not speak HTTP, exposing it as a Route, as I have done below, does not work with the connection string "route-redis.local:6379".
route.yml
apiVersion: v1
kind: Route
metadata:
  name: redis
spec:
  host: route-redis.local
  to:
    kind: Service
    name: redis
service.yml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    name: redis
You may have encountered this situation. In short, is there any way to access the Redis pod via a Route? If not, how do you solve this problem?
You already discovered that Redis does not work via the HTTP protocol, which is correct as far as I know. Routes work by inspecting the HTTP Host header of each request, which will not work for Redis. This means that you will not be able to use Routes for non-HTTP workloads.
Typically, such non-HTTP services are exposed via a Service of type NodePort. This means that each worker node in your cluster will open this port and forward the traffic to your application.
You can find more information in the Kubernetes documentation:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
You can define a NodePort like so (this example is for MySQL, which is also non-HTTP workload):
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30036
      name: http
  selector:
    name: mysql
Of course, your administrator may limit the access to these ports, so it may or may not be possible to use these types of services on your OpenShift cluster.
You can also expose TCP via an Ingress controller, at least the NGINX one:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
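For reference, a hedged sketch of the ConfigMap format described on that page, assuming the controller runs in the ingress-nginx namespace and the redis Service lives in default:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service>:<service port>"
  "6379": "default/redis:6379"
The ingress controller's own Service also has to expose port 6379 for this to be reachable from outside the cluster.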

How to connect to minikube exposed Nodeport from other computers

I am trying to expose one of my applications running on minikube to the outside world. I have already used a NodePort, and I can access the application from the same host machine using a web browser.
But I need to expose this application to one of my friends who lives far away, so he can see it in his browser too.
This is how my deployment.yaml file looks; should I use an Ingress, and if so, how can I do this using an Ingress?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
How can I expose this deployment which is running a nodejs server to outside world?
You generally can’t. The networking is set up only for the host machine. You could probably use ngrok or something though?
You can use ngrok. For example:
ngrok http 8000
This will generate a publicly accessible URL.
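A minimal sketch of how that combines with the NodePort from the question; the service name node-web-app and the printed URL are assumptions:
# get the URL minikube exposes for the NodePort service
minikube service node-web-app --url
# suppose it prints http://192.168.49.2:31234

# tunnel that address to a public ngrok URL
ngrok http 192.168.49.2:31234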

Can't access an Express.js service via a Kubernetes NodePort service in a MicroK8s cluster

I have a simple Express.js server Dockerized and when I run it like:
docker run -p 3000:3000 mytag:my-build-id
http://localhost:3000/ responds just fine, and so does the LAN IP of my workstation, e.g. http://10.44.103.60:3000/.
Now if I deploy this to MicroK8s with a Service declaration like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
status:
  loadBalancer: {}
and pod specification like so (update 2019-11-05):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-service
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-service
    spec:
      containers:
        - image: mytag:my-build-id
          name: my-service
          ports:
            - containerPort: 3000
          resources: {}
      restartPolicy: Always
status: {}
and obtain the exposed NodePort via kubectl get services to be 32750 and try to visit it on the MicroK8s host machine like so:
curl http://127.0.0.1:32750
then the request just hangs and if I try to visit the LAN IP of the MicroK8s host from my workstation at
http://192.168.191.248:32750/
then the request is immediately refused.
But, if I try to port forward into the pod with
kubectl port-forward my-service-5db955f57f-q869q 3000:3000
then http://localhost:3000/ works just fine.
So the pod deployment seems to be working fine and example services like the microbot-service work just fine on that cluster.
I've made sure the Express.js server listens on all IPs with
app.listen(port, '0.0.0.0', () => ...
So what can be the issue?
You need to add a selector to your service. This will tell Kubernetes how to find your deployment. Additionally you can use nodePort to specify the port number of your service. After doing that you will be able to curl your MicroK8s IP.
Your Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 3000 # the container listens on 3000
      nodePort: 30001
  selector:
    name: my-service
status:
  loadBalancer: {}
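To verify the fix, a couple of hedged checks (the node IP is a placeholder):
# the Service should now list the pod as an endpoint
kubectl get endpoints my-service

# the fixed nodePort should then answer on the MicroK8s node
curl http://<microk8s-node-ip>:30001/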
the LAN IP of the MicroK8s host from my workstation
That is the central source of your misunderstanding: localhost, 127.0.0.1, and your machine's LAN IP have nothing to do with what is apparently a virtual machine running MicroK8s (it would have been very valuable to state that in your question, rather than leaving us to deduce it from one buried sentence).
I've made sure the Express.js server listens on all IPs with
Based on what you reported later:
at http://192.168.191.248:32750/ then the request is immediately refused.
then it appears that your Express server is not, in fact, listening on all interfaces. That explains why you can successfully port-forward into the Pod (which makes the traffic appear on the Pod's localhost) but cannot reach it from "outside" the Pod.
You can also test that theory by using another Pod inside the cluster to curl its Pod IP on port 3000 (in order to side-step the Service and NodePort parts).
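A minimal sketch of that in-cluster check; the Pod IP below is a placeholder you would take from kubectl get pod -o wide:
# find the Pod IP
kubectl get pod -o wide

# hit it from a throwaway pod inside the cluster (10.1.254.96 is a placeholder)
kubectl run -it --rm debug --image=busybox:1.36 --restart=Never -- wget -qO- http://10.1.254.96:3000/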
There is a small chance that you have misconfigured your Pod and Service relationship, but since you didn't post your PodSpec and the behavior you are describing sounds a lot more like an Express misconfiguration, we'll go with that until we have evidence to the contrary.

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
    - port: 443
      targetPort: 443
      name: https
      protocol: TCP
  selector:
    app: httpd
  externalIPs:
    - 10.128.0.2 # VM's internal IP
I can receive traffic fine from the external IP bound to the VM, but all of the requests received by the httpd pod have the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
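For illustration, a hedged sketch of where that annotation would go; on current clusters the same behaviour is the spec.externalTrafficPolicy: Local field, and it only applies to NodePort/LoadBalancer Services as the linked doc describes:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    # legacy beta annotation referenced above
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: LoadBalancer
  # equivalent field on current clusters
  externalTrafficPolicy: Local
  selector:
    app: httpd
  ports:
    - name: http
      port: 80
      targetPort: 80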
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
        - name: caddy
          image: your_image
          env:
            - name: STATIC_BACKEND # example env in my custom image
              value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver instead of Kubernetes'. That means you can no longer resolve cluster services by DNS name; for example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected as environment variables.
