Can't get access to the application via Kubernetes NodePort - docker

I have configured my service so that it is built on port 8080.
My Docker image also uses port 8080.
I created my ReplicaSet with a configuration like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-backend-rs
spec:
  containers:
  - name: my-app-backend
    image: go-my-app-backend
    ports:
    - containerPort: 8080
    imagePullPolicy: Never
Finally, I created a Service of type NodePort, also on port 8080, with the configuration below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-backend-rs
  name: my-app-backend-svc-nodeport
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-app-backend
After running describe on the NodePort Service, I see that I should be able to reach my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.
Type:                     NodePort
IP:                       10.110.250.176
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31859/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
What am I misunderstanding, and what am I doing wrong? Can anyone explain this to me?

From your output, I can see the endpoint below has been created. So one pod is ready to serve this NodePort service, and the labels are not an issue here.
Endpoints: 172.17.0.6:8080
First, ensure you are able to access the app by running curl http://<podhostname>:8080 once you are logged into the pod with kubectl exec -it podname -- sh (assuming curl is installed in the image running in that pod's container). If it is not, run a curl-capable ambassador container in the pod as a sidecar, and from that container try to access http://<podhostname>:8080 and ensure it is working.
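If curl is missing from your image, a minimal sidecar sketch along those lines could look like this (the curlimages/curl image and the curl-sidecar name are illustrative assumptions; containers in a pod share the network namespace, so localhost reaches the app):
apiVersion: v1
kind: Pod
metadata:
  name: my-app-backend-debug
spec:
  containers:
  - name: my-app-backend
    image: go-my-app-backend
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
  - name: curl-sidecar
    image: curlimages/curl
    # keep the sidecar alive so we can exec into it
    command: ["sleep", "3600"]
Then: kubectl exec -it my-app-backend-debug -c curl-sidecar -- curl http://localhost:8080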
Remember that you can't access the NodePort service as localhost, since that will point at your master node if you are running the command from the master node.
You have to access this service in one of the ways below:
<CLUSTER-IP>:<PORT> - in your case: 10.110.250.176:8080
<1st node's IP>:31859
<2nd node's IP>:31859
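Concretely, using the values from your describe output above (the node IP is a placeholder for whatever address your nodes actually have):
curl http://10.110.250.176:8080   # cluster IP + service port, from inside the cluster
curl http://<node-ip>:31859       # node IP + NodePort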

I tried to use curl after kubectl exec -it podname sh
In this very example the double dash is missing in front of the sh command.
Please note that the correct syntax can be checked at any time with kubectl exec -h and looks like:
kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
If you have only one container per Pod, it can be simplified to:
kubectl exec -it PODNAME -- COMMAND
The caveat of not specifying the container is that, in case of multiple containers in that Pod, you'll be connected to the first one :)
Example: kubectl exec -it pod/frontend-57gv5 -- curl localhost:80
I also tried hitting 10.110.250.176:80:31859, but I think that is incorrect. Sorry, I'm a beginner at network stuff.
Yes, that is not correct, as the value for :port occurs twice. In that example you need to hit 10.110.250.176:8080 (as 10.110.250.176 is the Cluster IP).
After running describe on the NodePort Service, I see that I should be able to reach my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I have no response.
It depends on where you are going to run that command.
In this very case it is not clear what exactly you have put into the ReplicaSet config (whether the Service's selector matches the ReplicaSet's labels), so let me explain how this is supposed to work.
Assuming we have the following ReplicaSet (the example below is a slightly modified version of the official documentation on the topic):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: guestbook
    tier: frontend-meta
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend-label
  template:
    metadata:
      labels:
        tier: frontend-label  ## shall match spec.selector.matchLabels.tier
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
And the following service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-svc-tier-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    tier: frontend-label  ## shall match labels from ReplicaSet spec
We can create the ReplicaSet (RS) and Service. As a result, we shall be able to see the RS, Pods, Service and Endpoints:
kubectl get rs -o wide
NAME          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                 SELECTOR
frontend-rs   2         2         2       10m   php-redis    gcr.io/google_samples/gb-frontend:v3   tier=frontend-label

kubectl get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE
frontend-rs-76sgd   1/1     Running   0          11m   10.12.0.31   gke-6v3n
frontend-rs-fxxq8   1/1     Running   0          11m   10.12.1.33   gke-m7z8

kubectl get svc -o wide
NAME                         TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
frontend-svc-tier-nodeport   NodePort   10.0.5.10    <none>        80:32113/TCP   9m41s   tier=frontend-label

kubectl get ep -o wide
NAME                         ENDPOINTS                     AGE
frontend-svc-tier-nodeport   10.12.0.31:80,10.12.1.33:80   10m

kubectl describe svc/frontend-svc-tier-nodeport
Selector:     tier=frontend-label
Type:         NodePort
IP:           10.0.5.10
Port:         <unset>  80/TCP
TargetPort:   80/TCP
NodePort:     <unset>  32113/TCP
Endpoints:    10.12.0.31:80,10.12.1.33:80
The important thing we can see from this example is that Port was set to 80:32113/TCP for the service we created.
That allows us to access the "gb-frontend:v3" app in a few different ways:
from inside the cluster: curl 10.0.5.10:80 (CLUSTER-IP:PORT) or curl frontend-svc-tier-nodeport:80
from the external network (internet): curl PUBLIC_IP:32113, where PUBLIC_IP is an IP at which you can reach a Node in your cluster. All the nodes in the cluster listen on the NodePort and forward requests according to the Service's selector.
from the Node: curl localhost:32113
Hope that helps.

Related

Ingress configuration issue in Docker kubernetes cluster

I am fairly new to Kubernetes and Docker in general and am experiencing issues.
I am running a single local Kubernetes cluster via Docker and am using skaffold to control the build-up and teardown of objects within the cluster. When I run skaffold dev, the build seems successful, yet when I attempt to make a request to my cluster via Postman, the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere here. My request-handling logic is simple, so I feel the issue is not in the route handling but in the configuration of my cluster, specifically the ingress controller. Below I will post my skaffold yaml config and my ingress yaml config.
Any help is greatly appreciated, as I have struggled with this bug for some time.
ingress yaml config :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
Note that I have a redirect in my /etc/hosts file from ticketing.dev to 127.0.0.1
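For reference, the corresponding /etc/hosts entry would look like this (matching the redirect described above):
127.0.0.1 ticketing.dev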
Auth service yaml config :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: conorl47/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
skaffold yaml config :
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
    - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
  - image: conorl47/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
For installing the ingress-nginx controller I followed the installation instructions at https://kubernetes.github.io/ingress-nginx/deploy/, namely the Docker Desktop installation instructions.
After running that command I see two Docker containers running in Docker Desktop.
The two services created in the ingress-nginx namespace are:
❯ k get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.6.146   <pending>     80:30036/TCP,443:30465/TCP   22m
ingress-nginx-controller-admission   ClusterIP      10.108.8.26    <none>        443/TCP                      22m
When I kubectl describe both of these services I see the following :
❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/version=1.0.0
                          helm.sh/chart=ingress-nginx-4.0.1
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.6.146
IPs:                      10.103.6.146
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30036/TCP
Endpoints:                10.1.0.10:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30465/TCP
Endpoints:                10.1.0.10:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     32485
Events:                   <none>
and:
❯ kubectl describe service ingress-nginx-controller-admission -n ingress-nginx
Name:              ingress-nginx-controller-admission
Namespace:         ingress-nginx
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/version=1.0.0
                   helm.sh/chart=ingress-nginx-4.0.1
Annotations:       <none>
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.108.8.26
IPs:               10.108.8.26
Port:              https-webhook  443/TCP
TargetPort:        webhook/TCP
Endpoints:         10.1.0.10:8443
Session Affinity:  None
Events:            <none>
As it seems, you have made the ingress service of type LoadBalancer. This will usually provision an external load balancer from your cloud provider of choice. That's also why it's still pending: it's waiting for the load balancer to be ready, but that will never happen here.
If you want that ingress service to be reachable outside your cluster, you need to use type NodePort.
Their docs are not great on this point, and LoadBalancer seems to be the default, so you could download the content of https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml and modify it before applying. Or you can use Helm, which lets you configure this.
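For instance, a sketch with Helm (this mirrors the chart's documented quickstart; controller.service.type is the relevant chart value, and the release name and namespace are up to you):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort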
You could also do it in this dirty fashion.
kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \
| sed s/LoadBalancer/NodePort/g \
| kubectl apply -f -
You could also edit the LoadBalancer service in place:
kubectl edit svc ingress-nginx-controller -n ingress-nginx
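Or patch just that field non-interactively, using standard kubectl patch syntax against the same controller service:
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"NodePort"}}'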

Can't Access Kubernetes Service Exposed via NodePort

I'm using minikube to test Kubernetes on the latest macOS.
Here are my relevant YAMLs:
namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: micro
  labels:
    name: micro
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: adderservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: adderservice
    spec:
      containers:
      - name: adderservice
        image: jeromesoung/adderservice:0.0.1
        ports:
        - containerPort: 8080
service.yml
apiVersion: v1
kind: Service
metadata:
  name: adderservice
  labels:
    run: adderservice
spec:
  ports:
  - port: 8080
    name: main
    protocol: TCP
    targetPort: 8080
  selector:
    run: adderservice
  type: NodePort
After running minikube start, the steps I took to deploy are as follows:
kubectl create -f namespace.yml to create the namespace
kubectl config set-context minikube --namespace=micro
kubectl create -f deployment.yml
kubectl create -f service.yml
Then, I get the node IP and NodePort with the commands below.
kubectl get services to get the NodePort:
$ kubectl get services
NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
adderservice   NodePort   10.99.155.255   <none>        8080:30981/TCP   21h
minikube ip to get the node IP:
$ minikube ip
192.168.99.103
But when I run curl, I always get Connection Refused, like this:
$ curl http://192.168.99.103:30981/add/1/2
curl: (7) Failed to connect to 192.168.99.103 port 30981: Connection refused
So I checked the node, pod, deployment and endpoints as follows:
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   23h   v1.13.3

$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
adderservice-5b567df95f-9rrln   1/1     Running   0          23h

$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
adderservice   1         1         1            1           23h

$ kubectl get endpoints
NAME           ENDPOINTS         AGE
adderservice   172.17.0.5:8080   21h
I also checked the service list in minikube with:
$ minikube service -n micro adderservice --url
http://192.168.99.103:30981
I've read many posts about accessing a k8s service via NodePort. To my knowledge, I should be able to access the app with no problem. The only thing I suspect is the custom namespace I'm using. Could this cause the access issue?
I know the namespace changes the DNS, so, to be complete, I also ran the commands below:
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.default
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1

$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.micro
Server:    10.96.0.10
Address:   10.96.0.10#53

Non-authoritative answer:
Name:      kubernetes.micro
Address:   198.105.244.130
Name:      kubernetes.micro
Address:   104.239.207.44
Could anyone help me out? Thank you.
The error Connection Refused usually means that the application inside the container does not accept requests on the targeted interface, or is not mapped to the expected ports.
Things you need to be aware of:
Make sure that your application binds to 0.0.0.0, so it can receive requests from outside the container, whether from outside the cluster or from other containers.
Make sure that your application is actually listening on the containerPort and targetPort, as expected.
In your case, make sure that ADDERSERVICE_SERVICE_HOST equals 0.0.0.0 and ADDERSERVICE_SERVICE_PORT equals 8080, which should be the same value as targetPort in service.yml and containerPort in deployment.yml.
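To double-check what the pod actually sees, a quick sketch (this relies on the <SERVICE>_SERVICE_HOST/_PORT variables that Kubernetes injects automatically, as another answer on this page also shows, and assumes a shell and env are available in the image):
kubectl exec -it adderservice-5b567df95f-9rrln -- env | grep -i adderservice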
Not answering the question, but in case someone who googled this lands here like me and faces the same issue: here is my solution to the same problem.
My Mac's system IP and the minikube IP are different, so localhost:port didn't work. Instead, get the IP with:
minikube ip
Then use that IP:port to access the app, and it works.
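Putting it together in a single command (the /add/1/2 path and port 30981 are taken from the question above):
curl http://$(minikube ip):30981/add/1/2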
Check if the service is really listening on 8080. Try telnet within the container:
telnet 127.0.0.1 8080
...
telnet 172.17.0.5 8080

Kubernetes and Docker: how to let two services communicate correctly

I have two Java microservices (caller.jar, which calls called.jar).
We can set the caller service's HTTP port through an env var CALLERPORT, and the address of the called service through an env var CALLEDADDRESS. So caller uses two env vars.
We must also set the called service's env var CALLEDPORT, in order to set the specific HTTP port on which the called service listens for HTTP requests.
I don't know exactly how to expose these variables from a Dockerfile, so that they can be set using Kubernetes.
Here is how I made the two Dockerfiles:
Dockerfile of caller
FROM openjdk:8-jdk-alpine
# ENV CALLERPORT (its own port)
# ENV CALLEDADDRESS (the other service's address)
ADD caller.jar /
CMD ["java", "-jar", "caller.jar"]
Dockerfile of called
FROM openjdk:8-jdk-alpine
# ENV CALLEDPORT (its own port)
ADD called.jar /
CMD ["java", "-jar", "called.jar"]
With these I've built two Docker images:
myaccount/caller
myaccount/called
Then I made the two deployment YAMLs so that K8s (on minikube) deploys the two microservices using replicas and load balancers.
deployment-caller.yaml
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: caller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: caller
  labels:
    app: caller
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: caller
      tier: caller
  template:
    metadata:
      labels:
        app: caller
        tier: caller
    spec:
      containers:
      - image: myaccount/caller
        name: caller
        env:
        - name: CALLERPORT
          value: "8080"
        - name: CALLEDADDRESS
          value: called-loadbalancer # WHAT TO PUT HERE?!
        ports:
        - containerPort: 8080
          name: caller
And deployment-called.yaml
apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 8081
    targetPort: 8081
  selector:
    app: called
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: called
  labels:
    app: called
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: called
      tier: called
  template:
    metadata:
      labels:
        app: called
        tier: called
    spec:
      containers:
      - image: myaccount/called
        name: called
        env:
        - name: CALLEDPORT
          value: "8081"
        ports:
        - containerPort: 8081
          name: called
IMPORTANT:
The individual services work well when called on their own (e.g. hitting a healthcheck endpoint), but when calling the endpoint that involves communication between the two services, I get this error:
java.net.UnknownHostException: called
The pods are correctly running and active, but I guess the problem is the part of the deployment.yaml where I must define how to find the pointed-to service, i.e. here:
spec:
  containers:
  - image: myaccount/caller
    name: caller
    env:
    - name: CALLERPORT
      value: "8080"
    - name: CALLEDADDRESS
      value: called-loadbalancer # WHAT TO PUT HERE?!
    ports:
    - containerPort: 8080
      name: caller
Neither called, nor called-loadbalancer, nor http://caller works if put in that line of the deployment.yaml.
kubectl get pods,svc -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
pod/called-855cc4d89b-4gf97   1/1     Running   0          3m23s   172.17.0.4   minikube   <none>           <none>
pod/called-855cc4d89b-6268l   1/1     Running   0          3m23s   172.17.0.5   minikube   <none>           <none>
pod/caller-696956867b-9n7zc   1/1     Running   0          106s    172.17.0.6   minikube   <none>           <none>
pod/caller-696956867b-djwsn   1/1     Running   0          106s    172.17.0.7   minikube   <none>           <none>

NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    SELECTOR
service/called-loadbalancer   LoadBalancer   10.99.14.91    <pending>     8081:30161/TCP   171m   app=called
service/caller-loadbalancer   LoadBalancer   10.107.9.108   <pending>     8080:30078/TCP   65m    app=caller
service/kubernetes            ClusterIP      10.96.0.1      <none>        443/TCP          177m   <none>

So what should I put in that line?
The short answer is that you don't need to expose them in the Dockerfile. You can set any environment variables you want when you start a container; they don't have to be specified upfront in the Dockerfile.
You can verify this by starting a container with docker run, using -e to set env vars and -it to get an interactive session. Then echo the value of your env var and you'll see it is set.
You can also get a terminal session in one of the containers of your running Kubernetes Pod with kubectl exec (https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/) and echo environment variables from there to see that they are set. You can see them more quickly with kubectl describe pod <podname> after getting the pod name with kubectl get pods.
Since you are having problems, you also want to check whether your services are working correctly. Since you are using minikube, you can run minikube service <servicename> to check that they can be accessed externally. You'll also want to check internal access - see Accessing spring boot controller endpoint in kubernetes pod.
Your approach of using service names and ports is valid. With a bit of debugging you should be able to get it working. Your setup is similar to an illustration I did in https://dzone.com/articles/kubernetes-namespaces-explained, so referring to that might help (except you are using env vars directly instead of through a ConfigMap, but it amounts to the same thing).
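For illustration, a minimal sketch of the ConfigMap variant mentioned above (the name caller-config is hypothetical; envFrom with configMapRef is the standard Kubernetes mechanism for this):
apiVersion: v1
kind: ConfigMap
metadata:
  name: caller-config
data:
  CALLERPORT: "8080"
  CALLEDADDRESS: "called-loadbalancer"
The Deployment's container would then replace its env: list with:
envFrom:
- configMapRef:
    name: caller-config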
I think that in the caller you are injecting the wrong port into the env var: you are putting the caller's own port rather than the port of the service it is trying to call.
To access services inside Kubernetes you should use these DNS names:
http://caller-loadbalancer.default.svc.cluster.local:8080
http://called-loadbalancer.default.svc.cluster.local:8081
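To verify those names resolve from inside the cluster, one quick sketch is a throwaway busybox pod (the pod name debug is arbitrary; nslookup ships with busybox):
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup called-loadbalancer.default.svc.cluster.local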
First of all, it's quite hard to understand what you want. Your post starts with:
We can set...
We must set...
Nobody here knows what you want to do, and it would be much more useful to see some definition of done that you are expecting.
That having been said, let me turn to your substantive question...
env:
- name: CALLERPORT
  value: "8080"
- name: CALLEDADDRESS
  value: called-loadbalancer # WHAT TO PUT HERE?!
ports:
- containerPort: 8080
  name: caller
These things are exported by k8s automatically. For example, I have a service kibana with port: 80 in its service definition:
svc/kibana   ClusterIP   10.222.81.249   <none>   80/TCP   1y   app=kibana
This is how I can read it from a different pod in the same namespace:
root@some-pod:/app# env | grep -i kibana
KIBANA_SERVICE_PORT=80
KIBANA_SERVICE_HOST=10.222.81.249
Moving on: why do you use LoadBalancer? Without a cloud provider it behaves much like NodePort, and it seems ClusterIP is all you need.
Next, the service ports can be the same and there won't be any port collisions, because the ClusterIP is unique every time and therefore the socket is unique for each service. Your services could be described like this:
apiVersion: v1
kind: Service
metadata:
  name: caller-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80          # <--------------------
    targetPort: 8080
  selector:
    app: caller

apiVersion: v1
kind: Service
metadata:
  name: called-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80          # <------------------
    targetPort: 8081
  selector:
    app: called
That would simplify using service names to just the names, without specifying ports:
http://caller-loadbalancer.default.svc.cluster.local
http://called-loadbalancer.default.svc.cluster.local
or
http://caller-loadbalancer.default
http://called-loadbalancer.default
or (within the same namespace):
http://caller-loadbalancer
http://called-loadbalancer
or (depending on the lib):
caller-loadbalancer
called-loadbalancer
The same goes for containerPort/targetPort: why do you use 8081 and 8080? Who cares about internal container ports? Different cases do happen, I agree, but in this case you have a single process inside and you are definitely not going to run more processes there, are you? So those could be the same as well.
I'd also advise you to use Stack Overflow differently: don't ask how to do something your way; it's much better to ask how to do it the best way.

Kubernetes NodePort doesn't return the response from the container

I've developed a containerized Flask application and I want to deploy it with Kubernetes. However, I can't connect the Container's ports to the Service correctly.
Here is my Deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: <my-app-name>
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
      - name: <container-name>
        image: <container-image>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
          name: http-port
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  selector:
    app: flaskapp
  ports:
  - name: http
    protocol: TCP
    targetPort: 5000
    port: 5000
    nodePort: 30013
  type: NodePort
When I run kubectl get pods, everything seems to work fine:
NAME       READY   STATUS    RESTARTS   AGE
<pod-id>   1/1     Running   0          7m
When I run kubectl get services, I get the following:
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
<service-name>   NodePort   10.105.247.63   <none>        5000:30013/TCP
...
However, when I enter 10.105.247.63:30013 in the browser, it keeps loading but never returns the data from the application.
Does anyone know where the problem could be? It seems that the Service is not connected to the container's port.
30013 is the port on the Node, not on the cluster IP. To get a reply you would have to connect to <IP-address-of-the-node>:30013. To get the list of nodes you can run:
kubectl get nodes -o=wide
You can also go through the CLUSTER-IP, but then you have to use the exposed port 5000: 10.105.247.63:5000
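On minikube specifically, there is also a shortcut that prints the reachable node URL directly (as used in another question on this page; substitute your actual service name):
minikube service <service-name> --url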

Accessing Erlang/Elixir node inside MiniKube pod

I've got a Deployment which has n nodes, and I have a Service that exposes 4369. I want to connect to one of those nodes via IEX. I am using MiniKube for my local development Kubernetes cluster; it binds to some IP and I can access its dashboard.
I tried calling minikube service thatServiceName, but after a few moments of waiting it exits and does not output the link that it is supposed to give me.
apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  clusterIP: None
  ports:
  - port: 10000
    targetPort: 10000
    name: disterl-mesh-0
  - port: 4369
    targetPort: 4369
    name: epmd
  selector:
    app: erlangpl-demo-mnesia
  type: ClusterIP
Could anyone let me know what I am missing or what I am doing wrong?
type: ClusterIP with clusterIP: None looks fishy to me: that combination makes it a headless service, and I do not think that minikube provides support for exposing that service type.
I would try using type: NodePort, which should expose the service on the minikube IP.
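A minimal sketch of that change, derived from the Service above (note that clusterIP: None has to be dropped, since a headless service cannot be of type NodePort; only the epmd port is shown here):
apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  type: NodePort
  ports:
  - port: 4369
    targetPort: 4369
    name: epmd
  selector:
    app: erlangpl-demo-mnesia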
You can connect to the pod directly:
kubectl exec -it your-pod-name
It defaults to bash, which I didn't have, so I had to do:
kubectl exec -it your-pod-name -- /bin/sh
I hope that helps.
