I've got a Deployment running n nodes, and a Service that exposes port 4369. I want to connect to one of those nodes via IEx. I am using Minikube for my local development Kubernetes cluster; it binds to some IP, and I can access its dashboard.
I tried calling minikube service thatServiceName, but after a few moments of waiting it exits without printing the URL it is supposed to give me.
apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  clusterIP: None
  ports:
  - port: 10000
    targetPort: 10000
    name: disterl-mesh-0
  - port: 4369
    targetPort: 4369
    name: epmd
  selector:
    app: erlangpl-demo-mnesia
  type: ClusterIP
Could anyone let me know what I'm missing or what I'm doing wrong?
type: ClusterIP with clusterIP: None looks fishy to me: that combination makes it a headless service, which has no cluster IP at all, so there is nothing for minikube service to expose.
I would try using type: NodePort, which should expose the service on the minikube IP.
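As an untested sketch, the Service from the question rewritten as a NodePort would look like this; note that clusterIP: None has to be dropped, because a NodePort Service needs a cluster IP:

apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  type: NodePort
  ports:
  - port: 10000
    targetPort: 10000
    name: disterl-mesh-0
  - port: 4369
    targetPort: 4369
    name: epmd
  selector:
    app: erlangpl-demo-mnesia

Then minikube service erlangpl-demo-mnesia should print a URL on the minikube IP.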
You can connect to the pod directly:
kubectl exec -it your-pod-name -- bash
Most examples use bash, which my image didn't have, so I had to do:
kubectl exec -it your-pod-name -- /bin/sh
I hope that helps.
Related
I am running Minikube on an M1 Mac with the Docker driver. I have a container in a pod serving HTTP on port 7777; according to the documentation, I can use a combination of a NodePort and the minikube service command to expose it to the local machine. My configuration YAML is pretty simple as well:
apiVersion: v1
kind: Pod
metadata:
  name: door-controls
  labels:
    type: door-controls
spec:
  containers:
  - image: door_controls
    name: door-controls
    imagePullPolicy: Never
    ports:
    - containerPort: 7777
      name: httpz
---
apiVersion: v1
kind: Service
metadata:
  name: door-control-service
spec:
  type: NodePort
  selector:
    type: door-controls
  ports:
  - name: svc-http
    protocol: TCP
    port: 80
    targetPort: httpz
Running this in minikube and then using minikube service exposes the running process on a random port. From a machine inside the network, I can wget the pod IP on port 7777 and get data back, so I know the pod is serving traffic correctly. I can also wget the door-control-service NodePort service from inside the network on port 80 and get traffic back, so I know the door-control-service configuration is working. But no amount of futzing lets me reach door-control-service via the NodePort (randomly assigned in the ~30000 range), and the browser launched by minikube service never returns data, so I can't access it from outside the cluster either.
What am I doing wrong? Or, more generally, how can I debug this issue? I am new to Kubernetes and not sure where in the logs I should be looking for errors in the first place.
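For reference, here is what I know to check so far (a sketch; names taken from the manifests above):

kubectl get endpoints door-control-service    # should list the pod IP with port 7777
kubectl describe service door-control-service # shows the ClusterIP/NodePort mapping
minikube service door-control-service --url   # prints the URL; with the Docker driver the URL is reportedly only live while this command keeps running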
I don't think I'm missing anything, but my Angular app doesn't seem to be able to contact the service I exposed through Kubernetes.
Whenever I try to call the exposed nodeport on my localhost, I get a connection refused.
The deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: society-api-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: society-api-gateway-deployment
  template:
    metadata:
      labels:
        app: society-api-gateway-deployment
    spec:
      containers:
      - name: society-api-gateway-deployment
        image: tbusschaert/society-api-gateway:latest
        ports:
        - containerPort: 80
The service file
apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
I double-checked: the call doesn't reach my pod, and the call fails with the connection refused error mentioned above.
I'm using minikube and kubectl on my local machine.
I'm out of options; I've tried everything I thought it could be. Thanks in advance.
EDIT 1:
After following the suggestions, I used the node IP to call the service and changed the IP in my Angular project; now I get a connection timeout instead.
As for the port forward, I get a permission error.
So, as I thought, the problem was related to minikube not exposing the cluster to my localhost.
First of all, I didn't actually need a NodePort; the LoadBalancer type also fit my need, so my API gateway became a LoadBalancer.
Second, when using minikube, to achieve what I wanted (running Kubernetes on my local machine with my Angular client also on my local machine), you have to create a minikube tunnel, exactly as explained here: https://minikube.sigs.k8s.io/docs/handbook/accessing/#run-tunnel-in-a-separate-terminal
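For reference, a sketch of that flow (the EXTERNAL-IP value is assigned by the tunnel and varies by platform):

# terminal 1: keep this running
minikube tunnel
# terminal 2: once the Service is type LoadBalancer, it gets an external IP
kubectl get svc society-api-gateway-service
curl http://<EXTERNAL-IP>/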
From the docs, you can see that the address template is <NodeIP>:<NodePort>.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So first, take the NodeIP from the kubectl get node -o wide command.
Then try <NodeIP>:<NodePort>. For example, if the NodeIP is 172.19.0.2, try 172.19.0.2:30001 with your sub-URL.
Or, another way is port-forwarding. In a terminal, first try port-forwarding with kubectl port-forward svc/society-api-gateway-service 80:80. Then use the URL you tried before with localhost.
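A small sketch of the port-forward route; local port 8080 is used here as an assumption, because binding local port 80 usually requires root (likely the permission error mentioned in the edit above):

kubectl port-forward svc/society-api-gateway-service 8080:80
# in another terminal:
curl http://localhost:8080/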
I have configured my service to be built on port 8080.
My Docker image is also on 8080.
I set up my ReplicaSet with a configuration like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-backend-rs
spec:
  containers:
  - name: my-app-backend
    image: go-my-app-backend
    ports:
    - containerPort: 8080
    imagePullPolicy: Never
And finally I created a Service of type NodePort, also on port 8080, with the configuration below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-backend-rs
  name: my-app-backend-svc-nodeport
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-app-backend
And after running describe on the NodePort Service, I see that I should be able to hit my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.
Type:                     NodePort
IP:                       10.110.250.176
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31859/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
What am I not understanding and what am I doing wrong? Can anyone explain this to me?
From your output, I see the endpoint below was created. So it seems one pod is ready to serve this NodePort service, and the label is not an issue now.
Endpoints: 172.17.0.6:8080
First, ensure you can access the app by running curl http://podhostname:8080 once you are logged into the pod via kubectl exec -it podname sh (if curl is installed in the image that pod's container is running). If not, run a curl-capable ambassador container as a sidecar in that pod, and from there try to access http://<pod-ip>:8080 to confirm the app is working.
Remember you can't access the NodePort service via localhost unless you are running the command from one of the cluster's nodes (e.g. the master node).
You have to access this service using one of the methods below (example commands follow the list):
<CLUSTER-IP>:<PORT>, in your case 10.110.250.176:8080 (from inside the cluster)
<1st node's IP>:31859
<2nd node's IP>:31859
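For example (values taken from the describe output above; <node-ip> is a placeholder for one of your nodes' IPs):

# from inside the cluster:
curl http://10.110.250.176:8080
# from any machine that can reach the nodes:
curl http://<node-ip>:31859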
I tried to use curl after kubectl exec -it podname sh
In this very example the double dash is missing in front of the sh command.
Please note that the correct syntax can be checked anytime with kubectl exec -h, and it looks like:
kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
If you have only one container per Pod, it can be simplified to:
kubectl exec -it PODNAME -- COMMAND
The caveat of not specifying the container is that, in case of multiple containers in that Pod, you'll be connected to the first one :)
Example: kubectl exec -it pod/frontend-57gv5 -- curl localhost:80
I also tried to hit 10.110.250.176:80:31859, but I think this is incorrect. Sorry, I'm a beginner at network stuff.
Yes, that is not correct, as the :port value occurs twice. In that example you need to hit 10.110.250.176:8080 (10.110.250.176 is the Cluster IP and 8080 is the service port).
And after running describe on the NodePort Service, I see that I should be able to hit my app at http://127.0.0.1:31859, but I get no response.
It depends on where you are going to run that command.
In this very case it is not clear what exactly you have put into the ReplicaSet config (whether the Service's selector matches the ReplicaSet's labels), so let me explain how this is supposed to work.
Assuming we have the following ReplicaSet (the example below is a slightly modified version of the official documentation on the topic):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: guestbook
    tier: frontend-meta
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend-label
  template:
    metadata:
      labels:
        tier: frontend-label  ## shall match spec.selector.matchLabels.tier
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
And the following service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-svc-tier-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    tier: frontend-label  ## shall match labels from ReplicaSet spec
We can create the ReplicaSet (RS) and the Service. As a result, we shall be able to see the RS, Pods, Service and Endpoints:
kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
frontend-rs 2 2 2 10m php-redis gcr.io/google_samples/gb-frontend:v3 tier=frontend-label
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
frontend-rs-76sgd 1/1 Running 0 11m 10.12.0.31 gke-6v3n
frontend-rs-fxxq8 1/1 Running 0 11m 10.12.1.33 gke-m7z8
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
frontend-svc-tier-nodeport NodePort 10.0.5.10 <none> 80:32113/TCP 9m41s tier=frontend-label
kubectl get ep -o wide
NAME ENDPOINTS AGE
frontend-svc-tier-nodeport 10.12.0.31:80,10.12.1.33:80 10m
kubectl describe svc/frontend-svc-tier-nodeport
Selector:                 tier=frontend-label
Type:                     NodePort
IP:                       10.0.5.10
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32113/TCP
Endpoints:                10.12.0.31:80,10.12.1.33:80
An important thing we can see from my example is that PORT(S) was set to 80:32113/TCP for the service we have created.
That shall allow us to access the "gb-frontend:v3" app in a few different ways:
from inside the cluster: curl 10.0.5.10:80 (CLUSTER-IP:PORT) or curl frontend-svc-tier-nodeport:80
from an external network (internet): curl PUBLIC_IP:32113, where PUBLIC_IP is the IP at which you can reach a Node in your cluster; all the nodes in the cluster listen on the NodePort and forward requests according to the Service's selector
from the Node: curl localhost:32113
Hope that helps.
I have a simple Express.js server Dockerized and when I run it like:
docker run -p 3000:3000 mytag:my-build-id
http://localhost:3000/ responds just fine and also if I use the LAN IP of my workstation like http://10.44.103.60:3000/
Now if I deploy this to MicroK8s with a Service declaration like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
status:
  loadBalancer: {}
and pod specification like so (update 2019-11-05):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-service
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-service
    spec:
      containers:
      - image: mytag:my-build-id
        name: my-service
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
status: {}
and obtain the exposed NodePort via kubectl get services (32750 in my case), then try to visit it on the MicroK8s host machine like so:
curl http://127.0.0.1:32750
then the request just hangs and if I try to visit the LAN IP of the MicroK8s host from my workstation at
http://192.168.191.248:32750/
then the request is immediately refused.
But, if I try to port forward into the pod with
kubectl port-forward my-service-5db955f57f-q869q 3000:3000
then http://localhost:3000/ works just fine.
So the pod deployment seems to be working fine and example services like the microbot-service work just fine on that cluster.
I've made sure the Express.js server listens on all IPs with
app.listen(port, '0.0.0.0', () => ...
So what can be the issue?
You need to add a selector to your service; this tells Kubernetes how to find your deployment's pods. The targetPort also has to match the containerPort (3000 here). Additionally, you can use nodePort to pin the port number of your service. After doing that you will be able to curl your MicroK8s IP.
Your Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 3000
    nodePort: 30001
  selector:
    name: my-service
status:
  loadBalancer: {}
the LAN IP of the MicroK8s host from my workstation
That is the central source of your misunderstanding: localhost, 127.0.0.1, and your machine's LAN IP have nothing to do with what is apparently a virtual machine running MicroK8s (it would have been immensely valuable to state that in the question, rather than leaving us to deduce it from one buried sentence).
I've made sure the Express.js server listens on all IPs with
Based on what you reported later:
at http://192.168.191.248:32750/ then the request is immediately refused.
then it appears that your Express server is not, in fact, listening on all interfaces. That explains why you can successfully port-forward into the Pod (which makes traffic appear on the Pod's localhost) but not reach it from outside the Pod.
You can also test that theory by using another Pod inside the cluster to curl its Pod IP on port 3000 (in order to side-step the Service and thus NodePort parts)
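One way to do that, as a sketch (busybox is an arbitrary public image chosen here, and <pod-ip> is a placeholder for the IP shown by kubectl get pods -o wide):

kubectl get pods -o wide   # note the Pod IP
kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://<pod-ip>:3000/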
There is a small chance that you have misconfigured your Pod and Service relationship, but since you didn't post your PodSpec, and since the behavior you are describing sounds a lot more like an Express misconfiguration, we'll go with that until we have evidence to the contrary.
I'm using minikube to test Kubernetes on the latest macOS.
Here are my relevant YAMLs:
namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: micro
  labels:
    name: micro
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: adderservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: adderservice
    spec:
      containers:
      - name: adderservice
        image: jeromesoung/adderservice:0.0.1
        ports:
        - containerPort: 8080
service.yml
apiVersion: v1
kind: Service
metadata:
  name: adderservice
  labels:
    run: adderservice
spec:
  ports:
  - port: 8080
    name: main
    protocol: TCP
    targetPort: 8080
  selector:
    run: adderservice
  type: NodePort
After running minikube start, here are the steps I took to deploy:
kubectl create -f namespace.yml to create the namespace
kubectl config set-context minikube --namespace=micro
kubectl create -f deployment.yml
kubectl create -f service.yml
Then, I get the NodeIP and NodePort with the commands below:
kubectl get services to get the NodePort
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adderservice NodePort 10.99.155.255 <none> 8080:30981/TCP 21h
minikube ip to get the nodeIP
$ minikube ip
192.168.99.103
But when I do curl, I always get Connection Refused like this:
$ curl http://192.168.99.103:30981/add/1/2
curl: (7) Failed to connect to 192.168.99.103 port 30981: Connection refused
So I checked node, pod, deployment and endpoint as follows:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 23h v1.13.3
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
adderservice-5b567df95f-9rrln 1/1 Running 0 23h
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
adderservice 1 1 1 1 23h
$ kubectl get endpoints
NAME ENDPOINTS AGE
adderservice 172.17.0.5:8080 21h
I also checked the service list from minikube with:
$ minikube service -n micro adderservice --url
http://192.168.99.103:30981
I've read many posts regarding accessing k8s service via NodePorts. To my knowledge, I should be able to access the app with no problem. The only thing I suspect is that I'm using a custom namespace. Will this cause the access issue?
I know the namespace changes DNS names, so, to be complete, I also ran the commands below:
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.micro
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: kubernetes.micro
Address: 198.105.244.130
Name: kubernetes.micro
Address: 104.239.207.44
Could anyone help me out? Thank you.
The error Connection Refused mostly means that the application inside the container does not accept requests on the targeted interface, or is not mapped through the expected ports.
Things you need to be aware of:
Make sure that your application binds to 0.0.0.0, so it can receive requests from outside the container, whether from the outside world or from other containers.
Make sure that your application is actually listening on the containerPort and targetPort as expected.
In your case you have to make sure that ADDERSERVICE_SERVICE_HOST equals 0.0.0.0 and ADDERSERVICE_SERVICE_PORT equals 8080, which should be the same value as targetPort in service.yml and containerPort in deployment.yml.
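One way to verify what the process is actually bound to, assuming a shell and netstat are available in the image (if not, ss -lntp is a common alternative):

kubectl exec -it adderservice-5b567df95f-9rrln -- netstat -tlnp
# 0.0.0.0:8080 means all interfaces; 127.0.0.1:8080 means the app only accepts local connections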
This doesn't answer the question, but for anyone who googled their way here with the same issue, here is my solution to the same problem.
My Mac's system IP and the minikube IP are different.
So localhost:port didn't work; instead, get the IP with:
minikube ip
Then use that IP:Port to access the app, and it works.
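For instance, with the NodePort from the question above, that would be (a sketch):

curl http://$(minikube ip):30981/add/1/2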
Check if the service is really listening on 8080.
Try telnet from within the container:
telnet 127.0.0.1 8080
and then against the pod endpoint:
telnet 172.17.0.5 8080