minikube service %servicename% --url returns nothing - docker

I'm trying to expose my api so I can send requests to it. However, when I run the command minikube service api --url I get nothing. All my pods are running fine according to kubectl get pods, so I'm a bit stuck about what this could be.
api-1007925651-0rt1n       1/1       Running   0         26m
auth-1671920045-0f85w      1/1       Running   0         26m
blankit-app                1/1       Running   5         5d
logging-2525807854-2gfwz   1/1       Running   0         26m
mongo-1361605738-0fdq4     1/1       Running   0         26m
jwl:.build jakewlace$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
api          10.0.0.194   <none>        3001/TCP    23m
auth         10.0.0.36    <none>        3100/TCP    23m
kubernetes   10.0.0.1     <none>        443/TCP     5d
logging      10.0.0.118   <none>        3200/TCP    23m
mongo        10.0.0.132   <none>        27017/TCP   23m
jwl:.build jakewlace$
jwl:.build jakewlace$ minikube service api --url
jwl:.build jakewlace$
Any help would be massively appreciated, thank you.
I realise the question here could be perceived as minimal, but that is because I'm not sure what more information I could show; from the tutorials I've been following, it should just work. If you need more information, please let me know and I will add it.
EDIT:
api-service.yml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  ports:
  - name: "3001"
    port: 3001
    targetPort: 3001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
api-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
      - image: blankit/web:0.0.1
        name: api
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3001
        resources: {}
      restartPolicy: Always
status: {}

Your configuration is fine; it is just missing one thing.
There are many types of Services in Kubernetes, but in this case you should know about two of them:
ClusterIP Services:
Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default.
NodePort:
Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Note:
If you have a multi-node cluster and you've exposed a NodePort Service, you can access it from any node on the same port, not necessarily the node the pod is deployed onto.
So, getting back to your service, you should specify the service type in your spec:
kind: Service
apiVersion: v1
metadata:
  ...
spec:
  type: NodePort
  selector:
    ...
  ports:
  - protocol: TCP
    port: 3001
Now if you run minikube service api --url, it should return a URL like http://<NodeIP>:<NodePort>.
Note: By default Kubernetes will choose a random port from the 30000-32767 range, but you can override that if needed.
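For example, if you want a fixed, predictable port instead of a random one, you can set nodePort explicitly. A minimal sketch based on your api service (30001 is just an illustrative value inside the default range):
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  type: NodePort
  selector:
    io.kompose.service: api
  ports:
  - protocol: TCP
    port: 3001
    targetPort: 3001
    nodePort: 30001
After that, minikube service api --url should print something like http://192.168.99.100:30001 (the exact IP depends on your minikube VM).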
Useful references:
Kubernetes / Publishing services - service types
Kubernetes / Connect a Front End to a Back End Using a Service

Related

Minikube: Exposing REST API from Docker container

I currently have a Dockerised Spring Boot application with exposed Java REST APIs that I deploy onto my NUC (just a remote machine) and can connect to from my Mac via the NUC's static IP address. Both machines are on the same network.
I am now looking into hosting the Docker application in Kubernetes (Minikube)
(using this tutorial https://medium.com/bb-tutorials-and-thoughts/how-to-run-java-rest-api-on-minikube-4b564ea982cc).
I have used the Kompose tool from Kubernetes to convert my Docker compose files into Kubernetes deployments and services files. One of the services I'm trying to get working first simply opens up port 8080 and has a number of REST resources. Everything seems to be up and running, but I cannot access the REST resources from my Mac (or even the NUC itself) with a curl -v command.
After getting around a small issue with my Docker images (they needed to be built against Minikube's internal Docker image repo), I can successfully deploy my services and deployments. There are a number of others, but for the purposes of getting past this step, I'll just include the one:
$ kubectl get po -A
NAMESPACE         NAME                               READY   STATUS    RESTARTS   AGE
kube-system       coredns-f9fd979d6-hhgn8            1/1     Running   0          7h
kube-system       etcd-minikube                      1/1     Running   0          7h
kube-system       kube-apiserver-minikube            1/1     Running   0          7h
kube-system       kube-controller-manager-minikube   1/1     Running   0          7h
kube-system       kube-proxy-rszpv                   1/1     Running   0          7h
kube-system       kube-scheduler-minikube            1/1     Running   0          7h
kube-system       storage-provisioner                1/1     Running   0          7h
meanwhileinhell   apigw                              1/1     Running   0          6h54m
meanwhileinhell   apigw-75bc5z1f5j-cklxt             1/1     Running   0          6h54m
$ kubectl get service apigw
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
apigw   NodePort   10.107.116.239   <none>        8080:32327/TCP   6h53m
$ kubectl cluster-info
Kubernetes master is running at https://192.168.44.2:8443
KubeDNS is running at https://192.168.44.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
However, I cannot hit this master IP address, or any expected open port, using the static IP of my NUC. I have tried the LoadBalancer and NodePort service types for the service, but the former hangs on pending for the external IP.
I have played about a little with exposing ports and port forwarding but haven't been able to get anything working (port 7000 is just an arbitrary number):
kubectl port-forward apigw 7000:8080
kubectl expose deployment apigw --port=8080 --target-port=8080
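For completeness, this is roughly how I have been driving port-forward while debugging (the --address flag and port 7000 are just what I tried, and the path is a placeholder for one of my REST resources):
# forward local port 7000 on the NUC to port 8080 of the apigw pod,
# listening on all interfaces so that the Mac can reach it
kubectl port-forward pod/apigw 7000:8080 --address 0.0.0.0
# then, from the Mac:
curl -v http://<nuc-static-ip>:7000/<rest-resource>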
Here are my apigw deployment, service and pod YAML files:
apigw-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: apigw
  name: apigw
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: apigw
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.network/networkinhell: "true"
        io.kompose.service: apigw
    spec:
      containers:
      - image: meanwhileinhell/api-gw:latest
        name: apigw
        ports:
        - containerPort: 8080
        resources: {}
        imagePullPolicy: Never
        volumeMounts:
        - mountPath: /var/log/gateway
          name: combined-logs
      hostname: apigw
      restartPolicy: Always
      volumes:
      - name: combined-logs
        persistentVolumeClaim:
          claimName: combined-logs
status: {}
apigw-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: apigw
  labels:
    run: apigw
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: apigw
  type: NodePort
apigw-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: apigw
  name: apigw
spec:
  containers:
  - image: meanwhileinhell/api-gw:latest
    name: apigw
    imagePullPolicy: Never
    resources: {}
    ports:
    - containerPort: 8080
I'm using kubectl create -f to create the services.
Ubuntu 18.04.5 LTS
Minikube v1.15.0
KubeCtl v1.19.4

Can't see my app in the browser when deploying with Kubernetes

Hope you're all well!
I need to see my app in the browser, but I believe I'm missing something here and hope you can help me with this.
[root@kubernetes Docker]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/my-app2-56d5c786dd-n7mqq   1/1     Running   0          19m
pod/nginx-86c57db685-bxkpl     1/1     Running   0          13h

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        31h
service/my-app2      ClusterIP   10.101.108.199   <none>        8085/TCP       12m
service/nginx        NodePort    10.106.14.144    <none>        80:30525/TCP   13h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app2   1/1     1            1           19m
deployment.apps/nginx     1/1     1            1           13h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app2-56d5c786dd   1         1         1       19m
replicaset.apps/nginx-86c57db685     1         1         1       13h
Overall you can see that everything is working fine, right? It all looks fine to me.
To open this in my browser I'm using the IP address of the slave (worker) node where the container is allocated.
In my app I'm mapping the Hello endpoint like this: @RequestMapping("/Hello")
This is the Dockerfile I used to build my image:
[root@kubernetes project]# cat Dockerfile
FROM openjdk:8
COPY microservico-0.0.1-SNAPSHOT.jar microservico-0.0.1-SNAPSHOT.jar
#WORKDIR /usr/src/microservico-0.0.1-SNAPSHOT.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "microservico-0.0.1-SNAPSHOT.jar"]
So in the end, I think I need to call my app this way:
---> ip:8085/Hello
[root@kubernetes project]# telnet kubeslave 8085
Trying 192.168.***.***...
telnet: connect to address 192.168.***.***: Connection refused
but I still see nothing...
Here are my deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app2
  labels:
    app: app
spec:
  selector:
    matchLabels:
      app: app
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        role: master
        tier: backend
    spec:
      containers:
      - name: appcontainer
        image: *****this is ok*****:my-java-app
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 8085
---
apiVersion: v1
kind: Service
metadata:
  name: my-app2
  labels:
    app: app
    role: master
    tier: backend
spec:
  ports:
  - port: 8085
    targetPort: 8085
  selector:
    app: app
    role: master
    tier: backend
You have created a service of type ClusterIP (the default). This type of service is only accessible from inside the Kubernetes cluster. To access it from a browser you need to expose the pod via a LoadBalancer or NodePort service. LoadBalancer only works if you are on one of the supported public clouds; otherwise NodePort needs to be used (a sketch of the NodePort variant is shown after the links below).
https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
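A minimal sketch of what that change could look like for your my-app2 service (same selector as your existing service; the explicit nodePort value is optional and only illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-app2
  labels:
    app: app
    role: master
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 8085
    targetPort: 8085
    nodePort: 30085   # illustrative; omit it to let Kubernetes pick one from 30000-32767
  selector:
    app: app
    role: master
    tier: backend
After applying it, the app should be reachable at http://<worker-node-ip>:30085/Hello.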
Other than using a service, you can also use kubectl proxy to access it, for example:
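A sketch of that route, assuming the service lives in the default namespace (the path follows the standard API-server proxy format):
# start a local proxy to the API server
kubectl proxy --port=8001
# in another terminal, reach the ClusterIP service through the proxy
curl http://localhost:8001/api/v1/namespaces/default/services/my-app2:8085/proxy/Hello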
If you are on Minikube then follow this
https://kubernetes.io/docs/tutorials/hello-minikube/

Not able to access the service from the public IP in Kubernetes

I am using Kubernetes and running one service. The service is running and shows up in kubectl get service, but I am not able to access it from the public IP of the instance. Below is my deployment file.
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: mobingi/ubuntu-apache2-php7:7.2
        ports:
        - containerPort: 80
Here is my list of services:
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
apache-service   NodePort    10.106.242.181   <none>        80:31807/TCP   9m5s
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        11m
But when I check the same service with the following telnet command, using the public IP of the cluster and node, it is not responding.
telnet public-ip 31807
Any help would be appreciated.
What do you mean by cluster IP? Do you mean the node that acts as the Kubernetes master? It won't work if you use the master IP, because masters don't have deployments scheduled on them by default due to security concerns.
Exposing a service via NodePort means that the service listens on a particular port on each of the worker nodes, so you can access the Kubernetes worker nodes on that node port and get a response. However, if you created the cluster using a cloud provider like AWS, the worker nodes' security groups are locked down; you probably need to edit the security groups of the worker nodes to be able to reach the service.
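For example, once the security group allows the node port, something like this should get a response (the worker node IP is a placeholder; 31807 is the node port from your kubectl get service output):
curl -v http://<worker-node-public-ip>:31807/
telnet <worker-node-public-ip> 31807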

Why can't I access a ClusterIP service through its name?

I set up a simple Redis ClusterIP service to be accessed by a PHP LoadBalancer service inside the cluster. The PHP log shows a connection timeout error; the Redis service is not accessible.
'production'.ERROR: Operation timed out {"exception":"[object] (RedisException(code: 0):
Operation timed out at /var/www/html/vendor/laravel/framework/src/Illuminate/Redis
/html/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php(109):
Redis->connect('redis-svc', '6379', 0, '', 0, 0)
My redis service is quite simple so I don't know what went wrong:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        resources: {}
        ports:
        - containerPort: 6379
      restartPolicy: Always
status: {}
---
kind: Service
apiVersion: v1
metadata:
  name: redis-svc
spec:
  selector:
    app: redis
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
  type: ClusterIP
I verified that redis-svc is running, so why can't it be accessed by the other service?
kubectl get service redis-svc
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
redis-svc   ClusterIP   10.101.164.225   <none>        6379/TCP   22m
This SO question, kubernetes cannot ping another service, says ping doesn't work with a service's cluster IP (indeed). So how do I verify whether redis-svc can be accessed or not?
---- update ----
My first question was a silly mistake, but I still don't know how to verify whether the service can be accessed (by its name). For example, I changed the service name to be the same as the deployment name and found that PHP failed to access Redis again.
kubectl get endpoints did not help this time.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  ...
status: {}
---
kind: Service
apiVersion: v1
metadata:
  name: redis
  ...
My PHP app is another service, with an env var set to the Redis service's name:
spec:
  containers:
  - env:
    - name: REDIS_HOST # the php code accesses this variable
      value: redis-svc # changed to "redis" when the redis service name changed to "redis"
----- update 2------
The reason I can't set my Redis service name to "redis" is because "kubelet adds a set of environment variables for each active Service", so with the name "redis" there will be a REDIS_PORT=tcp://10.101.210.23:6379 which overwrites my own REDIS_PORT=6379.
But my PHP app expects the value of REDIS_PORT to be just 6379.
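For illustration, these are roughly the docker-link-style variables kubelet injects into my PHP pod for a service named redis on port 6379 (the address is the cluster IP from my case; you can list them with kubectl exec <php-pod> -- env | grep REDIS):
REDIS_SERVICE_HOST=10.101.210.23
REDIS_SERVICE_PORT=6379
REDIS_PORT=tcp://10.101.210.23:6379
REDIS_PORT_6379_TCP=tcp://10.101.210.23:6379
REDIS_PORT_6379_TCP_PROTO=tcp
REDIS_PORT_6379_TCP_PORT=6379
REDIS_PORT_6379_TCP_ADDR=10.101.210.23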
I ran the YAML configuration you provided and it created the deployment and service. However, when I run the commands below:
>>> kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    5d14h
redis-svc    ClusterIP   10.105.31.201   <none>        6379/TCP   109s
>>> kubectl get endpoints
NAME         ENDPOINTS             AGE
kubernetes   192.168.99.116:8443   5d14h
redis-svc    <none>                78s
As you can see, the endpoints for redis-svc are none, which means the service doesn't have an endpoint to connect to. You are using the selector label app: redis in redis-svc, but the pods don't carry the label the service selects on. Adding the label app: redis to the pod template will make it work. The complete working YAML configuration of the deployment looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: redis
        app: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        resources: {}
        ports:
        - containerPort: 6379
      restartPolicy: Always
status: {}
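To check that the service is now reachable by name, one quick way is to confirm the endpoints are populated and then ping Redis from a throwaway client pod (a sketch; the pod name redis-test is arbitrary):
# the endpoints column should no longer be <none>
kubectl get endpoints redis-svc
# run a one-off pod with redis-cli and ping the service by its DNS name
kubectl run redis-test --rm -it --image=redis:alpine --restart=Never -- redis-cli -h redis-svc ping
# expected reply: PONG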

Kubernetes NodePort doesn't return the response from the container

I've developed a containerized Flask application and I want to deploy it with Kubernetes. However, I can't connect the ports of the Container with the Service correctly.
Here is my Deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: <my-app-name>
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
      - name: <container-name>
        image: <container-image>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
          name: http-port
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  selector:
    app: flaskapp
  ports:
  - name: http
    protocol: TCP
    targetPort: 5000
    port: 5000
    nodePort: 30013
  type: NodePort
When I run kubectl get pods, everything seems to work fine:
NAME       READY   STATUS    RESTARTS   AGE
<pod-id>   1/1     Running   0          7m
When I run kubectl get services, I get the following:
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
<service-name>   NodePort   10.105.247.63   <none>        5000:30013/TCP
...
However, when I give the following URL to the browser: 10.105.247.63:30013, the browser keeps loading but never returns the data from the application.
Does anyone know where the problem could be? It seems that the service is not connected to the container's port.
30013 is the port on the node, not on the cluster IP. To get a reply you would have to connect to <IP-address-of-the-node>:30013. To get the list of nodes you can run:
kubectl get nodes -o=wide
You can also go through the CLUSTER-IP but you'll have to use the exposed port 5000: 10.105.247.63:5000
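Putting that together, a quick check could look like this (the node IP is a placeholder taken from the INTERNAL-IP column; keep in mind the CLUSTER-IP:5000 form is normally only reachable from inside the cluster):
kubectl get nodes -o wide
curl -v http://<node-internal-ip>:30013/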
