I have this docker-compose.yml and Docker Desktop for Windows with its local Kubernetes cluster (the default docker-desktop context) running:
version: '3'
services:
  web:
    image: customnode
    ports:
      - "3000:3000"
    labels:
      kompose.service.type: nodeport
  datastore:
    image: custommongo
    ports:
      - "27017:27017"
docker-compose up -d works perfectly and exposes my Node.js app on port 3000 on 127.0.0.1.
I'm trying to migrate that to my k8s cluster, so I'm following https://kompose.io/getting-started/
The page above says "If you don’t already have a Kubernetes cluster running, minikube is the best way to get started"... I am already running the OOTB Docker Desktop cluster, so I assume I don't need minikube.
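To confirm kubectl is actually pointed at that cluster (assuming the default context name, docker-desktop), I checked the current context first:
kubectl config current-context              # should print docker-desktop
kubectl config use-context docker-desktop   # switch to it if another context is active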
kompose convert
INFO Kubernetes file "datastore-service.yaml" created
INFO Kubernetes file "web-service.yaml" created
INFO Kubernetes file "datastore-deployment.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
Here are the web-deployment and web-service YAMLs:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.service.type: nodeport
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: web
    spec:
      containers:
        - image: customnode
          name: web
          ports:
            - containerPort: 3000
          resources: {}
      restartPolicy: Always
status: {}
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.service.type: nodeport
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
  selector:
    io.kompose.service: web
  type: NodePort
status:
  loadBalancer: {}
Finally, running kompose up:
kompose up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
INFO Deploying application in "default" namespace
INFO Successfully created Service: datastore
INFO Successfully created Service: web
INFO Successfully created Deployment: datastore
INFO Successfully created Deployment: web
Output of kubectl get svc:
kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
datastore    ClusterIP   10.103.***.***   <none>        27017/TCP        76s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          23m
web          NodePort    10.106.***.***   <none>        3000:32033/TCP   76s
As you can see, no external IP as I'd expect. I'm sure this is a lack of knowledge on my part, rather than a bug, so what am I missing?
An external IP is assigned only to LoadBalancer-type services, and a load-balancer controller must be installed in the cluster for a LoadBalancer service to work; otherwise a LoadBalancer service behaves exactly like a NodePort service. Most cloud providers have support for LoadBalancer services.
For NodePort-type services, the service binds to a random port in the node-port range on all the nodes. In your case you can see the service is bound to port 32033 (3000:32033/TCP).
The node-port range is configured as an argument to the Kubernetes API server with the option --service-node-port-range (30000-32767 by default). When you create a NodePort-type service, a random free port is picked from this range. If you want a specific port, you can set the nodePort attribute in the port object.
Eg:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.service.type: nodeport
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
      nodePort: 30002    # you can choose the node port here if needed
  selector:
    io.kompose.service: web
  type: NodePort         # change this line to LoadBalancer if you want an external IP
status:
  loadBalancer: {}
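On Docker Desktop the single node is your local machine and node ports are forwarded to localhost, so once the service has a node port (assigned or pinned as above) you should be able to reach the app with something like:
kubectl get svc web              # shows the assigned node port, e.g. 3000:32033/TCP
curl http://localhost:32033      # use whatever node port was assigned (or 30002 if pinned as above)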
Related
I am fairly new to Kubernetes and Docker in general and am experiencing issues.
I am running a single local Kubernetes cluster via Docker and am using skaffold to control the build up and teardown of objects within the cluster. When I run skaffold dev the build seems successful, yet when I attempt to make a request to my cluster via Postman the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere here. My request handling logic is simple and so I feel the issue is not in the route handling but the configuration of my cluster, specifically with the ingress controller. I will post below my skaffold yaml config and my ingress yaml config.
Any help is greatly appreciated, as I have struggled with this bug for some time.
ingress yaml config :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
Note that I have a redirect in my /etc/hosts file from ticketing.dev to 127.0.0.1
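For reference, that /etc/hosts entry is simply:
127.0.0.1 ticketing.dev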
Auth service yaml config :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: conorl47/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
skaffold yaml config :
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: conorl47/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
For installing the ingress-nginx controller I followed the installation instructions at https://kubernetes.github.io/ingress-nginx/deploy/, namely the Docker Desktop installation instructions.
After running that command I can see two new Docker containers running in Docker Desktop.
The two services created in the ingress-nginx namespace are :
❯ k get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.6.146   <pending>     80:30036/TCP,443:30465/TCP   22m
ingress-nginx-controller-admission   ClusterIP      10.108.8.26    <none>        443/TCP                      22m
When I kubectl describe both of these services I see the following :
❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.6.146
IPs: 10.103.6.146
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30036/TCP
Endpoints: 10.1.0.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30465/TCP
Endpoints: 10.1.0.10:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32485
Events: <none>
and :
❯ kubectl describe service ingress-nginx-controller-admission -n ingress-nginx
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.8.26
IPs: 10.108.8.26
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.1.0.10:8443
Session Affinity: None
Events: <none>
As it seems, you have made the ingress controller service of type LoadBalancer; this would normally provision an external load balancer from your cloud provider of choice. That's also why it's still pending: it's waiting for the load balancer to be ready, but that will never happen here.
If you want to have that ingress service reachable outside your cluster, you need to use type NodePort.
Since their docs are not great on this point and LoadBalancer appears to be the default in that manifest, you could download the content of https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml and modify it before applying. Or you use Helm, which lets you configure this.
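With Helm that would look roughly like this (assuming the official ingress-nginx chart; controller.service.type is the chart value that controls the service type):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort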
You could also do it in this dirty fashion.
kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \
| sed s/LoadBalancer/NodePort/g \
| kubectl apply -f -
You could also edit in place.
kubectl edit svc ingress-nginx-controller -n ingress-nginx
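In the editor, change the service type so the spec ends up with:
spec:
  type: NodePort
After saving, kubectl get svc -n ingress-nginx should show ingress-nginx-controller as a NodePort service with its two assigned ports.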
I created a RabbitMQ cluster inside Kubernetes and am trying to add a load balancer, but I can't get the load balancer External-IP; it stays pending.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    run: rabbitmq
spec:
  type: NodePort
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui
  selector:
    run: rabbitmq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      run: rabbitmq
  template:
    metadata:
      labels:
        run: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:latest
          imagePullPolicy: Always
And my load balancer service is below. In it I set:
nodePort to a random value,
port to the mqtt port number of the RabbitMQ service Kubernetes created,
targetPort to the UI port number of the RabbitMQ service Kubernetes created.
apiVersion: v1
kind: Service
metadata:
  name: loadbalanceservice
  labels:
    app: rabbitmq
spec:
  selector:
    app: rabbitmq
  type: LoadBalancer
  ports:
    - nodePort: 31022
      port: 30601
      targetPort: 31533
The LoadBalancer service type only works on cloud providers which support external load balancers. Setting the type field to LoadBalancer provisions a load balancer for your Service. It's pending because the environment you are in does not support LoadBalancer-type services. In a non-cloud environment the easier option is a NodePort-type service. Here is a guide on using NodePort to access a service from outside the cluster.
A LoadBalancer service doesn't work on bare-metal clusters; it will act as a NodePort service instead. You can use the nodeIP:nodePort combination to access your service from outside the cluster.
If you do want an external IP with a custom port combination to access your service, look into MetalLB, which implements support for LoadBalancer-type services on bare-metal clusters.
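As a rough sketch only: for MetalLB releases that are still configured through a ConfigMap (v0.12 and earlier; newer releases use IPAddressPool/L2Advertisement custom resources instead), a minimal layer 2 address pool looks like this, where the address range is a placeholder you must replace with unused IPs from your own network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range, use free IPs on your LAN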
I set up a simple Redis ClusterIP service to be accessed by a PHP LoadBalancer service inside the cluster. The PHP log shows a connection timeout error; the Redis service is not reachable.
'production'.ERROR: Operation timed out {"exception":"[object] (RedisException(code: 0):
Operation timed out at /var/www/html/vendor/laravel/framework/src/Illuminate/Redis
/html/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php(109):
Redis->connect('redis-svc', '6379', 0, '', 0, 0)
My redis service is quite simple so I don't know what went wrong:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: redis
    spec:
      containers:
        - image: redis:alpine
          name: redis
          resources: {}
          ports:
            - containerPort: 6379
      restartPolicy: Always
status: {}
---
kind: Service
apiVersion: v1
metadata:
  name: redis-svc
spec:
  selector:
    app: redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
  type: ClusterIP
I verified redis-svc is running, so why can't it be accessed by the other service?
kubectl get service redis-svc
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
redis-svc   ClusterIP   10.101.164.225   <none>        6379/TCP   22m
This SO question, kubernetes cannot ping another service, says ping doesn't work with a service's cluster IP (indeed), so how do I verify whether redis-svc can be accessed or not?
---- update ----
My first question was a silly mistake, but I still don't know how to verify whether the service can be accessed (by its name). For example, I changed the service name to be the same as the deployment name and found that PHP failed to access Redis again.
kubectl get endpoints does not help this time.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  ...
status: {}
---
kind: Service
apiVersion: v1
metadata:
  name: redis
  ...
My PHP app is another service, with an env var set to the Redis service's name:
spec:
  containers:
    - env:
        - name: REDIS_HOST # the php code access this variable
          value: redis-svc # changed to "redis" when redis service name changed to "redis"
----- update 2------
The reason I can't set my Redis service name to "redis" is because "kubelet adds a set of environment variables for each active Service", so with the name "redis" there will be a REDIS_PORT=tcp://10.101.210.23:6379 which overwrites my own REDIS_PORT=6379.
But my PHP app expects the value of REDIS_PORT to be just 6379.
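For illustration, the docker-link-style variables kubelet injects for a Service named redis on port 6379 look roughly like this (the IP being the service's cluster IP):
REDIS_SERVICE_HOST=10.101.210.23
REDIS_SERVICE_PORT=6379
REDIS_PORT=tcp://10.101.210.23:6379
REDIS_PORT_6379_TCP=tcp://10.101.210.23:6379
REDIS_PORT_6379_TCP_PROTO=tcp
REDIS_PORT_6379_TCP_PORT=6379
REDIS_PORT_6379_TCP_ADDR=10.101.210.23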
I ran the YAML configuration you provided and it created the deployment and service. However, when I run the commands below:
>>> kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    5d14h
redis-svc    ClusterIP   10.105.31.201   <none>        6379/TCP   109s
>>> kubectl get endpoints
NAME         ENDPOINTS             AGE
kubernetes   192.168.99.116:8443   5d14h
redis-svc    <none>                78s
As you can see, the endpoints for redis-svc are none, which means the service has no endpoint to connect to. You are using the selector label app: redis in redis-svc, but the pods don't have that label defined. Adding the label app: redis to the pod template will fix it. The complete working deployment YAML looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: redis
        app: redis
    spec:
      containers:
        - image: redis:alpine
          name: redis
          resources: {}
          ports:
            - containerPort: 6379
      restartPolicy: Always
status: {}
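To answer the "how do I verify" part: once the selector matches, kubectl get endpoints redis-svc should list a pod IP, and you can test the service by name from a throwaway pod, for example (the pod name redis-test is arbitrary):
kubectl get endpoints redis-svc
kubectl run redis-test --rm -it --image=redis:alpine -- redis-cli -h redis-svc -p 6379 ping   # should print PONG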
I've developed a containerized Flask application and I want to deploy it with Kubernetes. However, I can't connect the ports of the Container with the Service correctly.
Here is my Deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: <my-app-name>
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: <container-name>
          image: <container-image>
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              name: http-port
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  selector:
    app: flaskapp
  ports:
    - name: http
      protocol: TCP
      targetPort: 5000
      port: 5000
      nodePort: 30013
  type: NodePort
When I run kubectl get pods, everything seems to work fine:
NAME READY STATUS RESTARTS AGE
<pod-id> 1/1 Running 0 7m
When I run kubectl get services, I get the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
<service-name> NodePort 10.105.247.63 <none> 5000:30013/TCP
...
However, when I give the following URL to the browser: 10.105.247.63:30013, the browser keeps loading but never returns the data from the application.
Does anyone know where the problem could be? It seems that the service is not connected to the container's port.
30013 is the port on the node, not on the cluster IP. To get a reply you would have to connect to <IP-address-of-the-node>:30013. To get the list of nodes you can run:
kubectl get nodes -o=wide
You can also go through the CLUSTER-IP (from inside the cluster), but then you have to use the exposed port 5000: 10.105.247.63:5000
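For example (the node IP below is a placeholder you take from the kubectl get nodes output):
kubectl get nodes -o wide             # note the INTERNAL-IP / EXTERNAL-IP columns
curl http://<node-ip>:30013/          # nodePort pinned to 30013 in the Service above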
I'm trying to expose my api service so I can send requests to it. However, when I use the command minikube service api --url I get nothing. All my pods are running fine according to kubectl get pods, so I'm a bit stuck on what this could be.
api-1007925651-0rt1n 1/1 Running 0 26m
auth-1671920045-0f85w 1/1 Running 0 26m
blankit-app 1/1 Running 5 5d
logging-2525807854-2gfwz 1/1 Running 0 26m
mongo-1361605738-0fdq4 1/1 Running 0 26m
jwl:.build jakewlace$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api 10.0.0.194 <none> 3001/TCP 23m
auth 10.0.0.36 <none> 3100/TCP 23m
kubernetes 10.0.0.1 <none> 443/TCP 5d
logging 10.0.0.118 <none> 3200/TCP 23m
mongo 10.0.0.132 <none> 27017/TCP 23m
jwl:.build jakewlace$
jwl:.build jakewlace$ minikube service api --url
jwl:.build jakewlace$
Any help would be massively appreciated, thank you.
I realise that the question here could be perceived as minimal, but that is because I'm not sure what more information I could show; from the tutorials I've been following, it should just work. If you need more information, please let me know and I will add it.
EDIT:
api-service.yml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  ports:
    - name: "3001"
      port: 3001
      targetPort: 3001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
api-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
        - image: blankit/web:0.0.1
          name: api
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
          resources: {}
      restartPolicy: Always
status: {}
Your configuration is fine; it's just missing one thing.
There are many types of Services in Kubernetes, but in this case you should know about two of them:
ClusterIP Services:
Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default.
NodePort:
Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Note:
If you have a multi-node cluster and you've exposed a NodePort Service, you can access it from any other node on the same port, not necessarily the same node the pod is deployed onto.
So, getting back to your service, you should specify the service type in your spec:
kind: Service
apiVersion: v1
metadata:
  ...
spec:
  type: NodePort
  selector:
    ...
  ports:
    - protocol: TCP
      port: 3001
Now if you run minikube service api --url, it should return a URL like http://<NodeIP>:<NodePort>.
Note: By default Kubernetes will choose a random node port in the 30000-32767 range, but you can override that if needed.
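For example, to pin the node port instead of getting a random one (30001 here is just an arbitrary free port in that range), add nodePort to the port entry:
ports:
  - protocol: TCP
    port: 3001
    nodePort: 30001   # any free port in 30000-32767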
Useful references:
Kubernetes / Publishing services - service types
Kubernetes / Connect a Front End to a Back End Using a Service