Kubernetes Service External IP not being assigned - docker

I have the following deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentication
  labels:
    name: authentication
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authentication-deployment
  namespace: authentication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: authentication
    spec:
      containers:
      - name: authentication
        image: blueapp/authentication:0.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: authentication-deployment
    type: LoadBalancer
    externalName: authentication
I'm pretty new to Kubernetes, but my understanding of what I'm trying to do is: create a namespace, create a deployment of 2 pods in that namespace, and then create a load balancer to distribute traffic to those pods.
When I run
$ kubectl create -f deployment.yaml
everything creates fine, but then the service never gets assigned an external IP
Is there anything obvious that may be causing this?

Your service is of type NodePort.
To get a load balancer assigned to your service, you should use the LoadBalancer service type:
type: LoadBalancer
See documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
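For reference, a minimal sketch of a corrected Service is below. It assumes the selector should match the pod label app: authentication from the Deployment template (the selector in the original manifest points at name: authentication-deployment, which no pod carries), and it drops externalName, which only applies to ExternalName services:

apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: authentication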

External IPs get assigned only in supported cloud environments, provided that your cloud provider is configured correctly.
Observe the error messages in the kube-controller-manager logs when you create your service.
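If the external IP stays pending, two commands that usually surface the reason are sketched below; the component=kube-controller-manager label is an assumption that holds on kubeadm-style clusters where the controller manager runs as a static pod:

kubectl -n authentication describe service authentication-service   # check the Events section
kubectl -n kube-system logs -l component=kube-controller-manager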

Related

How to communicate Swagger UI and API in Kubernetes cluster using service names?

I have a node.js API running inside a Docker container in a pod on a Kubernetes cluster.
The pod is exposed by a Kubernetes service of type LoadBalancer, so I can connect to it from outside, and also from the Swagger UI (which runs as another Docker container on the same Kubernetes cluster) by passing it the API IP address http://<API IP address>:<port>/swagger.json.
But in my case I would like to call the API endpoints via Swagger UI using the service name, like this: api-service.default:<port>/swagger.json, instead of using an external API IP address.
For Swagger UI I'm using the latest version of the swaggerapi/swagger-ui Docker image from here: https://hub.docker.com/r/swaggerapi/swagger-ui
If I try to assign api-service.default:<port>/swagger.json to the Swagger UI container's environment variable, then Swagger UI fails with: Failed to load API definition
Which I guess is obvious because the browser does not recognize the internal cluster service name.
Is there any way to communicate Swagger UI and API in Kubernetes cluster using service names?
--- Additional notes ---
The Swagger UI CORS error is misleading in that case. I am using this API from many other services.
I have also tested the API CORS using cURL.
I assume that the swagger-ui container inside a pod can resolve that internal cluster service name, but the browser cannot, because the browser runs outside my Kubernetes cluster.
For my other web services that run in the browser (outside the cluster), are served by nginx, and also consume this API, I use the nginx reverse proxy mechanism:
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
This mechanism redirects my API requests invoked from the browser to the internal cluster service name api-service.default:8080, which works because nginx is running on the cluster while the browser is not.
Unfortunately, I don't know how to achieve this in the Swagger UI case.
Swagger manifest file:
# SERVICE
apiVersion: v1
kind: Service
metadata:
  name: swagger-service
  labels:
    kind: swagger-service
spec:
  selector:
    tier: api-documentation
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-deployment
  labels:
    kind: swagger-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: api-documentation
  template:
    metadata:
      labels:
        tier: api-documentation
    spec:
      containers:
        - name: swagger
          image: swaggerapi/swagger-ui
          imagePullPolicy: Always
          env:
            - name: URL
              value: 'http://api-service.default:8080/swagger.json'
API manifest file:
# SERVICE
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    kind: api-service
spec:
  selector:
    tier: backend
  ports:
    - protocol: 'TCP'
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    kind: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
        - name: api
          image: <my-api-image>:latest
I solved it by adding an nginx reverse proxy block to the /etc/nginx/nginx.conf file in the Swagger UI container, which redirects all requests ending with /swagger.json to the API service.
After changing this file you need to reload the nginx server: nginx -s reload
server {
    listen 8080;
    server_name localhost;
    index index.html index.htm;

    location /swagger.json {
        proxy_pass http://api-service.default:8080/swagger.json;
    }

    location / {
        absolute_redirect off;
        alias /usr/share/nginx/html/;
        expires 1d;

        location ~* \.(?:json|yml|yaml)$ {
            #SWAGGER_ROOT
            expires -1;
            include cors.conf;
        }

        include cors.conf;
    }
}
It is important to assign only /swagger.json to the URL env of the Swagger UI container. This is mandatory because requests must be routed through nginx in order to be resolved.
Swagger manifest
# SERVICE
apiVersion: v1
kind: Service
metadata:
  name: swagger-service
  labels:
    kind: swagger-service
spec:
  selector:
    tier: api-documentation
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-deployment
  labels:
    kind: swagger-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: api-documentation
  template:
    metadata:
      labels:
        tier: api-documentation
    spec:
      containers:
        - name: swagger
          image: swaggerapi/swagger-ui
          imagePullPolicy: Always
          env:
            - name: URL
              value: '/swagger.json'

Kubernetes load balancer External IP pending

I created a RabbitMQ cluster inside Kubernetes. I am trying to add a load balancer, but I can't get the load balancer External-IP; it is still pending.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    run: rabbitmq
spec:
  type: NodePort
  ports:
  - port: 5672
    protocol: TCP
    name: mqtt
  - port: 15672
    protocol: TCP
    name: ui
  selector:
    run: rabbitmq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      run: rabbitmq
  template:
    metadata:
      labels:
        run: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        imagePullPolicy: Always
And my load balancer is below. For the load balancer I set:
the nodePort to a random value,
the port to the MQTT port number of the RabbitMQ service Kubernetes created,
the targetPort to the UI port number of the RabbitMQ service Kubernetes created.
apiVersion: v1
kind: Service
metadata:
  name: loadbalanceservice
  labels:
    app: rabbitmq
spec:
  selector:
    app: rabbitmq
  type: LoadBalancer
  ports:
  - nodePort: 31022
    port: 30601
    targetPort: 31533
A service of type LoadBalancer only works on cloud providers which support external load balancers. Setting the type field to LoadBalancer provisions a load balancer for your Service. It's pending because the environment you are in does not support the LoadBalancer type of service. In a non-cloud environment an easier option would be to use a NodePort type service. Here is a guide on using NodePort to access a service from outside the cluster.
A LoadBalancer service doesn't work on bare-metal clusters. Your LoadBalancer service will also act as a NodePort service, so you can use the nodeIP:nodePort combination to access your service from outside the cluster.
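For example, with the manifest above (the node IP is a placeholder; kubectl shows the actual addresses and port mappings):

kubectl get svc loadbalanceservice   # the PORT(S) column shows port 30601 mapped to nodePort 31022
kubectl get nodes -o wide            # the INTERNAL-IP / EXTERNAL-IP columns show the node addresses
curl http://<node-ip>:31022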
If you do want an external IP with custom port combination to access your service, then look into metallb which implements support for LoadBalancer type services on bare metal clusters.
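If you go the MetalLB route, a minimal layer 2 address pool might look like the sketch below. It uses the legacy ConfigMap format (MetalLB releases before 0.13; newer releases configure this through IPAddressPool and L2Advertisement custom resources), and the address range is a placeholder to replace with free IPs from your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250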

Nginx ingress controller logs keeps telling me that i have wrong pod information

I am running two nodes in a Kubernetes cluster. I am able to deploy my microservice with 3 replicas, and its service. Now I am trying to use the nginx ingress controller to expose my service, but I am getting this error from the logs:
unexpected error obtaining pod information: unable to get POD information (missing POD_NAME or POD_NAMESPACE environment variable)
I have set a namespace of development in my cluster; that is where my microservice is deployed, and also the nginx controller. I do not understand how nginx picks up my pods, or how I am supposed to pass the pod name or pod namespace.
here is my nginx controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: mycha-deploy
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
and here my deployment:
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: us.gcr.io/##########/mycha-frontend_kubernetes_rrk8s
          ports:
            - containerPort: 80
thank you
Your nginx ingress controller deployment yaml looks incomplete; among many other items, it is missing the environment variables below.
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
Follow the installation docs and use yamls from here
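Once the controller is redeployed with those variables, one quick way to confirm they are set is sketched below; the pod name is a placeholder and the label matches the name: nginx-ingress label used in your deployment:

kubectl -n development get pods -l name=nginx-ingress
kubectl -n development exec <nginx-ingress-pod-name> -- env | grep POD_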
To expose your service using an Nginx Ingress, you need to configure it first.
Follow the installation guide for your Kubernetes installation.
You also need a service to 'group' the containers of your application.
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector
...
For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
As you can see, the service will discover your containers based on the label selector configured in your deployment.
To check the container's label selector: kubectl get pods -owide -l app=mycha-app
Service yaml
Apply the following yaml to create a service for your deployment:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
spec:
  selector:
    app: mycha-app    # <= This is the selector
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Check if the service is created with kubectl get svc.
Test the app using port-forwarding from your desktop at http://localhost:8080:
kubectl port-forward svc/mycha-service 8080:8080
nginx-ingress yaml
The last part is the nginx-ingress. Supposing your app has the url mycha-service.com and only the root '/' path:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-mycha-service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mycha-service.com    # <= app url
    http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service   # <= the service your ingress will use to send the requests
          servicePort: 8080            # <= must match the Service port defined above
Check the ingress: kubectl get ingress
NAME                    HOSTS               ADDRESS    PORTS   AGE
ingress-mycha-service   mycha-service.com   XX.X.X.X   80      63s
Now you are able to reach your application using the url mycha-service.com and the ADDRESS displayed by the command above.
I hope it helps =)

How Does Dynamic Service Discovery Work When Using Docker Compose Or Kubernetes?

Let's say I am creating a chat app with microservice architecture. I have 2 services:
Gateway service: responsible for user authentication (API endpoint /api/v1/users), and routing requests to appropriate service.
Messaging service: responsible for creating, retrieving, updating, and deleting messages (API endpoint /api/v1/messages).
If I use Docker Compose or Kubernetes, how does my gateway service know which service it should forward a request to when the request is sent to the /api/v1/messages API endpoint?
I used to write my own dynamic service discovery middleware (https://github.com/zicodeng/tahc-z/blob/master/servers/gateway/handlers/dsd.go). The idea is that I pre-register services with API endpoints they are responsible for. And my gateway service relies on request resource path to decide which service this request should be forwarded to. But how do you do this with Docker Compose or Kubernetes? Do I still need to keep my own version of dynamic service discovery middleware?
Thanks in advance!
If you are using Kubernetes, here are the high level steps:
Create your micro-service Deployments/Workloads using your docker images
Create Services pointing to these deployments
Create Ingress using Path Based rules pointing to the services
Here are sample manifest/yaml files (change docker images, ports, etc. as needed):
apiVersion: v1
kind: Service
metadata:
  name: svc-gateway
spec:
  ports:
  - port: 80
  selector:
    app: gateway
---
apiVersion: v1
kind: Service
metadata:
  name: svc-messaging
spec:
  ports:
  - port: 80
  selector:
    app: messaging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-gateway
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: gateway/image:v1.0
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-messaging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: messaging
    spec:
      containers:
      - name: messaging
        image: messaging/image:v1.0
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-for-chat-application
spec:
  rules:
  - host: chat.example.com
    http:
      paths:
      - backend:
          serviceName: svc-gateway
          servicePort: 80
        path: /api/v1/users
      - backend:
          serviceName: svc-messaging
          servicePort: 80
        path: /api/v1/messages
If you have other containers running in the same namespace and would like to communicate with these services you can directly use their service names.
For example:
curl http://svc-messaging or curl http://svc-gateway
You don't need to run your own service discovery; that's taken care of by Kubernetes!
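If the caller lives in a different namespace, the fully qualified service name works too. A sketch assuming the services are in the default namespace and the cluster uses the standard cluster.local DNS suffix:

curl http://svc-messaging.default.svc.cluster.local
curl http://svc-gateway.default.svc.cluster.local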

External endpoint of Kubernetes dashboard

I was just wondering how to manually set the external endpoint used by the Kubernetes web dashboard.
After creating the namespace kube-system, I ran the following:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Is there a flag I can use to specify which TCP port to use for external access? As far as I can tell it's just randomly assigning one. I've looked through the documentation but I'm having a hard time finding a solution. Any help would be appreciated.
You can specify the desired port as the nodePort in the yaml spec that you use to create the service. In this case, where the yaml file you linked to defines the service as:
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 9090
    selector:
      app: kubernetes-dashboard
You would want to define it as below, assuming your desired port number is 33333:
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 9090
      nodePort: 33333
    selector:
      app: kubernetes-dashboard
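One caveat: the default NodePort range is 30000-32767 (controlled by the kube-apiserver --service-node-port-range flag), so a value such as 33333 will be rejected unless that range has been widened; otherwise pick a port inside the default range. After applying the modified yaml, the dashboard is reachable on that port on any node, for example (the node IP and file name are placeholders):

kubectl apply -f kubernetes-dashboard.yaml
curl http://<node-ip>:33333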
