As we know, kube-proxy is used to proxy services so that they can be accessed from an external network via the apiserver. Does kube-proxy support proxying HTTPS services in Kubernetes, or is there any other solution that would let us access them via the apiserver?
You need to expose your HTTPS pods via a service of type NodePort; then you can access the HTTPS service via the defined port on any node in the cluster (master or worker), because kube-proxy will forward the requests to the pods backing the service. By default, NodePorts are allocated from the range 30000-32767.
Example configuration for an HTTPS service and deployment with nginx:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 443
      name: nginx
      targetPort: 443
      nodePort: 32756
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 443
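With this in place, the HTTPS service should be reachable through the chosen nodePort on any node, for example (where <node-ip> is a placeholder for one of your nodes' addresses, and -k is needed because the pod's certificate will not match that address):
curl -k https://<node-ip>:32756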
kube-proxy in iptables mode works at the IP layer (the network layer); it does not care whether the packet is HTTP or HTTPS.
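If you want to see this for yourself, you can inspect the NAT rules that kube-proxy programs on a node (KUBE-NODEPORTS is the chain kube-proxy's iptables mode uses for NodePort traffic):
sudo iptables -t nat -L KUBE-NODEPORTS -n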
I have a Node.js API running inside a Docker container in a pod on a Kubernetes cluster.
The pod is exposed through a Kubernetes service of type LoadBalancer, so I can connect to it from outside the cluster, and also from the Swagger UI, which runs as another Docker container on the same cluster, by passing it an external API address of the form http://<API IP address>:<port>/swagger.json.
But in my case I would like to call the API endpoints from the Swagger UI using the service name, like api-service.default:<port>/swagger.json, instead of the external API IP address.
For the Swagger UI I'm using the latest version of the swaggerapi/swagger-ui Docker image from here: https://hub.docker.com/r/swaggerapi/swagger-ui
If I try to assign api-service.default:<port>/swagger.json to the Swagger UI container's environment variable, the Swagger UI shows: Failed to load API definition
Which I guess is obvious, because the browser does not recognize the internal cluster service name.
Is there any way for the Swagger UI and the API to communicate within the Kubernetes cluster using service names?
--- Additional notes ---
The Swagger UI CORS error is misleading in that case. I am using this API from many other services.
I have also tested the API CORS using cURL.
I assume that the swagger-ui container inside the pod can resolve the internal cluster service name, but the browser cannot, because the browser runs outside my Kubernetes cluster.
On my other web services (running in the browser, outside the cluster) that are served by nginx and also consume this API, I use the nginx reverse proxy mechanism:
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
This mechanism redirects my API requests, invoked at the browser level, to the internal cluster service name api-service.default:8080. The point is that nginx runs on the cluster while the browser does not.
Unfortunately, I don't know how to achieve this in the Swagger UI case.
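One way to check that assumption is to resolve the service name from inside the swagger-ui pod (the pod name is a placeholder; curl may not be present in the image, in which case wget -qO- is an alternative on Alpine-based images):
kubectl exec -it <swagger-ui-pod> -- curl -s http://api-service.default:8080/swagger.json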
Swagger manifest file:
# SERVICE
apiVersion: v1
kind: Service
metadata:
  name: swagger-service
  labels:
    kind: swagger-service
spec:
  selector:
    tier: api-documentation
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-deployment
  labels:
    kind: swagger-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: api-documentation
  template:
    metadata:
      labels:
        tier: api-documentation
    spec:
      containers:
        - name: swagger
          image: swaggerapi/swagger-ui
          imagePullPolicy: Always
          env:
            - name: URL
              value: 'http://api-service.default:8080/swagger.json'
API manifest file:
# SERVICE
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    kind: api-service
spec:
  selector:
    tier: backend
  ports:
    - protocol: 'TCP'
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    kind: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
        - name: api
          image: <my-api-image>:latest
I solved it by adding an nginx reverse proxy rule to the /etc/nginx/nginx.conf file in the Swagger UI container, which redirects all requests ending with /swagger.json to the API service.
After changing this file you need to reload the nginx server: nginx -s reload
server {
    listen 8080;
    server_name localhost;
    index index.html index.htm;

    location /swagger.json {
        proxy_pass http://api-service.default:8080/swagger.json;
    }

    location / {
        absolute_redirect off;
        alias /usr/share/nginx/html/;
        expires 1d;

        location ~* \.(?:json|yml|yaml)$ {
            #SWAGGER_ROOT
            expires -1;
            include cors.conf;
        }

        include cors.conf;
    }
}
It is important to assign only /swagger.json to the URL environment variable of the Swagger UI container. This is mandatory, because the requests must be routed through nginx in order to be resolved.
Swagger manifest
# SERVICE
apiVersion: v1
kind: Service
metadata:
  name: swagger-service
  labels:
    kind: swagger-service
spec:
  selector:
    tier: api-documentation
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-deployment
  labels:
    kind: swagger-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: api-documentation
  template:
    metadata:
      labels:
        tier: api-documentation
    spec:
      containers:
        - name: swagger
          image: swaggerapi/swagger-ui
          imagePullPolicy: Always
          env:
            - name: URL
              value: '/swagger.json'
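Note on durability: changes made to /etc/nginx/nginx.conf inside a running container are lost when the pod is recreated. One way to persist them (a sketch; the ConfigMap name and the subPath mount are assumptions, not part of the original solution) is to ship the full nginx.conf via a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: swagger-nginx-conf  # hypothetical name
data:
  nginx.conf: |
    # ... the full nginx.conf, including the server block shown above ...
The swagger deployment would then mount it over the image's default config:
volumeMounts:
  - name: nginx-conf
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
volumes:
  - name: nginx-conf
    configMap:
      name: swagger-nginx-conf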
Here is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-task-tracker-deployment
spec:
  selector:
    matchLabels:
      app: my-task-tracker
  replicas: 5
  template:
    metadata:
      labels:
        app: my-task-tracker
    spec:
      containers:
        - name: hello-world
          image: shaikezam/task-tracker:1.0
          ports:
            - containerPort: 8080
              protocol: TCP
This is the service (NodePort):
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8085
      nodePort: 30001
      protocol: TCP
  selector:
    app: my-task-tracker
Now I try to access localhost:8085 or localhost:30001, and nothing happens.
I'm running Kubernetes in Docker Desktop.
Any suggestions as to what I'm doing wrong?
targetPort should be 8080 in the service YAML if that is the port your container listens on, as per your deployment YAML file.
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001
      protocol: TCP
  selector:
    app: my-task-tracker
port exposes the service on the specified port within the cluster; other pods in the cluster can reach the service on that port.
targetPort is the port the service forwards requests to, i.e. the port your pod is listening on. The application in the container must listen on this port.
nodePort exposes the service outside the cluster via each node's IP address and the given node port. If nodePort is not specified, Kubernetes allocates one automatically from the configured range (30000-32767 by default). You should be able to reach your application on the NodePort as well.
In your case targetPort should be 8080, since that is what the app actually listens on. You can change the cluster-internal port by changing the port field in the YAML, and the external port by changing the nodePort.
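Once targetPort matches the containerPort, a quick sanity check (assuming Docker Desktop, which publishes node ports on localhost):
# cluster-internal, e.g. from another pod:
curl http://my-task-tracker-service:8080
# from the host machine, via the NodePort:
curl http://localhost:30001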
I am new to Kubernetes and the NGINX Ingress tooling, and I am trying to expose a MySQL service through a virtual host in the NGINX Ingress on AWS. I have created a file something like:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: mysql
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - name: http
              containerPort: 3306
              protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysql
  labels:
    app: mysql
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: mysql.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mysql
              servicePort: 3306
My load balancer (created by the NGINX Ingress) port configuration looks like:
80 (TCP) forwarding to 32078 (TCP)
Stickiness options not available for TCP protocols
443 (TCP) forwarding to 31480 (TCP)
Stickiness options not available for TCP protocols
mysql.example.com is pointing to my ELB.
I was expecting that from my local box I could connect to MySQL with something like:
mysql -h mysql.example.com -u root -P 80 -p
This is not working out. If I use LoadBalancer instead of NodePort, a new ELB is created for me, and that works as expected.
I am not sure whether this is the right approach for what I want to achieve here. Please help me out if there is a way to achieve the same using the Ingress with NodePort.
Kubernetes Ingress, as a generic concept, does not solve the problem of exposing/routing TCP/UDP services. As stated in https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md, you should use custom ConfigMaps if you want that with ingress-nginx. And please mind that it will never use the hostname for routing, as that is a feature of HTTP, not TCP.
I succeeded in accessing MariaDB/MySQL hosted on Google Kubernetes Engine through ingress-nginx, using the hostname specified in the ingress created for the database's ClusterIP.
As per the docs, simply create the ConfigMap and expose the port in the Service defined for the Ingress.
This helped me figure out how to set the --tcp-services-configmap and --udp-services-configmap flags.
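For the MySQL service from the question, the ConfigMap could look like the sketch below. It assumes the ingress-nginx controller runs in the ingress-nginx namespace and is started with --tcp-services-configmap=ingress-nginx/tcp-services:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "3306": "default/mysql:3306"
Port 3306 then also has to be exposed on the ingress-nginx controller's own Service (and thus on the ELB); since hostname-based routing is not possible for plain TCP, you would connect with mysql -h <elb-address> -P 3306 rather than through a virtual host.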
I'm a complete beginner with Kubernetes and I'm trying to figure out how to split my monolithic application into different microservices.
Let's say I'm writing my microservices in Flask and each of them exposes some endpoints, like:
Micro service 1:
/v1/user-accounts
Micro service 2:
/v1/savings
Micro service 3:
/v1/auth
If all of them were running as blueprints in a monolithic application, they would all be prefixed with the same IP, namely the IP of the host server my application is running on, e.g. 10.12.234.69:
http://10.12.234.69:5000/v1/user-accounts
Now, deploying those 3 "blueprints" to 3 different pods/nodes in Kubernetes changes the IP address of each endpoint: one may end up at 10.12.234.69, another at 10.12.234.70 or 10.12.234.75.
How can I write an application that keeps the URL reference constant even when the IP address changes?
Would a Load Balancer Service do the trick?
Maybe the Service Registry feature of Kubernetes does the "DNS" part for me?
I know it may sound like a very obvious question, but I still cannot find any reference/example for this simple problem.
Thanks in advance!
EDIT (as a follow-up to Simon's answer):
Questions:
Given that the Ingress spawns a load balancer and makes all routes reachable under the load balancer's IP plus a path (http://<ADDRESS>/v1/savings), how can I associate an IP with the load balancer so that it matches the IP of the pod the Flask web server is running on?
If I add further sub-routes under the same paths, like /v1/savings/get and /v1/savings/get/id/<var_id>, should I add all of them to the Ingress HTTP paths in order for them to be reachable through the load balancer?
A load balancer is what you are looking for.
Kubernetes services will make your pods accessible under a given hostname cluster-internally.
If you want to make your services accessible from outside the cluster under a single IP and different paths, you can use a load balancer and Kubernetes HTTP Ingresses. They define under which domain and path a service should be mapped and can be fetched by a load balancer to build its configuration.
Example based on your micro service architecture:
Mocking applications
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-accounts
spec:
  template:
    metadata:
      labels:
        app: user-accounts
    spec:
      containers:
        - name: server
          image: nginx
          ports:
            - containerPort: 80
          args:
            - /bin/bash
            - "-c"
            - echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: savings
spec:
  template:
    metadata:
      labels:
        app: savings
    spec:
      containers:
        - name: server
          image: nginx
          ports:
            - containerPort: 80
          command:
            - /bin/bash
            - "-c"
            - echo 'server { location /v1/savings { return 200 "savings"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: server
          image: nginx
          ports:
            - containerPort: 80
          command:
            - /bin/bash
            - "-c"
            - echo 'server { location /v1/auth { return 200 "auth"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
These deployments represent your microservices and simply return their name via HTTP under /v1/<name>.
Mapping applications to services
---
kind: Service
apiVersion: v1
metadata:
  name: user-accounts
spec:
  type: NodePort
  selector:
    app: user-accounts
  ports:
    - protocol: TCP
      port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: savings
spec:
  type: NodePort
  selector:
    app: savings
  ports:
    - protocol: TCP
      port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: auth
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - protocol: TCP
      port: 80
These services create an internal IP and a domain resolving to it based on their names, mapping them to the pods found by a given selector. Applications running in the same cluster namespace will be able to reach them under user-accounts, savings and auth.
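For example, you can verify the cluster-internal name resolution from any pod in the same namespace (the pod name is a placeholder):
kubectl exec -it <some-pod> -- curl http://savings/v1/savings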
Making services reachable via load balancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - http:
        paths:
          - path: /v1/user-accounts
            backend:
              serviceName: user-accounts
              servicePort: 80
          - path: /v1/savings
            backend:
              serviceName: savings
              servicePort: 80
          - path: /v1/auth
            backend:
              serviceName: auth
              servicePort: 80
This Ingress defines under which paths the different services should be reachable. Verify your Ingress via kubectl get ingress:
# kubectl get ingress
NAME      HOSTS     ADDRESS     PORTS     AGE
example   *                     80        1m
If you are running on Google Container Engine, there is an Ingress controller running in your cluster which will spawn a Google Cloud Load Balancer when you create a new Ingress object. Under the ADDRESS column of the above output, there will be an IP displayed under which you can access your applications:
# curl http://<ADDRESS>/v1/user-accounts
user-accounts⏎
# curl http://<ADDRESS>/v1/savings
savings⏎
# curl http://<ADDRESS>/v1/auth
auth⏎
I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create 2 services in front of Jenkins.
One service of type NodePort for port 8080 (it gets mapped to a random node port, so I can access it from outside the cluster; I can also access it inside the cluster via ClusterIP:8080). All fine.
My second service is there so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of this service.
I read about the 3 types of services:
ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
NodePort: not necessary, since port 50000 does not need to be exposed outside the cluster.
LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet to the ClusterIP:port of the jenkins-discovery service and got connection refused. I can telnet to the ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was in the selectors, a part that wasn't that clear to me: the two services were using different selectors (app: jenkins vs. app: master), so jenkins-discovery matched no pods, which caused the issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
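A quick way to catch this kind of selector mismatch is to check whether the service has any endpoints; an empty ENDPOINTS column means the selector matches no pods:
kubectl get endpoints jenkins-discovery -n ci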