I am new to Kubernetes and the Nginx Ingress tooling, and I am trying to host a MySQL service using a VHost in Nginx Ingress on AWS. I have created a file something like:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - name: http
          containerPort: 3306
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysql
  labels:
    app: mysql
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mysql.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysql
          servicePort: 3306
My LoadBalancer (created by the Nginx Ingress) port configuration looks like:
80 (TCP) forwarding to 32078 (TCP)
Stickiness options not available for TCP protocols
443 (TCP) forwarding to 31480 (TCP)
Stickiness options not available for TCP protocols
mysql.example.com is pointing to my ELB.
I was expecting that from my local box I could connect to MySQL with something like:
mysql -h mysql.example.com -u root -P 80 -p
This does not work. If I use LoadBalancer instead of NodePort, it creates a new ELB for me, which works as expected.
I am not sure if this is the right approach for what I want to achieve here. Please help me out if there is a way of achieving the same using the Ingress with NodePort.
Kubernetes Ingress as a generic concept does not solve the issue of exposing/routing TCP/UDP services. As stated in https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md, you should use custom ConfigMaps if you want that with ingress-nginx. And please note that it will never use the hostname for routing, as that is a feature of HTTP, not TCP.
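As an illustration, a minimal tcp-services ConfigMap for the setup in the question might look like this; the ingress-nginx namespace and the default namespace for the mysql Service are assumptions, not taken from the question:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # assumed namespace of the ingress-nginx controller
data:
  # external port 3306 -> Service "mysql" in namespace "default", port 3306
  "3306": default/mysql:3306
```

With this in place you would connect with mysql -h mysql.example.com -P 3306 rather than through port 80; the hostname only has to resolve to the load balancer, it plays no role in the TCP routing.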
I succeeded in accessing MariaDB/MySQL hosted on Google Kubernetes Engine through ingress-nginx, using the hostname specified in the Ingress created for the database's ClusterIP service.
As per the docs, simply create the ConfigMap and expose the port in the Service defined for the Ingress.
This helped me to figure how to set --tcp-services-configmap and --udp-services-configmap flags.
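For reference, the two pieces that have to change are the controller's arguments and its Service. This is a sketch with assumed names (ingress-nginx namespace, a ConfigMap called tcp-services), not a complete manifest:

```yaml
# Excerpt from the ingress-nginx controller Deployment:
# point the controller at the ConfigMap holding the TCP port mappings.
args:
- /nginx-ingress-controller
- --tcp-services-configmap=ingress-nginx/tcp-services
---
# Excerpt from the controller's Service: the TCP port must also be exposed here.
ports:
- name: mysql
  port: 3306
  targetPort: 3306
  protocol: TCP
```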
I have a Docker container with MariaDB running in MicroK8s (running on a single Unix machine).
# Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:latest
        env:
        - name: MARIADB_ROOT_PASSWORD
          value: sa
        ports:
        - containerPort: 3306
These are the logs:
(...)
2021-09-30 6:09:59 0 [Note] mysqld: ready for connections.
Version: '10.6.4-MariaDB-1:10.6.4+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
Now,
connecting to port 3306 on the machine does not work.
connecting after exposing the pod with a service (any type) on port 8081 also does not work.
How can I get the connection through?
The answer was given in the comments section, but to clarify I am posting the solution here as a Community Wiki.
In this case the connection problem was resolved by fixing the selector so that the Service actually targets the Pods.
The .spec.selector field defines how the Deployment finds which Pods to manage. In this case, you select a label that is defined in the Pod template (here, app: mariadb).
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
You need to use a Service whose selector matches the Pod labels.
Example Service:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  type: ClusterIP
You can use the service name to connect, or else change the service type to LoadBalancer to expose it with an IP:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  type: LoadBalancer
Currently, I'm using Docker Desktop with WSL2 integration. I found that Docker Desktop automatically created a cluster for me, which means I don't have to install Minikube or kind to create one.
The problem is: how can I enable an Ingress controller if I use the "built-in" cluster from Docker Desktop?
I tried to create an Ingress to check whether this works, but as I guessed, it didn't.
The YAML file I created is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  minReadySeconds: 30
  selector:
    matchLabels:
      app: webapp
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nodejs-helloworld:v1
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
  - name: http
    port: 3000
    nodePort: 30090 # NodePorts must be >= 30000 by default
  type: NodePort # ClusterIP inside the cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  defaultBackend:
    service:
      name: webapp-service
      port:
        number: 3000
  rules:
  - host: ingress.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 3000
I tried to access ingress.local/ but was not successful. (I added ingress.local pointing to 127.0.0.1 in the hosts file, and the webapp worked fine at kubernetes.docker.internal:30090.)
Could you please help me to know the root cause?
Thank you.
Finally I found the way to fix it. I had to deploy ingress-nginx with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
(Following the instructions at https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac; it works just fine for Docker for Windows.)
Now I can access http://ingress.local successfully.
You have to install an ingress-nginx controller on your cluster, so that your nodes will have ports 80/443 open.
Using helm (v3 - see documentation):
helm install --namespace kube-system nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
Using kubectl (see documentation):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml
Then manually add your Ingresses' hostnames to /etc/hosts:
127.0.0.1 ingress.local
127.0.0.1 my.other.service.local
# ...
Then if you make a request to http://ingress.local:
the DNS resolution will route to your cluster node
then the ingress controller will serve the request on port 80
then ingress will route the request to the configured backend service
and the service will route to an available pod
The newest version of Docker Desktop for Windows already adds a hosts file entry: 127.0.0.1 kubernetes.docker.internal.
You have to use kubernetes.docker.internal as the hostname in the Ingress definition if you want to point to 127.0.0.1. This should be in the docs on the page kubernetes.github.io/ingress-nginx/deploy, but there is no Docker Desktop for Windows section there.
Your files should look like this:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - name: http
    protocol: TCP
    port: 3000
    nodePort: 30090
Your Ingress file should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: kubernetes.docker.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-service
          servicePort: http
Then you are able to connect to the app using http://kubernetes.docker.internal/.
Example you can see here: wsl2-docker-for-desktop.
I followed the Docker Desktop section of the docs to install the ingress-nginx controller:
https://kubernetes.github.io/ingress-nginx/deploy/#docker-desktop
curl http://kubernetes.docker.internal/
Of course, I haven't installed any workload yet, but the default ingress controller works just fine.
With Kustomize you can simply use:
helmCharts:
- name: ingress-nginx
  releaseName: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
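For context, a complete kustomization.yaml around that snippet might look as follows; the pinned chart version and the namespace are assumptions, and the helmCharts field requires building with the --enable-helm flag:

```yaml
# kustomization.yaml (build with: kustomize build --enable-helm .)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
helmCharts:
- name: ingress-nginx
  releaseName: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
  version: 4.4.2  # assumed; pin whichever chart version you need
```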
This is just to point out that Amel Mahmuzić's comment is still valid with a recent ingress deployment (I used the ingress-nginx Helm chart 4.4.2).
I could not get this to work for far too long (I tried to follow the Strapi foodadvisor example with Docker Desktop's built-in Kubernetes instead of minikube) and always received a 404 from the ingress.
However, after using this yaml with the added annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: foodadvisor.backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foodadvisor-backend
            port:
              number: 1337
  - host: foodadvisor.client
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foodadvisor-frontend
            port:
              number: 3000
it worked immediately. The K8s docs mention that this annotation is deprecated.
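Since the annotation is deprecated, the same Ingress can be written with the spec.ingressClassName field instead. This sketch assumes the installed controller registered the class name nginx (only the first host rule is repeated here):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx  # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: foodadvisor.backend
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foodadvisor-backend
            port:
              number: 1337
```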
Here is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-task-tracker-deployment
spec:
  selector:
    matchLabels:
      app: my-task-tracker
  replicas: 5
  template:
    metadata:
      labels:
        app: my-task-tracker
    spec:
      containers:
      - name: hello-world
        image: shaikezam/task-tracker:1.0
        ports:
        - containerPort: 8080
          protocol: TCP
This is the service (NodePort):
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8085
    nodePort: 30001
    protocol: TCP
  selector:
    app: my-task-tracker
Now, I try to access localhost:8085 or localhost:30001, and nothing happens.
I'm running K8s in Docker Desktop.
Any suggestion as to what I'm doing wrong?
The target port should be 8080 in the service YAML if that is what your container runs on, as per your deployment YAML file.
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: my-task-tracker
port exposes the Kubernetes Service on the specified port within the cluster. Other pods within the cluster can communicate with the service on this port.
targetPort is the port the Service sends requests to, and the one your pod listens on. The application in the container must be listening on this port as well.
nodePort exposes the Service externally to the cluster by means of the target node's IP address and the nodePort. If targetPort is not specified, it defaults to the same value as port. You should be able to reach your application on the nodePort as well.
In your case targetPort should be 8080; that is what matters for the app to be reachable. You can change the port the app is reached on inside the cluster by changing the port field, and externally by changing the nodePort.
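As a summary, the three fields in the corrected Service line up like this (values taken from the question):

```yaml
ports:
- port: 8080        # cluster-internal: other pods reach the Service at <service-name>:8080
  targetPort: 8080  # where the Service forwards traffic; must match the containerPort
  nodePort: 30001   # external: reachable at <node-ip>:30001 from outside the cluster
```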
I'm a super beginner with Kubernetes and I'm trying to imagine how to split my monolithic application into different microservices.
Let's say I'm writing my microservices application in Flask, and each of them exposes some endpoints like:
Micro service 1:
/v1/user-accounts
Micro service 2:
/v1/savings
Micro service 3:
/v1/auth
If all of them were running as blueprints in a monolithic application, they would all be prefixed with the same IP, that is, the IP of the host server my application is running on, like 10.12.234.69, e.g.
http://10.12.234.69:5000/v1/user-accounts
Now, deploying those 3 "blueprints" on 3 different Pods/Nodes in Kubernetes will change the IP address of each endpoint, to maybe 10.12.234.69, then 10.12.234.70 or 10.12.234.75.
How can i write an application that keep the URL reference constant even if the IP address changes?
Would a Load Balancer Service do the trick?
Maybe the Service Registry feature of Kubernetes does the "DNS" part for me?
I know it may sound like a very obvious question, but I still cannot find any reference/example for this simple problem.
Thanks in advance!
EDIT (as a follow-up to Simon's answer):
questions:
given that the Ingress spawns a load balancer and all the routes are reachable from the HTTP path prefixed by the load balancer's IP (http://<ADDRESS>/v1/savings), how can I associate the load balancer's IP with the IP of the pod the Flask web server is running on?
in case I add other subroutes to the same paths, like /v1/savings/get and /v1/savings/get/id/<var_id>, should I update all of them in the Ingress HTTP paths in order for them to be reachable through the load balancer?
A load balancer is what you are looking for.
Kubernetes services will make your pods accessible under a given hostname cluster-internally.
If you want to make your services accessible from outside the cluster under a single IP and different paths, you can use a load balancer and Kubernetes HTTP Ingresses. They define under which domain and path a service should be mapped and can be fetched by a load balancer to build its configuration.
Example based on your micro service architecture:
Mocking applications
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-accounts
spec:
  template:
    metadata:
      labels:
        app: user-accounts
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        args:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: savings
spec:
  template:
    metadata:
      labels:
        app: savings
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/savings { return 200 "savings"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: server
        image: nginx
        ports:
        - containerPort: 80
        command:
        - /bin/bash
        - "-c"
        - echo 'server { location /v1/auth { return 200 "auth"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
These deployments represent your services and just return their name via HTTP under /v1/name.
Mapping applications to services
---
kind: Service
apiVersion: v1
metadata:
  name: user-accounts
spec:
  type: NodePort
  selector:
    app: user-accounts
  ports:
  - protocol: TCP
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: savings
spec:
  type: NodePort
  selector:
    app: savings
  ports:
  - protocol: TCP
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: auth
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - protocol: TCP
    port: 80
These services create an internal IP and a DNS name resolving to it, based on their names, mapping them to the pods found by the given selector. Applications running in the same cluster namespace will be able to reach them under user-accounts, savings and auth.
Making services reachable via load balancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - http:
      paths:
      - path: /v1/user-accounts
        backend:
          serviceName: user-accounts
          servicePort: 80
      - path: /v1/savings
        backend:
          serviceName: savings
          servicePort: 80
      - path: /v1/auth
        backend:
          serviceName: auth
          servicePort: 80
This Ingress defines under which paths the different services should be reachable. Verify your Ingress via kubectl get ingress:
# kubectl get ingress
NAME      HOSTS     ADDRESS   PORTS     AGE
example   *                   80        1m
If you are running on Google Container Engine, there is an Ingress controller running in your cluster which will spawn a Google Cloud Load Balancer when you create a new Ingress object. Under the ADDRESS column of the above output, there will be an IP displayed under which you can access your applications:
# curl http://<ADDRESS>/v1/user-accounts
user-accounts⏎
# curl http://<ADDRESS>/v1/savings
savings⏎
# curl http://<ADDRESS>/v1/auth
auth⏎
As we know, kube-proxy is used to proxy services so that they can be accessed from an external network via the apiserver. Does kube-proxy support proxying an HTTPS service in k8s, or is there any other solution so that we can access it via the apiserver?
You need to expose your HTTPS pods via a Service of type NodePort; then you can access HTTPS via the defined port on any node in the cluster (master or worker), because kube-proxy will forward the requests to the pods that are part of the Service. NodePorts are in the range 30000-32767 by default.
Example configuration for an HTTPS service and deployment with nginx:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 443
    name: nginx
    targetPort: 443
    nodePort: 32756
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 443
kube-proxy's iptables mode works at the IP (network) layer; it does not care whether the packet is HTTP or HTTPS.