I have a Docker container with MariaDB running in Microk8s (running on a single Unix machine).
# Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:latest
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: sa
          ports:
            - containerPort: 3306
These are the logs:
(...)
2021-09-30 6:09:59 0 [Note] mysqld: ready for connections.
Version: '10.6.4-MariaDB-1:10.6.4+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
Now:
- connecting to port 3306 on the machine does not work;
- connecting after exposing the pod with a Service (any type) on port 8081 also does not work.
How can I get the connection through?
The answer was given in the comments section, but to clarify I am posting the solution here as a Community Wiki.
In this case the connection problem was resolved by setting spec.selector correctly.
The .spec.selector field defines how the Deployment finds which Pods to manage. You select a label that is defined in the Pod template (here app: mariadb).
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
You need to use a Service whose selector matches that label. Example Service:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: ClusterIP
You can use the service name to connect from inside the cluster, or change the service type to LoadBalancer to expose it with an external IP:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  type: LoadBalancer
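Note that on a single-machine MicroK8s install a LoadBalancer Service usually stays in Pending unless a load-balancer implementation such as the metallb addon is enabled, so a NodePort Service is often the simpler way to reach the database from the host. A minimal sketch, assuming the same app: mariadb label; the nodePort value 30306 is just an illustrative pick from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb          # must match the Pod template labels of the Deployment
  ports:
    - protocol: TCP
      port: 3306          # port of the Service inside the cluster
      targetPort: 3306    # containerPort of the MariaDB container
      nodePort: 30306     # illustrative; the database becomes reachable at <node-ip>:30306
  type: NodePort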
Here is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-task-tracker-deployment
spec:
  selector:
    matchLabels:
      app: my-task-tracker
  replicas: 5
  template:
    metadata:
      labels:
        app: my-task-tracker
    spec:
      containers:
        - name: hello-world
          image: shaikezam/task-tracker:1.0
          ports:
            - containerPort: 8080
              protocol: TCP
This is the service (NodePort):
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8085
      nodePort: 30001
      protocol: TCP
  selector:
    app: my-task-tracker
Now I try to access localhost:8085 or localhost:30001, and nothing happens.
I'm running K8s in Docker Desktop.
Any suggestion what I'm doing wrong?
The target port should be 8080 in the Service YAML, since that is the port your container listens on according to your Deployment YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001
      protocol: TCP
  selector:
    app: my-task-tracker
port exposes the Kubernetes Service on the specified port within the cluster; other pods in the cluster can talk to the Service on that port.
targetPort is the port the Service forwards requests to, i.e. the port your pod is listening on; the application inside the container must actually listen on this port.
nodePort exposes the Service outside the cluster via the node's IP address and the node port. If nodePort is not specified, Kubernetes assigns one automatically from the node-port range (30000-32767 by default).
In your case targetPort must be 8080, because that is what the app needs in order to work; you can still expose the app on 8085 inside the cluster by changing the port field, and on a different external port by changing nodePort.
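For reference, here is the corrected Service again with a comment on each of the three port fields (the comments are explanatory only, the values are unchanged):

apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  selector:
    app: my-task-tracker
  ports:
    - port: 8080        # in-cluster port: other pods use my-task-tracker-service:8080
      targetPort: 8080  # port the container listens on (matches containerPort in the Deployment)
      nodePort: 30001   # port opened on every node: reachable externally as <node-ip>:30001
      protocol: TCP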
I'm setting up a Presto cluster with 1 coordinator and 1 worker. I have used the same custom images with plain Docker and it worked.
However, when I move to Kubernetes I get an error in the worker when Presto initialises:
ERROR main com.facebook.presto.server.PrestoServer
Unable to create injector, see the following errors:
1) Error in custom provider, java.lang.NullPointerException
at com.facebook.airlift.discovery.client.DiscoveryBinder.bindServiceAnnouncement(DiscoveryBinder.java:79)
while locating com.facebook.airlift.discovery.client.ServiceAnnouncement annotated with #com.google.inject.internal.Element(setName=,uniqueId=146, type=MULTIBINDER, keyType=)
while locating java.util.Set
3) Error injecting constructor, java.io.UncheckedIOException: Failed to bind to /0.0.0.0:8080
at com.facebook.airlift.http.server.HttpServerInfo.(HttpServerInfo.java:48)
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-cluster
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-c
  template:
    metadata:
      labels:
        app: presto-c
    spec:
      containers:
        - name: presto-co
          image: x/openjdk-presto-k:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: presto-wo
          image: x/openjdk-prestoworker-k:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8181
Service:
apiVersion: v1
kind: Service
metadata:
  name: presto-cluster
  namespace: presto-clu2
spec:
  selector:
    app: presto-c
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: NodePort
When I bring up the namespace, Service and Deployment, only the coordinator becomes operative.
It seems related to the worker not being able to bind to port 8080 for discovery of the coordinator. I know that all containers inside a pod share the same network namespace (and therefore the same ports), and that could be the issue here, but I don't know the technologies well enough to verify it and, if needed, change the port in the worker.
Do you have any idea of the issue?
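For what it's worth, a minimal sketch of the direction hinted at above (the presto-worker name and the app: presto-w label are made up for illustration): since the two containers share the pod's network namespace, only one of them can bind 8080, so the worker can be moved into its own Deployment and pointed at the coordinator's Service for discovery (e.g. discovery.uri=http://presto-cluster.presto-clu2.svc.cluster.local:8080 in its config.properties, assuming the image reads the usual Presto configuration files):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-worker        # illustrative name
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-w          # illustrative label, distinct from the coordinator's app: presto-c
  template:
    metadata:
      labels:
        app: presto-w
    spec:
      containers:
        - name: presto-wo
          image: x/openjdk-prestoworker-k:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080   # free to use 8080 now that the worker runs in its own pod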
I am trying to produce to a Kafka broker that is running inside a container launched by Kubernetes. I am playing with KAFKA_ADVERTISED_LISTENERES and KAFKA_LISTERNERS.
I tried setting these two env variables, KAFKA_ADVERTISED_LISTENERES = PLAINTEXT://<host-ip>:9092 and KAFKA_LISTERNERS = PLAINTEXT://0.0.0.0:9092, and ran it using docker-compose, and I was able to produce from an application outside the host machine.
But when I set these two env variables in the Kubernetes YAML file, I get a "No broker list available" exception.
What am I missing here?
Update:
kafka-pod.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: casb-deployment
  name: kafkaservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      #hostNetwork: true # to access docker outside of the host container
      containers:
        - name: kafkaservice
          imagePullPolicy: IfNotPresent
          image: wurstmeister/kafka:1.1.0
          env: # for production
            - name: KAFKA_ADVERTISED_LISTENERES
              value: "PLAINTEXT://<host-ip>:9092"
            - name: KAFKA_LISTERNERS
              value: "PLAINTEXT://0.0.0.0:9092"
            - name: KAFKA_CREATE_TOPICS
              value: "Topic1:1:1,Topic2:1:1"
            - name: KAFKA_MESSAGE_TIMESTAMP_TYPE
              value: "LogAppendTime"
            - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
              value: "LogAppendTime"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: "zookeeper:2181"
          ports:
            - name: port9092
              containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
    - name: port9092
      port: 9092
      targetPort: 9092
      protocol: TCP
I'm assuming you have a Kubernetes Service whose selector routes incoming traffic to your Kafka broker, and that it exposes a nodePort (as opposed to only a ClusterIP).
https://kubernetes.io/docs/concepts/services-networking/service/
The Kafka pod should then be reachable through localhost:<nodePort>.
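A sketch of such a Service, based on the manifest above (the nodePort value 30092 is just an illustrative pick from the default 30000-32767 range; for external clients the advertised listener would then also have to point at <host-ip>:30092 rather than 9092):

apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  type: NodePort
  selector:
    app: kafkaservice
  ports:
    - name: port9092
      port: 9092
      targetPort: 9092
      nodePort: 30092    # illustrative; must be in the cluster's NodePort range
      protocol: TCP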
You can also put a load balancer in front of your Kubernetes cluster and expose the pods through it, i.e. allow external ingress.
The next step is then to use a DNS record so that requests produced by your docker-compose-based containers resolve to the load balancer and come back to your Kubernetes cluster through it.
I am new to Kubernetes and the NGINX Ingress tooling, and I am trying to host a MySQL service using a virtual host in the NGINX Ingress on AWS. I have created a file something like:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: mysql
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - name: http
              containerPort: 3306
              protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysql
  labels:
    app: mysql
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: mysql.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mysql
              servicePort: 3306
My load balancer (created by the NGINX Ingress) port configuration looks like:
80 (TCP) forwarding to 32078 (TCP)
Stickiness options not available for TCP protocols
443 (TCP) forwarding to 31480 (TCP)
Stickiness options not available for TCP protocols
mysql.example.com is pointing to my ELB.
I was expecting that from my local box I could connect to MySQL with something like:
mysql -h mysql.example.com -u root -P 80 -p
This is not working. If I use LoadBalancer instead of NodePort, it creates a new ELB for me, which works as expected.
I am not sure this is the right approach for what I want to achieve here. Please help me out if there is a way to achieve the same using the Ingress with NodePort.
Kubernetes Ingress, as a generic concept, does not solve the problem of exposing/routing TCP/UDP services. As stated in https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md, you should use custom ConfigMaps if you want that with ingress-nginx. And keep in mind that it will never use the hostname for routing, as that is a feature of HTTP, not TCP.
I succeeded in accessing MariaDB/MySQL hosted on Google Kubernetes Engine through ingress-nginx, using the hostname specified in the Ingress created for the database's ClusterIP Service.
As per the docs, simply create the ConfigMap and expose the port in the Service defined for the ingress controller.
This helped me figure out how to set the --tcp-services-configmap and --udp-services-configmap flags.
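As a rough sketch of what that doc describes (the ingress-nginx namespace and the tcp-services ConfigMap name vary between installs, so treat them as placeholders): the ConfigMap maps an external port to <namespace>/<service>:<port>, the controller is started with --tcp-services-configmap=ingress-nginx/tcp-services, and port 3306 also has to be opened on the ingress controller's own Service (and on the ELB in front of it).

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": "default/mysql:3306"   # <external port>: "<namespace>/<service name>:<service port>"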
I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create two Services on top of Jenkins.
One Service of type NodePort for port 8080 (it gets mapped to a random node port so I can access it outside the cluster; I can also access it inside the cluster via ClusterIP:8080). All fine.
My second Service is there so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of this Service.
I read about the three types of Services:
- ClusterIP: exposes the Service on a cluster-internal IP; choosing this value makes the Service only reachable from within the cluster.
- NodePort: not necessary here, since port 50000 does not need to be exposed outside the cluster.
- LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet to the ClusterIP:port of the jenkins-discovery Service and got a connection refused. I can telnet to the ClusterIP:port of the jenkins-ui Service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was the selector, which was the part that wasn't clear to me: the two Services were using different label selectors (app: jenkins vs. app: master), so jenkins-discovery didn't match the Jenkins master pod's label, which caused this issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
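With both selectors set to app: master, the agents should be able to reach the JNLP port through the Service DNS name jenkins-discovery.ci.svc.cluster.local:50000 (or simply jenkins-discovery:50000 from pods in the ci namespace), assuming the Jenkins master pod actually carries the app: master label.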