I'm setting up a Presto cluster with 1 coordinator and 1 worker. I used the same custom images with plain Docker and it worked.
However, when I move to Kubernetes I get an error in the worker node when Presto initialises:
ERROR main com.facebook.presto.server.PrestoServer
Unable to create injector, see the following errors:
1) Error in custom provider, java.lang.NullPointerException
at com.facebook.airlift.discovery.client.DiscoveryBinder.bindServiceAnnouncement(DiscoveryBinder.java:79)
while locating com.facebook.airlift.discovery.client.ServiceAnnouncement annotated with #com.google.inject.internal.Element(setName=,uniqueId=146, type=MULTIBINDER, keyType=)
while locating java.util.Set
3) Error injecting constructor, java.io.UncheckedIOException: Failed to bind to /0.0.0.0:8080
at com.facebook.airlift.http.server.HttpServerInfo.<init>(HttpServerInfo.java:48)
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-cluster
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-c
  template:
    metadata:
      labels:
        app: presto-c
    spec:
      containers:
      - name: presto-co
        image: x/openjdk-presto-k:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: presto-wo
        image: x/openjdk-prestoworker-k:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8181
service
apiVersion: v1
kind: Service
metadata:
  name: presto-cluster
  namespace: presto-clu2
spec:
  selector:
    app: presto-c
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort
When I bring up the namespace, service and deployment, only the coordinator becomes operational.
It seems to be something related to the worker not being able to bind to port 8080 for discovery of the coordinator. I know that inside a pod all containers share the same network namespace, so they compete for the same ports, and that could be the issue here, but I don't know the technologies well enough to verify it and potentially change the port in the worker.
Do you have any idea of the issue?
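For what it's worth, a minimal sketch of one way out of the port clash: because both containers sit in the same pod they share a single network namespace, so only one of them can bind 8080. Moving the worker into its own Deployment gives each Presto server its own network namespace. The Deployment name, the app: presto-w label and the comment about config.properties are assumptions for illustration, not taken from the original setup.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-worker           # assumed name
  namespace: presto-clu2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-w             # assumed label, distinct from the coordinator's
  template:
    metadata:
      labels:
        app: presto-w
    spec:
      containers:
      - name: presto-wo
        image: x/openjdk-prestoworker-k:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080   # the worker can now bind 8080 inside its own pod
        # In the worker's config.properties, discovery.uri would point at the
        # coordinator through its Service, e.g.
        # http://presto-cluster.presto-clu2.svc.cluster.local:8080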
Related
Need some basic help with EKS. Not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment script
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer service to expose this externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods got created and I can see they are running.
When I run kubectl get service java-microservice-service, I can see the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
thanks in advance
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
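For reference, a sketch of what that change amounts to, assuming the Spring Boot app listens on its default port 8080: containerPort and targetPort must point at the port the application actually listens on, while the Service port stays at whatever should be exposed externally. The exact split below (80 outside, 8080 inside) is an assumption; the poster may simply have set everything to 8080.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 8080        # the port the app listens on
---
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80             # port exposed by the load balancer
    targetPort: 8080     # forwarded to the containerPort above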
I am running a two-node Kubernetes cluster. I am able to deploy my microservice with 3 replicas, together with its service. Now I am trying to use the nginx ingress controller to expose my service, but I am getting this error from the logs:
unexpected error obtaining pod information: unable to get POD information (missing POD_NAME or POD_NAMESPACE environment variable)
I have set up a development namespace in my cluster; that is where my microservice is deployed and also the nginx controller. I do not understand how nginx picks up my pods, or how I am supposed to pass the pod name or pod namespace to it.
here is my nginx controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: mycha-deploy
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
and here my deployment:
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
      - name: mycha-container
        image: us.gcr.io/##########/mycha-frontend_kubernetes_rrk8s
        ports:
        - containerPort: 80
thank you
Your nginx ingress controller deployment YAML looks incomplete; among other things, it is missing the environment variables below.
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
Follow the installation docs and use yamls from here
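Put together, a sketch of how that env block slots into the controller container from the question (container name, image and args copied from the question; the combined snippet is illustrative, not the full official manifest):

containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
  args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration  # $(POD_NAMESPACE) is expanded from the env var below
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name         # the pod's own name, via the downward API
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace    # the pod's namespace, via the downward API
  ports:
  - name: http
    containerPort: 80
  - name: https
    containerPort: 443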
To expose your service using an Nginx Ingress, you need to configure it first.
Follow the installation guide for your Kubernetes installation.
You also need a service to 'group' the containers of your application.
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector
...
For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
As you can see, the service will discover your containers based on the label selector configured in your deployment.
To check which pods the label selector matches: kubectl get pods -o wide -l app=mycha-app
Service yaml
Apply the follow yaml to create a service for your deployment:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
spec:
  selector:
    app: mycha-app    # <= This is the selector
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Check if the service is created with kubectl get svc.
Test the app using port-forwarding from your desktop at http://localhost:8080:
kubectl port-forward svc/mycha-service 8080:8080
nginx-ingress yaml
The last part is the nginx-ingress. Suppose your app has the URL mycha-service.com and serves only the root '/' path:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-mycha-service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mycha-service.com    # <= app url
    http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service    # <= the service the ingress will send requests to
          servicePort: 8080             # <= the service's port defined above
Check the ingress: kubectl get ingress
NAME                    HOSTS               ADDRESS    PORTS   AGE
ingress-mycha-service   mycha-service.com   XX.X.X.X   80      63s
Now you are able to reach your application using the URL mycha-service.com and the ADDRESS displayed by the command above.
I hope it helps =)
I have two jobs that will run only once. One is called Master and one is called Slave. As the names imply, the Master pod needs some info from the Slave and then queries some API online.
A simple scheme of how they communicate looks like this:
Slave --- port 6666 ---> Master ---- port 8888 ---> internet:www.example.com
To achieve this I created 5 YAML files:
A job-master.yaml for creating a Master pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels:
    app: master-job
    role: master-job
spec:
  template:
    metadata:
      name: master
    spec:
      containers:
      - name: master
        image: registry.gitlab.com/example
        command: ["python", "run.py", "-wait"]
        ports:
        - containerPort: 6666
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
A service (ClusterIP) that allows the Slave to send info to the Master node on port 6666:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
A service(NodePort) that will allow the master to fetch info online:
apiVersion: v1
kind: Service
metadata:
  name: master-np-service
spec:
  type: NodePort
  selector:
    app: master-job
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888
    nodePort: 31000
A job for the Slave pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: slave-job
  labels:
    app: slave-job
spec:
  template:
    metadata:
      name: slave
    spec:
      containers:
      - name: slave
        image: registry.gitlab.com/example2
        ports:
        - containerPort: 6666
        #command: ["python", "run.py", "master-service.default.svc.cluster.local"]
        #command: ["python", "run.py", "10.106.146.155"]
        command: ["python", "run.py", "master-service"]
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
And a service (ClusterIP) that allows the Slave pod to send the info to the Master pod:
apiVersion: v1
kind: Service
metadata:
  name: slave-service
spec:
  selector:
    app: slave-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
But no matter what I do (as can be seen in the commented lines of the job_slave.yaml file), they cannot communicate with each other, except when I put the IP of the Master pod in the command section of the Slave. The Master pod also cannot communicate with the outside world, even though I created a ConfigMap with upstreamNameservers: | ["8.8.8.8"].
Everything is running in a minikube environment.
But I cannot pinpoint what my problem is. Any help is appreciated.
Your Job spec has two parts: a description of the Job itself, and a description of the Pods it creates. (Using a Job here is a little odd and I'd probably pick a Deployment instead, but the same principle applies.) The Service object has a selector: that must match the labels: of the Pods, not of the Job.
In the YAML files you show the Jobs have correct labels but the generated Pods don't. You need to add (potentially duplicate) labels to the pod spec part:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels: {...}
spec:
  template:
    metadata:
      # name: will get ignored here
      labels:
        app: master-job
        role: master-job
You should be able to verify with kubectl describe service master-service. At the end of its output will be a line that says Endpoints:. If the Service selector and the Pod labels don't match this will say <none>; if they do match you will see the Pod IP addresses.
(You don't need a NodePort service unless you need to accept requests from outside the cluster; it could be the same as the service you use to accept requests from within the cluster. You don't need to include objects' types in their names. Nothing you've shown has any obvious relevance to communication out of the cluster.)
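The same labels fix applies to the slave Job, assuming slave-service is meant to select its pods; a sketch:

apiVersion: batch/v1
kind: Job
metadata:
  name: slave-job
  labels:
    app: slave-job
spec:
  template:
    metadata:
      labels:
        app: slave-job     # matches slave-service's selector (app: slave-job)
    spec:
      containers:
      - name: slave
        image: registry.gitlab.com/example2
        ports:
        - containerPort: 6666
        command: ["python", "run.py", "master-service"]
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never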
Try with a headless service:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
and use command: ["python", "run.py", "master-service"] in your job_slave.yaml.
Make sure your master job is listening on port 6666 inside its container.
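One way to have Kubernetes verify that is a TCP readiness probe on the master container; the probe below is an illustrative addition, not part of the original Job:

containers:
- name: master
  image: registry.gitlab.com/example
  command: ["python", "run.py", "-wait"]
  ports:
  - containerPort: 6666
  readinessProbe:
    tcpSocket:
      port: 6666            # pod is only marked Ready once something listens on 6666
    initialDelaySeconds: 5
    periodSeconds: 10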
I am trying to produce to a Kafka broker which is running inside a container launched by Kubernetes. I am playing with KAFKA_ADVERTISED_LISTENERES and KAFKA_LISTERNERS.
I tried setting these two env variables, KAFKA_ADVERTISED_LISTENERES = PLAINTEXT://<host-ip>:9092 and KAFKA_LISTERNERS = PLAINTEXT://0.0.0.0:9092, and ran it using docker-compose. I was able to produce from an application outside the host machine.
But when I set these two env variables in the Kubernetes YAML file, I get a No broker list available exception.
What am I missing here?
Update:
kafka-pod.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: casb-deployment
  name: kafkaservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      #hostNetwork: true # to access docker outside of host container
      containers:
      - name: kafkaservice
        imagePullPolicy: IfNotPresent
        image: wurstmeister/kafka:1.1.0
        env: # for production
        - name: KAFKA_ADVERTISED_LISTENERES
          value: "PLAINTEXT://<host-ip>:9092"
        - name: KAFKA_LISTERNERS
          value: "PLAINTEXT://0.0.0.0:9092"
        - name: KAFKA_CREATE_TOPICS
          value: "Topic1:1:1,Topic2:1:1"
        - name: KAFKA_MESSAGE_TIMESTAMP_TYPE
          value: "LogAppendTime"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
          value: "LogAppendTime"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeper:2181"
        ports:
        - name: port9092
          containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    port: 9092
    targetPort: 9092
    protocol: TCP
I'm assuming you have a Kubernetes Service whose selector routes traffic to your Kafka broker and which exposes a nodePort (as opposed to only a clusterIP).
https://kubernetes.io/docs/concepts/services-networking/service/
So the Kafka pod should be reachable through localhost:<nodePort> on the node itself, or <node-ip>:<nodePort> from outside.
You can also set up a load balancer in front of your Kubernetes cluster and expose the k8s pods through it, i.e. allow external ingress.
The next step is then to add a DNS record so the outbound requests produced by your docker-compose-based containers resolve to the load balancer and come back into your Kubernetes cluster.
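As a sketch of the NodePort option (the service name and the node port 31090 are assumptions, reusing the labels from the question), an additional Service in front of the broker could look like this:

apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice-external   # assumed name, so it doesn't clash with the existing ClusterIP service
spec:
  type: NodePort
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    protocol: TCP
    port: 9092
    targetPort: 9092
    nodePort: 31090             # assumed value in the default 30000-32767 NodePort range

External producers would then need the broker to advertise an address they can actually reach (node IP and node port) in its advertised listeners, otherwise they will still be handed back an unreachable broker address.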
I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create 2 services in front of Jenkins.
One service of type NodePort for port 8080 (it gets mapped to a random node port, so I can access it from outside the cluster; I can also access it inside the cluster by using ClusterIP:8080). All fine.
My second service is for my Jenkins slaves to connect to.
I chose ClusterIP (the default) as the type of this service:
I read about the 3 types of services:
ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
NodePort: not necessary, since port 50000 does not need to be exposed outside the cluster.
LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet to the ClusterIP:port of the jenkins-discovery service and I got connection refused. I can telnet to the ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was the selector, a part that wasn't that clear to me. I was using different selectors, which seemed to cause this issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves