I have two Jobs that will run only once. One is called Master and one is called Slave. As the names imply, the Master pod needs some info from the Slave and then queries some API online.
A simple scheme of how they communicate looks like this:
Slave --- port 6666 ---> Master ---- port 8888 ---> internet:www.example.com
To achieve this I created five YAML files:
A job-master.yaml for creating the Master pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels:
    app: master-job
    role: master-job
spec:
  template:
    metadata:
      name: master
    spec:
      containers:
      - name: master
        image: registry.gitlab.com/example
        command: ["python", "run.py", "-wait"]
        ports:
        - containerPort: 6666
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
A service (ClusterIP) that allows the Slave to send info to the Master node on port 6666:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
A Service (NodePort) that will allow the Master to fetch info online:
apiVersion: v1
kind: Service
metadata:
  name: master-np-service
spec:
  type: NodePort
  selector:
    app: master-job
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888
    nodePort: 31000
A job for the Slave pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: slave-job
  labels:
    app: slave-job
spec:
  template:
    metadata:
      name: slave
    spec:
      containers:
      - name: slave
        image: registry.gitlab.com/example2
        ports:
        - containerPort: 6666
        #command: ["python", "run.py", "master-service.default.svc.cluster.local"]
        #command: ["python", "run.py", "10.106.146.155"]
        command: ["python", "run.py", "master-service"]
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
And a service (ClusterIP) that allows the Slave pod to send the info to the Master pod:
apiVersion: v1
kind: Service
metadata:
  name: slave-service
spec:
  selector:
    app: slave-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
But no matter what I do (as can be seen in the commented lines of the job_slave.yaml file), they cannot communicate with each other, except when I put the IP of the Master pod in the command section of the Slave. Also, the Master cannot communicate with the outside world, even though I created a ConfigMap with upstreamNameservers: | ["8.8.8.8"].
Everything is running in a minikube environment.
But I cannot pinpoint what my problem is. Any help is appreciated.
Your Job spec has two parts: a description of the Job itself, and a description of the Pods it creates. (Using a Job here is a little odd and I'd probably pick a Deployment instead, but the same logic applies.) The Service object has a selector: that needs to match the labels: of the Pods, not of the Job.
In the YAML files you show, the Jobs have the correct labels but the generated Pods don't. You need to add (potentially duplicate) labels to the pod-template part of the spec:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels: {...}
spec:
  template:
    metadata:
      # name: will get ignored here
      labels:
        app: master-job
        role: master-job
You should be able to verify with kubectl describe service master-service. At the end of its output will be a line that says Endpoints:. If the Service selector and the Pod labels don't match this will say <none>; if they do match you will see the Pod IP addresses.
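For illustration, the check looks roughly like this (the output shown here is an example, not taken from the poster's cluster):
kubectl describe service master-service | grep Endpoints
# Endpoints: <none>              <- selector does not match any Pod labels
# Endpoints: 172.17.0.5:6666     <- selector matches; the Service can reach the Pod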
(You don't need a NodePort service unless you need to accept requests from outside the cluster; it could be the same as the service you use to accept requests from within the cluster. You don't need to include objects' types in their names. Nothing you've shown has any obvious relevance to communication out of the cluster.)
Try with a headless service:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: master-job
    role: master-job
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
and use command: ["python", "run.py", "master-service"] in your job_slave.yaml.
Make sure your master job is listening on port 6666 inside its container.
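If you want to confirm that the Service name resolves from inside the cluster, one quick check (the busybox image and pod name here are arbitrary choices) is:
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup master-service
With a headless service this returns the Pod IPs directly instead of a single cluster IP.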
So I followed this tutorial that explains how to build containerized microservices in Golang, Dockerize them, and deploy to Kubernetes.
https://www.youtube.com/watch?v=H6pF2Swqrko
I got to the point where I can access my app via the Minikube IP (mine is 192.168.59.100).
I set up Kubernetes and currently have 3 working pods, but I cannot open my Golang app through Kubernetes with the URL that kubectl shows me: 192.168.59.100:31705...
I have a lead: when I browse to https://192.168.59.100:8443/, a 403 error comes up.
Here is my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
  - port: 80
    targetPort: 80
Your Service's selector tries to match Pods with the label app.kubernetes.io/name: web, but your Pods carry the label app: web. They do not match. The selector on the Service must match the labels on the Pods; since you use a Deployment object, that means the same labels as in spec.template.metadata.labels.
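A minimal fix is to change the selector in service.yml so it uses the same label key and value as the pod template, for example:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web          # must match spec.template.metadata.labels of the Deployment
  ports:
  - port: 80
    targetPort: 80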
@Szczad has correctly described the problem. I wanted to suggest a way of avoiding that problem in the future. Kustomize is a tool for building Kubernetes manifests, and it is built into the kubectl command. One of its features is the ability to apply a set of common labels to your resources, including correctly filling in the selectors in Services and Deployments.
If we simplify your Deployment to this (in deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
And your Service to this (in service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
And we place the following kustomization.yaml in the same directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: web
resources:
- deployment.yaml
- service.yaml
Then we can deploy this application by running:
kubectl apply -k .
And this will result in the following manifests:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        name: go-web-app
        ports:
        - containerPort: 80
As you can see here, the app: web label has been applied to the deployment, to the deployment selector, to the pod template, and to the service selector.
Applying the labels through Kustomize like this means that you only need to change the label in one place. It makes it easier to avoid problems caused by label mismatches.
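As a side note, if you want to inspect the generated manifests without applying them, Kustomize can render them to stdout:
kubectl kustomize .
This is a convenient way to double-check the labels and selectors before deploying.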
Need some basic help with EKS. Not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment manifest:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer Service to expose this externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods got created and I can see they are running.
When I do kubectl get service java-microservice-service, I can see the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
Thanks in advance.
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
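For completeness: Spring Boot listens on port 8080 by default, so another option is to keep the external port at 80 and only point targetPort (and the Deployment's containerPort) at 8080. A sketch of the adjusted Service, assuming that default port:
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80          # port exposed by the load balancer
    targetPort: 8080  # port the Spring Boot container actually listens on
Either way, kubectl get endpoints java-microservice-service is a quick check that the Service is really targeting the pods on the right port.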
I used the following YAML files to deploy Couchbase in Kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: couchbase-master-service
          servicePort: 8091
The pods started running and nothing seemed to have an issue at first glance. But when I tried to hit the host URL it gave me a bad gateway, and when I looked into the logs of the master's pod it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I have found that the master image is using this entrypoint script.
I ran this container image, and it looks like the curl is failing because a 15-second sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is set this sleep to a higher value, but sleep is usually not the best option (actually, this whole image is full of bad practices).
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
# Block until couchbase-server is accepting connections on port 8091
while ! nc -z localhost 8091; do
  sleep 1
done
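Note that this assumes nc (netcat) is available inside the image; if it is not, the same wait can be done by polling port 8091 with curl in a similar loop until it responds.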
I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*..100:32002) as an environment variable in my Kubernetes deployment script of lucky-word-client. How could I do that?
This is deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
      - name: lucky-server
        image: lucky-server-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    targetPort: 8888
    port: 80
    nodePort: 32002
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
  - protocol: TCP
    targetPort: 8080
    port: 80
  type: NodePort
Kubernetes automatically injects services as environment variables. https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
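For a Service named lucky-server, those injected variables follow a fixed naming scheme and look roughly like this (the values below are illustrative, not from your cluster):
LUCKY_SERVER_SERVICE_HOST=10.96.0.123
LUCKY_SERVER_SERVICE_PORT=80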
But you should not use this. It won't work unless all the services are in place when you create the pod. It is inspired by Docker, which has itself moved on to DNS-based service discovery, so environment-based service discovery is a thing of the past.
Please rely on DNS service discovery instead. Minikube ships with kube-dns, so you can just use the lucky-server hostname (or one of the lucky-server[.default[.svc[.cluster[.local]]]] names). Read the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
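If the client needs the address as an environment variable anyway, you can simply pass the Service's DNS name yourself in the client Deployment's pod spec. A minimal sketch (LUCKY_SERVER_URL is a hypothetical variable name; your client code has to actually read it):
    spec:
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: LUCKY_SERVER_URL
          value: "http://lucky-server:80"   # DNS name and port of the lucky-server Service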
I am using Kubernetes to run a Docker service. It is a defective service that requires a restart every day. For multiple reasons we can't fix the problem programmatically, and simply restarting the container every day will do.
When I migrated to Kubernetes I noticed I can't do docker restart [mydocker], but since the container runs in a Deployment with the Recreate strategy, I just need to delete the pod to have Kubernetes create a new one.
Can I automate this task of deleting the Pod (or some alternative way of restarting it) using a cron task in Kubernetes?
Thanks for any directions/examples.
Edit: My current deployment yml:
apiVersion: v1
kind: Service
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  ports:
  - port: 9080
    targetPort: 9080
    nodePort: 30181
  selector:
    app: et-rest
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: et-rest
        tier: frontend
    spec:
      containers:
      - image: et-rest-image:1.0.21
        name: et-rest
        ports:
        - containerPort: 9080
          name: et-rest
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
You can use a scheduled job:
A scheduled job has built-in cron behavior, making it possible to re-run your app on a schedule. Combined with the time-out behavior (activeDeadlineSeconds), this gives you the required behavior of restarting your app every X hours.
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"            # top of every hour; quote the cron expression
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400 # 24 hours (3600*24 written out; YAML does not evaluate arithmetic)
      template:
        spec:
          restartPolicy: Never     # required for Job pod templates
          containers:
          - name: yourapp
            image: yourimage
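On current clusters the ScheduledJob type has since been renamed CronJob and graduated to batch/v1, so a modern sketch of the same idea (yourapp and yourimage are the same placeholders as above) looks like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"            # start a new run every hour
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400 # terminate the job's pod after 24 hours
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: yourapp
            image: yourimage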