Health Checks failing in target group for k8s application load balancer - docker

I have an EKS cluster with an application load balancer and a target group set up for each application environment. In my cluster I am building my application from a base Docker image that is stored in a private ECR repository. I have confirmed that my pods are able to pull from the private ECR repo thanks to a secret I set up that allows the private image to be pulled. The problem is that the base Docker image never reaches a healthy state in the target group. I updated the containerPort in my deployment to match the port of the target group, but I am not sure if that is how it needs to be configured. Below is how I defined everything for this namespace, along with my dockerfile for the base image. Any advice on how I can get the base Docker image into a healthy state so I can build my application would be helpful.
dev.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: dev-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: dev-app
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: dev-app
    spec:
      containers:
        - name: dev-app
          image: xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/private/base-docker-image:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 30411
      imagePullSecrets:
        - name: dev
---
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: dev-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  name: dev-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: dev-service
              servicePort: 80
---
dockerfile
FROM private/base-docker-image:latest
COPY . /apps
WORKDIR /apps
RUN npm run build
ENV ML_HOST=$HOST ML_PORT=$PORT ML_USER=$USER ML_PASSWORD=$PASSWORD
CMD ["npm", "run", "dockerstart"]
[screenshots: target group Registered Targets and Health Check Settings]

This is a community wiki answer posted for better visibility.
As confirmed in the comments, the solution is to set the Service's targetPort to the port the application actually listens on, which is 30411, as specified by containerPort in the deployment's YAML configuration.
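For reference, a minimal sketch of the corrected Service, assuming the process inside the base image really listens on 30411 (the port named by containerPort in the Deployment):

apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: dev-app
  ports:
    - port: 80          # port the Service exposes inside the cluster
      targetPort: 30411 # must match the port the container process listens on
      protocol: TCP

With target-type: instance, the ALB health check hits the NodePort, which forwards to targetPort on the pod, so the target only turns healthy if something is actually serving on 30411.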

Related

Problem deploying golang app to kubernetes

So I followed this tutorial that explains how to build containerized microservices in Golang, Dockerize them, and deploy them to Kubernetes.
https://www.youtube.com/watch?v=H6pF2Swqrko
I got to the point that I can access my app via the minikube IP (mine is 192.168.59.100).
I set up Kubernetes and currently have 3 running pods, but I cannot open my Golang app through Kubernetes with the URL that kubectl shows me: "192.168.59.100:31705...".
I have a lead: when I browse to "https://192.168.59.100:8443/", a 403 error comes up.
Here is my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: go-web-app
          image: go-app-ms:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
    - port: 80
      targetPort: 80
Your Service's selector tries to match pods with the label app.kubernetes.io/name: web, but the pods carry the label app: web. They do not match. The selector on a Service must match the labels on the pods; since you use a Deployment object, that means the same labels as in spec.template.metadata.labels.
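A minimal sketch of the corrected Service, assuming you keep the app: web label from the Deployment's pod template:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web          # must match spec.template.metadata.labels in the Deployment
  ports:
    - port: 80
      targetPort: 80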
@Szczad has correctly described the problem. I wanted to suggest a way of avoiding that problem in the future. Kustomize is a tool for building Kubernetes manifests. It is built into the kubectl command. One of its features is the ability to apply a set of common labels to your resources, including correctly filling in selectors in services and deployments.
If we simplify your Deployment to this (in deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: go-web-app
          image: go-app-ms:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
And your Service to this (in service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
And we place the following kustomization.yaml in the same directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: web
resources:
  - deployment.yaml
  - service.yaml
Then we can deploy this application by running:
kubectl apply -k .
And this will result in the following manifests:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - image: go-app-ms:latest
          imagePullPolicy: IfNotPresent
          name: go-web-app
          ports:
            - containerPort: 80
As you can see here, the app: web label has been applied to the deployment, to the deployment selector, to the pod template, and to the service selector.
Applying the labels through Kustomize like this means that you only need to change the label in one place. It makes it easier to avoid problems caused by label mismatches.
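If you want to inspect the manifests Kustomize will generate before applying anything, kubectl can render them to stdout:

kubectl kustomize .

This lets you confirm the labels and selectors line up before running kubectl apply -k .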

EKS help needed

Need some basic help with EKS. Not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment script:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
        - name: java-microservice-container
          image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
          ports:
            - containerPort: 80
I created a LoadBalancer service to expose this externally
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
The pods got created, and I can see they are running.
When I do kubectl get service java-microservice-service, I can see that the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
thanks in advance
OK, so I changed the port in my YAML files to 8080 (the port the Spring Boot app actually listens on) and it seems to be working fine.
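For illustration, a minimal sketch of the adjusted Service, assuming the Spring Boot app listens on its default port 8080 (the Deployment's containerPort would be set to 8080 as well):

apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
    - protocol: TCP
      port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the Spring Boot container actually listens on

Keeping port: 80 on the Service lets you stay on http://loadbalancer-address while still forwarding to 8080 in the container; setting port: 8080 as well also works, as long as you include :8080 in the URL.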

Can't access service in AKS

I've created an ACR and AKS, and pushed a container to the ACR.
I then applied the following yaml file to AKS:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readit-cart
spec:
  selector:
    matchLabels:
      app: readit-cart
  template:
    metadata:
      labels:
        app: readit-cart
    spec:
      containers:
        - name: readit-cart
          image: memicourseregistry.azurecr.io/cart:v2
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5005
---
apiVersion: v1
kind: Service
metadata:
  name: readit-cart
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 5005
  selector:
    app: readit-cart
I can run the container locally on port 5005 and it runs just fine.
In Azure portal, in the AKS resources view, I can see the service and the pod, they are both running (green).
And yet, when I try to access the public IP of the service, I get a "This site can't be reached" error.
What am I missing?
OK, it looks like the problem was that the pod runs on port 80, not 5005, even though the container runs locally on 5005. Strange...
I'll look further into it.
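If the container really is listening on 80 inside the cluster, a minimal sketch of the adjusted Service would simply point targetPort at 80 and leave everything else alone:

apiVersion: v1
kind: Service
metadata:
  name: readit-cart
spec:
  type: LoadBalancer
  ports:
    - port: 443        # port exposed on the public IP
      targetPort: 80   # port the container is actually listening on
  selector:
    app: readit-cart

A quick way to confirm which port the container answers on is to port-forward straight to the Deployment, for example kubectl port-forward deploy/readit-cart 8080:80, and then hit http://localhost:8080.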

Nginx ingress controller logs keeps telling me that i have wrong pod information

I am running two nodes in a Kubernetes cluster. I am able to deploy my microservice with 3 replicas, and its service. Now I am trying to get the NGINX ingress controller to expose my service, but I am getting this error in the logs:
unexpected error obtaining pod information: unable to get POD information (missing POD_NAME or POD_NAMESPACE environment variable)
I have set up a development namespace in my cluster; that is where my microservice and the NGINX controller are deployed. I do not understand how NGINX picks up my pods, or how I am supposed to pass the pod name or pod namespace.
Here is my nginx controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: mycha-deploy
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
And here is my deployment:
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: us.gcr.io/##########/mycha-frontend_kubernetes_rrk8s
          ports:
            - containerPort: 80
thank you
Your NGINX ingress controller deployment YAML looks incomplete; among other things, it is missing the environment variables below.
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
Follow the installation docs and use yamls from here
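For context, a sketch of how those variables would sit inside the controller container from the question (only the env block changes; the misnamed mycha-deploy entry is replaced):

      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443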
To expose your service using an NGINX Ingress, you need to configure the ingress controller first.
Follow the installation guide for your Kubernetes installation.
You also need a service to 'group' the containers of your application.
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector
...
For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
As you can see, the service will discover your containers based on the label selector configured in your deployment.
To check the container's label selector: kubectl get pods -owide -l app=mycha-app
Service yaml
Apply the following YAML to create a service for your deployment:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
spec:
  selector:
    app: mycha-app   # <= this is the selector
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Check if the service is created with kubectl get svc.
Test the app using port-forwarding from your desktop at http://localhost:8080:
kubectl port-forward svc/mycha-service 8080:8080
nginx-ingress yaml
The last part is the nginx-ingress. Supposing your app has the url mycha-service.com and only the root '/' path:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-mycha-service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: mycha-service.com   # <= app url
      http:
        paths:
          - path: /
            backend:
              serviceName: mycha-service   # <= the service the ingress sends requests to
              servicePort: 8080            # <= must match the Service's port (8080)
Check the ingress: kubectl get ingress
NAME                    HOSTS               ADDRESS    PORTS   AGE
ingress-mycha-service   mycha-service.com   XX.X.X.X   80      63s
Now you are able to reach your application using the URL mycha-service.com and the ADDRESS displayed by the command above.
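Until DNS for mycha-service.com actually points at that address, you can test the ingress directly by supplying the Host header yourself (substituting the ADDRESS shown above):

curl -H "Host: mycha-service.com" http://XX.X.X.X/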
I hope it helps =)

Pods cannot communicate with each other

I have two jobs that will run only once. One is called Master and one is called Slave. As the names imply, the Master pod needs some info from the Slave and then queries some API online.
A simple scheme of how they communicate can be drawn like this:
Slave --- port 6666 ---> Master ---- port 8888 ---> internet:www.example.com
To achieve this I created 5 YAML files:
A job-master.yaml for creating a Master pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels:
    app: master-job
    role: master-job
spec:
  template:
    metadata:
      name: master
    spec:
      containers:
        - name: master
          image: registry.gitlab.com/example
          command: ["python", "run.py", "-wait"]
          ports:
            - containerPort: 6666
      imagePullSecrets:
        - name: regcred
      restartPolicy: Never
A service (ClusterIP) that allows the Slave to send info to the Master node on port 6666:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  selector:
    app: master-job
    role: master-job
  ports:
    - protocol: TCP
      port: 6666
      targetPort: 6666
A service (NodePort) that will allow the master to fetch info online:
apiVersion: v1
kind: Service
metadata:
  name: master-np-service
spec:
  type: NodePort
  selector:
    app: master-job
  ports:
    - protocol: TCP
      port: 8888
      targetPort: 8888
      nodePort: 31000
A job for the Slave pod:
apiVersion: batch/v1
kind: Job
metadata:
  name: slave-job
  labels:
    app: slave-job
spec:
  template:
    metadata:
      name: slave
    spec:
      containers:
        - name: slave
          image: registry.gitlab.com/example2
          ports:
            - containerPort: 6666
          #command: ["python", "run.py", "master-service.default.svc.cluster.local"]
          #command: ["python", "run.py", "10.106.146.155"]
          command: ["python", "run.py", "master-service"]
      imagePullSecrets:
        - name: regcred
      restartPolicy: Never
And a service (ClusterIP) that allows the Slave pod to send the info to the Master pod:
apiVersion: v1
kind: Service
metadata:
  name: slave-service
spec:
  selector:
    app: slave-job
  ports:
    - protocol: TCP
      port: 6666
      targetPort: 6666
But no matter what I do (as can be seen in the commented lines of the slave job's YAML), they cannot communicate with each other, except when I put the IP of the Master pod in the command section of the Slave. Also, the Master pod cannot communicate with the outside world (even though I created a ConfigMap with upstreamNameservers: | ["8.8.8.8"]).
Everything is running in a minikube environment.
But I cannot pinpoint what my problem is. Any help is appreciated.
Your Job spec has two parts: a description of the Job itself, and a description of the Pods it creates. (Using a Job here is a little odd and I'd probably pick a Deployment instead, but the same applies here.) The Service object has a selector: that must match the labels: of the Pods.
In the YAML files you show, the Jobs have correct labels but the generated Pods don't. You need to add (potentially duplicate) labels to the pod template part:
apiVersion: batch/v1
kind: Job
metadata:
  name: master-job
  labels: {...}
spec:
  template:
    metadata:
      # name: will get ignored here
      labels:
        app: master-job
        role: master-job
You should be able to verify with kubectl describe service master-service. At the end of its output will be a line that says Endpoints:. If the Service selector and the Pod labels don't match this will say <none>; if they do match you will see the Pod IP addresses.
(You don't need a NodePort service unless you need to accept requests from outside the cluster; it could be the same as the service you use to accept requests from within the cluster. You don't need to include objects' types in their names. Nothing you've shown has any obvious relevance to communication out of the cluster.)
Try with a headless service:
apiVersion: v1
kind: Service
metadata:
  name: master-service
  labels:
    app: master-job
    role: master-job
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: master-job
    role: master-job
  ports:
    - protocol: TCP
      port: 6666
      targetPort: 6666
and use command: ["python", "run.py", "master-service"] in your job_slave.yaml
Make sure your master job is listening on port 6666 inside your container.
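One quick way to check that the service name resolves at all (and, once the pod labels match, returns the pod IPs) is a throwaway busybox pod; this assumes everything runs in the default namespace:

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup master-service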
