EKS help needed - docker

Need some basic help with EKS. Not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment manifest:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer Service to expose this externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods got created and I can see they are running.
When I run kubectl get service java-microservice-service, I do see the load balancer is up.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
Thanks in advance.
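In case it helps, a generic debugging sketch (not from the original thread; the label and resource names are taken from the manifests above):
kubectl get pods -l app=java-microservice -o wide          # are the pods Running and Ready?
kubectl logs deploy/microservice-deployment                # which port does Spring Boot say it is listening on?
kubectl get endpoints java-microservice-service            # does the Service actually select the pods?
kubectl port-forward svc/java-microservice-service 8080:80 # test the Service path without the load balancer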

OK, so I changed the port in my YAML files to 8080 and it now works fine.
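For reference, a minimal sketch of what that change looks like, assuming the Spring Boot app listens on its default port 8080 (the Service port facing the browser can stay at 80):
deployment.yaml (container section):
        ports:
        - containerPort: 8080
loadbalancer.yaml (ports section):
  ports:
  - protocol: TCP
    port: 80          # port exposed by the load balancer
    targetPort: 8080  # port the container actually listens on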

Related

Minikube Ingress Controller (with custom domain) Not Loading

I'm trying to run a Vue.js app on my local machine with minikube and Kubernetes.
I applied the YAML file and added the IP address of the ingress to /etc/hosts on my macOS (M1) machine.
neu.com does not load, nor does the IP address of the ingress controller.
What I've tried:
Running the service with a NodePort (it loads up fine)
Removing everything and redoing the whole thing from scratch
(The minikube ingress addon is switched on.)
Here is the version info for all the tools I'm using.
kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.1
-------------------------
minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
And this is the YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleo
  labels:
    app: cleo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cleo
  template:
    metadata:
      labels:
        app: cleo
    spec:
      containers:
      - name: cleo
        image: image-name-of-my-vuejs-app
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cleo-service
spec:
  selector:
    app: cleo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cleo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: neu.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: cleo-service
            port:
              number: 80
The service and ingress are running.
Can somebody see what the problem is?
Direct access only works on Linux; the Docker network is not accessible on macOS or Windows.
https://docs.docker.com/desktop/mac/networking/#known-limitations-use-cases-and-workarounds
Reference: https://github.com/kubernetes/minikube/issues/13951
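A workaround often suggested for macOS (an assumption on my part, not part of the original answer): expose the ingress on localhost with a tunnel and point the custom domain at 127.0.0.1 instead of the ingress IP.
minikube tunnel        # keep it running; it may ask for sudo
# /etc/hosts
127.0.0.1 neu.com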

Problem deploying golang app to kubernetes

So I followed this tutorial that explains how to build containerized microservices in Go, dockerize them, and deploy them to Kubernetes.
https://www.youtube.com/watch?v=H6pF2Swqrko
I got to the point where I can access my app via the minikube IP (mine is 192.168.59.100).
I set up Kubernetes and currently have 3 working pods, but I cannot open my Go app through Kubernetes at the URL that kubectl shows me: "192.168.59.100:31705...".
I have a lead: when I open "https://192.168.59.100:8443/", a 403 error comes up.
Here is my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
  - port: 80
    targetPort: 80
Your Service's selector tries to match pods with the label app.kubernetes.io/name: web, but the pods have the label app: web. They do not match. The selector on the Service must match the labels on the pods; since you use a Deployment, that means the same labels as in spec.template.metadata.labels.
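Concretely, the fix described above amounts to changing the Service selector to match the pod labels:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web          # must match spec.template.metadata.labels of the Deployment
  ports:
  - port: 80
    targetPort: 80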
@Szczad has correctly described the problem. I wanted to suggest a way of avoiding that problem in the future. Kustomize is a tool for building Kubernetes manifests, and it is built into the kubectl command. One of its features is the ability to apply a set of common labels to your resources, including correctly filling in selectors in Services and Deployments.
If we simplify your Deployment to this (in deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: go-web-app
        image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
And your Service to this (in service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
And we place the following kustomization.yaml in the same directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: web
resources:
- deployment.yaml
- service.yaml
Then we can deploy this application by running:
kubectl apply -k .
And this will result in the following manifests:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: go-app-ms:latest
        imagePullPolicy: IfNotPresent
        name: go-web-app
        ports:
        - containerPort: 80
As you can see here, the app: web label has been applied to the deployment, to the deployment selector, to the pod template, and to the service selector.
Applying the labels through Kustomize like this means that you only need to change the label in one place. It makes it easier to avoid problems caused by label mismatches.

Can't access service in AKS

I've created an ACR and an AKS cluster, and pushed a container to the ACR.
I then applied the following YAML file to AKS:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readit-cart
spec:
  selector:
    matchLabels:
      app: readit-cart
  template:
    metadata:
      labels:
        app: readit-cart
    spec:
      containers:
      - name: readit-cart
        image: memicourseregistry.azurecr.io/cart:v2
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5005
---
apiVersion: v1
kind: Service
metadata:
  name: readit-cart
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5005
  selector:
    app: readit-cart
I can run the container locally on port 5005 and it runs just fine.
In the Azure portal, in the AKS resources view, I can see the service and the pod; both are running (green).
And yet, when I try to access the public IP of the service, I get a "This site can't be reached" error.
What am I missing?
OK, it looks like the problem was that the pod ran on port 80, not 5005, even though the container ran locally on 5005. Strange...
I'll look further into it.
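If the container inside the cluster really is listening on port 80, the matching change would be in the Service (a sketch based on that finding, not a confirmed fix):
  ports:
  - port: 443        # port exposed on the public IP
    targetPort: 80   # port the container listens on inside the pod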

Kubernetes container ports setup similar to docker-compose?

I'm having trouble setting up my k8s pods exactly how I want. My trouble is that I have multiple containers which listen on the same ports (80, 443). On a remote machine, I normally use docker-compose with 'ports: - 12345:80' to set this up. With k8s, it appears from all of the examples I have found that the only option for a container is to expose a port, not to proxy it. I know I can use reverse proxies to forward to multiple ports, but that would require the images to use different ports rather than using the same port and having the container forward the requests. Is there a way to do this in k8s?
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: xxx.xxx.xxx.xxx
  selector:
    app: app
    tier: backend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: app
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        tier: backend
        track: stable
    spec:
      containers:
      - name: app
        image: image:example
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: xxxxxxx
Ideally, the containers on a node would listen on different ports while the applications running inside those containers continue to listen on 80/443, and my Services would route to the correct container as necessary.
My load balancer is working correctly, as is my first container. Adding a second container succeeds, but the second container can't be reached. The second container uses a similar script with different names and a different image for deployment.
The answer here is to add a Service for the pod where the ports are declared. Using Kompose to convert a docker-compose file, this is the result:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: app
status:
  loadBalancer: {}
as well as
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: pathToKompose.exe convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      containers:
      - image: image:example
        imagePullPolicy: ""
        name: app
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
Some of the fluff from Kompose could be removed, but the relevant answer to this question is declaring the port and targetPort for the pod in a Service, and exposing the targetPort as a containerPort in the Deployment for the container.
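Stripped down to the essentials, the docker-compose mapping 12345:80 from the question corresponds to roughly this (a minimal sketch; 12345 is just the illustrative external port):
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  ports:
  - port: 12345       # external port, the left-hand side of "12345:80"
    targetPort: 80    # container port, the right-hand side
  selector:
    io.kompose.service: app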
Thanks to David Maze and GintsGints for the help!

Kubernetes (Minikube): environment variable

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*..100:32002) as an environment variable in my Kubernetes deployment script of lucky-word-client. How could I do that?
This is the deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
      - name: lucky-server
        image: lucky-server-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    targetPort: 8888
    port: 80
    nodePort: 32002
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
  - protocol: TCP
    targetPort: 8080
    port: 80
  type: NodePort
Kubernetes automatically injects services as environment variables. https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
But you should not use this. It won't work unless all the services are in place when you create the pod. It was inspired by Docker, which has also moved on to DNS-based service discovery by now. So environment-based service discovery is a thing of the past.
Please rely on DNS service discovery instead. Minikube ships with kube-dns, so you can just use the lucky-server hostname (or one of the lucky-server[.default[.svc[.cluster[.local]]]] names). Read the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
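For example, a minimal sketch of the lucky-client container spec using an environment variable that points at the in-cluster DNS name instead of the NodePort (the variable name LUCKY_WORD_SERVER_URL is illustrative; the lucky-server Service listens on port 80, so no port suffix is needed):
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: LUCKY_WORD_SERVER_URL   # illustrative name; read it from the Spring config
          value: http://lucky-server    # resolved by kube-dns inside the cluster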
