I've created an ACR and an AKS cluster, and pushed a container image to the ACR.
I then applied the following YAML file to AKS:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readit-cart
spec:
  selector:
    matchLabels:
      app: readit-cart
  template:
    metadata:
      labels:
        app: readit-cart
    spec:
      containers:
      - name: readit-cart
        image: memicourseregistry.azurecr.io/cart:v2
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5005
---
apiVersion: v1
kind: Service
metadata:
  name: readit-cart
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5005
  selector:
    app: readit-cart
I can run the container locally on port 5005 and it runs just fine.
In the Azure portal, in the AKS resources view, I can see the service and the pod; both are running (green).
And yet, when I try to access the public IP of the service, I get a "This site can't be reached" error.
What am I missing?
OK, it looks like the problem was that the pod was listening on port 80, not 5005, even though the container ran locally on 5005. Strange...
I'll look into it further.
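For anyone hitting the same symptom: once the in-container port is known, the fix is just to point the Service's targetPort (and the Deployment's containerPort) at it. A minimal sketch, assuming the app really listens on port 80 inside the pod:

apiVersion: v1
kind: Service
metadata:
  name: readit-cart
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80   # must match the port the app actually listens on inside the pod
  selector:
    app: readit-cart

A quick way to confirm the listening port without involving the load balancer is kubectl port-forward deploy/readit-cart 8080:80 and then browsing http://localhost:8080.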
Related
I'm trying to run a VueJS app on my local machine with minikube and Kubernetes.
I applied the YAML file below and added the IP address of the ingress to /etc/hosts on my macOS (M1) machine.
Neither neu.com nor the IP address of the ingress controller loads.
What I've tried
Running the service with a NodePort instead (it loads fine that way)
Removing everything and redoing the whole setup from scratch
(the minikube ingress addon is enabled)
Here is the version info for all the tools I'm using.
kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.1
-------------------------
minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
And this is the YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleo
  labels:
    app: cleo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cleo
  template:
    metadata:
      labels:
        app: cleo
    spec:
      containers:
      - name: cleo
        image: image-name-of-my-vuejs-app
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cleo-service
spec:
  selector:
    app: cleo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cleo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: neu.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: cleo-service
            port:
              number: 80
The service and ingress are running:
Can somebody see what the problem is?
Direct access only works on Linux; with the Docker driver, the Docker network is not reachable from the host on macOS or Windows.
https://docs.docker.com/desktop/mac/networking/#known-limitations-use-cases-and-workarounds
Reference: https://github.com/kubernetes/minikube/issues/13951
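On macOS with the Docker driver, the workaround usually suggested (see the minikube issue above) is to run minikube tunnel and point the hostname at 127.0.0.1 instead of the minikube IP. A sketch, assuming the ingress above:

minikube tunnel        # keep this running in a separate terminal; it may prompt for sudo
# then in /etc/hosts, map the ingress host to the tunnel endpoint:
# 127.0.0.1   neu.com

With that in place, http://neu.com should go through the tunnel rather than to the unreachable Docker network.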
I am using k8s on Docker Desktop on Mac. I applied the YAML below and the deployment succeeded, but when I access "localhost:8888" I get "page not found" and can't see the nginx default homepage (images attached).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-np
spec:
  type: NodePort
  selector:
    app: my-nginx-np
  ports:
  - port: 8888
    targetPort: 80
Screenshots: https://i.stack.imgur.com/edbG9.png and https://i.stack.imgur.com/Ak6UZ.png
Your Service is not pointing to the nginx Deployment (you are using the wrong selector). Try the following Service instead:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-np
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
  - port: 8888
    targetPort: 80
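To verify the fix, one quick check (assuming the names above) is that the Service now has endpoints, and to note which node port was actually allocated:

kubectl get endpoints my-nginx-np   # should now list the nginx pod's IP
kubectl get service my-nginx-np     # shows the allocated NodePort, e.g. 8888:3xxxx/TCP

On Docker Desktop, if port 8888 itself still doesn't answer, the service should at least be reachable on localhost at that allocated node port.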
I have an EKS cluster with an application load balancer and a target group set up for each application environment. In the cluster I build my application from a base Docker image stored in a private ECR repository; I have confirmed that my pods can pull from that repo thanks to an image pull secret I set up. The problem is that the base Docker image never reaches a healthy state in the target group. I updated the containerPort in my deployment to match the port of the target group, but I am not sure that is how it needs to be configured. Below is how I defined everything for this namespace, along with the Dockerfile for the base image. Any advice on how to get the base Docker image into a healthy state so I can build my application would be helpful.
dev.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: dev-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: dev-app
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: dev-app
    spec:
      containers:
      - name: dev-app
        image: xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/private/base-docker-image:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 30411
      imagePullSecrets:
      - name: dev
---
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: dev-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  name: dev-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: dev-service
          servicePort: 80
---
dockerfile
FROM private/base-docker-image:latest
COPY . /apps
WORKDIR /apps
RUN npm run build
ENV ML_HOST=$HOST ML_PORT=$PORT ML_USER=$USER ML_PASSWORD=$PASSWORD
CMD ["npm", "run", "dockerstart"]
Screenshots: Registered Targets, Health Check Settings
This is a community wiki answer posted for better visibility.
As confirmed in the comments, the solution is to set the Service's targetPort to the port the application actually opens, which is 30411, as declared by the containerPort in the deployment's YAML configuration.
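Concretely, keeping everything else in dev.yaml unchanged, the Service would look roughly like this (a sketch based on the port mentioned above):

apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 30411   # the port the application opens, matching the deployment's containerPort
    protocol: TCP
  selector:
    app.kubernetes.io/name: dev-app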
I need some basic help with EKS; I'm not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment manifest:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer Service to expose it externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods were created and I can see they are running.
When I run kubectl get service java-microservice-service, I can see the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
thanks in advance
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
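That matches Spring Boot's default, which listens on 8080 unless server.port is overridden. A sketch of one working combination, assuming only the ports changed: containerPort: 8080 in the Deployment, and a Service along these lines (whether the external port stays 80 or also becomes 8080 only changes the URL you browse to):

apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80          # external port on the load balancer
    targetPort: 8080  # the port Spring Boot listens on inside the pod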
I am deploying a Java service from VSTS to Docker and then to Kubernetes. I can push and run the image from ACR successfully, but after deploying to Kubernetes I am not able to browse the service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xservice
  labels:
    app: xserviceapi
spec:
  template:
    metadata:
      labels:
        app: xserviceapi
        type: Back-end
    spec:
      containers:
      - name: xservice
        image: acr.azurecr.io/xservice:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcre
  replicas: 1
  selector:
    matchLabels:
      app: xserviceapi
---
apiVersion: v1
kind: Service
metadata:
  name: xservice
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: xserviceapi
As @OnurYartaşi mentioned, you should be able to reach your service using the 40.68.134.174 IP address.
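More generally, the external address of a LoadBalancer Service can be read straight from kubectl; the EXTERNAL-IP column may show <pending> for a few minutes while Azure provisions the load balancer:

kubectl get service xservice   # the EXTERNAL-IP column is the address to browse to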