I'm trying to deploy a Vue.js app on k8s. The frontend is packaged into an image with Nginx serving the static build. This configuration works when I access the Service directly via its ClusterIP or NodePort, but it does not work through the Ingress. Please tell me what I'm doing wrong.
Frontend image
FROM node:latest as build-stage
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
FROM nginx as production-stage
RUN mkdir /client
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
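(For context: the build output is copied to /app, while the directory created by RUN mkdir /client is never used, so the nginx.conf baked into the image must serve its root from /app. The original nginx.conf is not shown; a minimal sketch of what it would need to look like, including the try_files fallback that Vue Router's history mode requires:)

# Hypothetical nginx.conf, assuming root /app to match the COPY destination above
events {}
http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        root   /app;
        index  index.html;
        location / {
            # fall back to index.html so client-side routes resolve
            try_files $uri $uri/ /index.html;
        }
    }
}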
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vue-app-deployment
  labels:
    app: vue-app
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: vue-app
  template:
    metadata:
      labels:
        pod: vue-app
    spec:
      containers:
        - name: vue-app
          image: frontend-image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
here is my service
apiVersion: v1
kind: Service
metadata:
  name: vue-app-service
spec:
  selector:
    pod: vue-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
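(Side note for debugging: since ClusterIP and NodePort access already work, the Service and its Endpoints are fine, which isolates the problem to the Ingress layer. A quick way to confirm that, using the names above and the Ingress that follows:)

kubectl get endpoints vue-app-service   # should list the pod IP on port 80
kubectl get ingress example-ingress     # the ADDRESS column should eventually be populated
kubectl get pods -n ingress-nginx       # assumes the controller runs in the ingress-nginx namespace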
and ingress
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my-shiny-site.com
      http:
        paths:
          - path: /
            backend:
              serviceName: vue-app-service
              servicePort: 80
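One thing worth checking: the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22, so on a recent cluster this manifest is rejected outright. A sketch of the same rule against the networking.k8s.io/v1 schema (the ingressClassName value assumes an nginx ingress controller is installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx   # assumption: the controller registered this class name
  rules:
    - host: my-shiny-site.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vue-app-service
                port:
                  number: 80

Also note that the host rule only matches requests carrying Host: my-shiny-site.com, so testing by bare node IP will fall through to the controller's default backend.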
Related
Not sure what I am missing; I'm trying to set up a simple Traefik environment with Kubernetes, proxying the errm/cheese:cheddar Docker container to cheddar.minikube.
Prerequisite:
have minikube set up
git clone # personal repo that is now deleted. see solution below
# setup.sh will delete current minikube environment then recreate it
./setup.sh
# add IP to minikube
echo `minikube ip` cheddar.minikube | sudo tee -a /etc/hosts
after running
minikube delete
minikube start
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
kubectl apply -f traefik-deployment.yaml -f traefik-whoami.yaml
with...
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.web.Address=:80
            - --entrypoints.websecure.Address=:443
            - --providers.kubernetescrd
          ports:
            - name: web
              containerPort: 8000
              # hostPort: 80
            - name: websecure
              containerPort: 4443
              # hostPort: 443
            - name: admin
              containerPort: 8080
              # hostPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
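Worth noting how the ports fit together here: because of hostNetwork: true, Traefik binds directly in the node's network namespace, and the actual listening ports come from the --entrypoints flags (:80 and :443), not from the containerPort values, which are purely informational. A quick sanity check under the minikube setup above:

kubectl get pods -l app=traefik -o wide   # confirm the Traefik pod is Running
curl -i http://$(minikube ip)/notls       # should hit the IngressRoute defined below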
traefik-whoami.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
I was able to get a simple container working with Traefik in Kubernetes at the address printed by:
echo `minikube ip`/notls
I do not understand how to configure ports correctly for a k8s deployment.
Assume there is a Next.js application which listens on port 3003 (the default is 3000). I build the Docker image:
FROM node:16.14.0
RUN apk add dumb-init
# ...
EXPOSE 3003
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD npx next start -p 3003
So in this Dockerfile there are two places defining the port value 3003. Is this needed?
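For what it's worth: EXPOSE is only metadata for humans and tooling; the -p 3003 flag in CMD is what actually makes Next.js listen on that port. One way to keep the value in a single place is an ENV, sketched here under the same base image (this is an assumption, not the author's actual Dockerfile):

FROM node:16.14.0
# define the port once; EXPOSE and shell-form CMD both substitute ENV values
ENV PORT=3003
# ... (install dumb-init etc. as in the original)
EXPOSE ${PORT}
CMD npx next start -p ${PORT}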
Then I define this k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  spec:
    containers:
      - name: example
        image: "hub.domain.com/example:1.0.0"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: tls-key
  rules:
    - host: domain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: example
                port:
                  number: 80
The deployment is not working correctly: calling domain.com shows me a 503 Service Temporarily Unavailable error.
If I do a port-forward on the pod, I can see the working app at localhost:3003, but I cannot create a port-forward on the service.
So obviously I'm doing something wrong with the ports. Can someone explain which value has to be set where, and why?
You are missing labels from the deployment and the selector from the service. Try this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: "hub.domain.com/example:1.0.0"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: tls-key
  rules:
    - host: domain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: example
                port:
                  number: 80
Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Service: https://kubernetes.io/docs/concepts/services-networking/service/
Labels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
You can name your label keys and values anything you like; you could even use a label such as whatever: something instead of app: example. That said, there is a set of recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
https://kubernetes.io/docs/reference/labels-annotations-taints/
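Once the selector matches the pod template's labels, the Service gets Endpoints, and both kubectl port-forward on the service and the Ingress path start working. A quick way to confirm the wiring, using the names from the manifests above:

kubectl get endpoints example            # should list a pod IP with port 3003
kubectl port-forward svc/example 8080:80
curl -i http://localhost:8080/           # exercises the service -> pod path locally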
I am working on an API that I will deploy with Kubernetes, and I want to test it locally.
I created the Docker image, successfully tested it locally, and pushed it to a public Docker registry. Now I would like to deploy it in a Kubernetes cluster. No errors are thrown during deployment; however, I am not able to make a request to the endpoint exposed by the Minikube tunnel.
Steps to reproduce:
Start Minikube container: minikube start --ports=127.0.0.1:30000:30000
Create deployment and service: kubectl apply -f fastapi.yaml
Start minikube tunnel: minikube service fastapi-server
Encountered the following error: 192.168.49.2 took too long to respond.
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"status": "OK"}
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
fastapi.yaml:
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
replicas: 1
selector:
matchLabels:
app: fastapi-server
template:
metadata:
labels:
app: fastapi-server
spec:
containers:
- name: fastapi-server
image: smdf/fastapi-test
ports:
- containerPort: 8000
name: http
protocol: TCP
---
# service
apiVersion: v1
kind: Service
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
protocol: TCP
nodePort: 30000
Your problem is that you did not set the service selector:
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  selector:              # <------------- Missing part
    app: fastapi-server  # <-------------
  type: NodePort         # <------------- Set the type to NodePort
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      nodePort: 30000
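With the selector in place, an Endpoints object is created for the service, and because minikube was started with --ports=127.0.0.1:30000:30000, the NodePort is published straight to localhost, so no tunnel is needed. Under those assumptions:

kubectl get endpoints fastapi-server   # should now show the pod IP on port 8000
curl http://127.0.0.1:30000/           # expected: {"status":"OK"}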
How to check if your service is defined properly? I looked to see if there were any endpoints, and there weren't, since the service was never "attached" to your deployment:
kubectl get endpoints -A
For more info you can read this section of my GitHub repo:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/05-Services
I have been trying to get Kubernetes to serve my web application in a browser through localhost. When I try to open localhost it times out, and minikube service --url does not work either. All of my deployment and service pods are running. I have also tried port-forwarding and changing the service type to NodePort. I have provided my YAML, Dockerfile, and service description below.
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 250Mi
            limits:
              memory: "2Gi"
              cpu: "500m"
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster

EXPOSE 8000

# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1

# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1

# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt

WORKDIR /app
COPY . .

# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Name:                     mywebsite
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=mywebsite
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.161.241
IPs:                      10.99.161.241
Port:                     http  8743/TCP
TargetPort:               5000/TCP
NodePort:                 http  32697/TCP
Endpoints:                172.17.0.3:5000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
It's because your container is running on port 8000, but your service is forwarding the traffic to port 5000. The Service should be:
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 8000   # <- changed from 5000
and the Deployment should be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 8000   # <- changed from 5000
          resources:
            requests:
              cpu: 100m
              memory: 250Mi
            limits:
              memory: "2Gi"
              cpu: "500m"
You need to change the targetPort in the Service and the containerPort in the Deployment. Alternatively, change EXPOSE 8000 to EXPOSE 5000 in the Dockerfile and run the application on port 5000:
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]
Don't forget to run docker build one more time after the above changes.
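One more caveat, since imagePullPolicy: Never is set: the rebuilt image has to exist inside minikube's container runtime, not just on the host. A sketch assuming the docker driver:

eval $(minikube docker-env)                    # point the local docker CLI at minikube's daemon
docker build -t mywebsite .
kubectl rollout restart deployment/mywebsite   # recreate pods so they pick up the new image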
I have an EKS cluster with an Application Load Balancer and a target group set up for each application environment. In my cluster I am building my application from a base Docker image that is stored in a private ECR repository. I have confirmed that my pods are able to pull from the private ECR repo thanks to a secret I set up. The problem is that the base Docker image never reaches a healthy state in the target group. I updated the containerPort in my deployment to match the port of the target group, but I am not sure if that is how it needs to be configured. Below is how I defined everything for this namespace, along with the Dockerfile for the image built on top of the base image. Any advice on how to get the base Docker image into a healthy state so I can build my application would be helpful.
dev.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: dev-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: dev-app
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: dev-app
    spec:
      containers:
        - name: dev-app
          image: xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/private/base-docker-image:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 30411
      imagePullSecrets:
        - name: dev
---
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: dev-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  name: dev-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: dev-service
              servicePort: 80
---
dockerfile
FROM private/base-docker-image:latest
COPY . /apps
WORKDIR /apps
RUN npm run build
ENV ML_HOST=$HOST ML_PORT=$PORT ML_USER=$USER ML_PASSWORD=$PASSWORD
CMD ["npm", "run", "dockerstart"]
(Screenshots of the target group's Registered Targets and Health Check Settings omitted.)
This is a community wiki answer posted for better visibility.
As confirmed in the comments, the solution is to set the Service's targetPort to the port the application actually listens on, which is 30411, as declared by the containerPort in the deployment's YAML configuration.
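In other words, the Service would look something like this (a sketch based on the manifests above; only targetPort changes):

apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  ports:
    - port: 80
      targetPort: 30411   # must match the containerPort the app listens on
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: dev-app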