I am working on an API that I will deploy with Kubernetes and I want to test it locally.
I created the Docker image, successfully tested it locally, and pushed it to a public Docker registry. Now I would like to deploy it in a Kubernetes cluster. No errors are thrown, but I am not able to make a request to the endpoint exposed by the Minikube tunnel.
Steps to reproduce:
Start Minikube container: minikube start --ports=127.0.0.1:30000:30000
Create deployment and service: kubectl apply -f fastapi.yaml
Start minikube tunnel: minikube service fastapi-server
Encountered the following error: 192.168.49.2 took too long to respond.
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"status": "OK"}
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
fastapi.yaml:
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi-server
  template:
    metadata:
      labels:
        app: fastapi-server
    spec:
      containers:
        - name: fastapi-server
          image: smdf/fastapi-test
          ports:
            - containerPort: 8000
              name: http
              protocol: TCP
---
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      nodePort: 30000
Your problem is that you did not set the service selector:
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  selector:              # <------------- Missing part
    app: fastapi-server  # <-------------
  type: NodePort         # <------------- Set the type to NodePort
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      nodePort: 30000
How do you check whether your service is defined properly?
I checked whether there were any endpoints, and there weren't, since you did not "attach" the service to your deployment:
kubectl get endpoints -A
For more info you can read this section of my GitHub repo:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/05-Services
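The mechanics behind the empty endpoints list can be sketched in plain Python: a Service selects pods whose labels contain every key/value pair of its selector, and a Service with no selector at all never gets endpoints populated automatically. (The function and data below are illustrative, not the actual controller code; modeling a missing selector as "matches nothing" is a simplification.)

```python
def selects(selector, pod_labels):
    """A Service selects a pod when every selector key/value pair
    appears in the pod's labels. An empty/missing selector here
    models the broken manifest: no endpoints get created."""
    if not selector:
        return False
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Labels from the Deployment's pod template above.
pod_labels = {"app": "fastapi-server"}

# Original Service: no selector -> no endpoints behind the Service.
print(selects({}, pod_labels))                         # False
# Fixed Service: selector matches the pod template labels.
print(selects({"app": "fastapi-server"}, pod_labels))  # True
```

This is exactly what `kubectl get endpoints -A` surfaces: a matching selector produces pod IPs behind the Service; a missing one leaves it empty.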
Related
I have deployed an etcd server (3.5.0) as a container on Kubernetes and am able to access the /version and /metrics endpoints via the FQDN in the HTTPProxy from my local machine, as below:
https://etcd.apps.domain.net/version
https://etcd.apps.domain.net/metrics
I am on the Windows platform. I am using etcdctl (3.5.0), which I downloaded from https://github.com/etcd-io/etcd/releases/tag/v3.5.0 , to connect to the server as below:
etcdctl.exe --endpoints=https://etcd.apps.domain.net:443 endpoint health
But the client is not able to connect to the server and gives the below error:
{"level":"warn","ts":1650617630.997635,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1650617632.298635,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":1,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1650617633.598635,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":2,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1650617634.607135,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":3,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
https://etcd.apps.domain.net:443 is unhealthy: failed to commit proposal: context deadline exceeded
Error: unhealthy cluster
Now, I know the cluster is not unhealthy, because I can access the version endpoint from my local machine (https://etcd.apps.domain.net/version). The output is:
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
My kube deployment file is as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd
  labels:
    app: etcd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: etcd
          image: <image path>
          imagePullPolicy: Always
          resources:
            limits:
              ephemeral-storage: 1000Mi
            requests:
              ephemeral-storage: 1000Mi
          ports:
            - containerPort: 2379
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  labels:
    app: etcd
spec:
  ports:
    - name: https
      port: 2379
      targetPort: 2379
      protocol: TCP
  selector:
    app: etcd
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: etcd
spec:
  virtualhost:
    fqdn: etcd.apps.domain.net
    tls:
      secretName: ingress-contour/ingress-contour-default-ssl-cert
  routes:
    - conditions:
        - prefix: /
      services:
        - name: etcd
          port: 2379
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: etcd.allow-ingress
spec:
  podSelector:
    matchLabels:
      app: etcd
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              namespace: ingress-contour
      ports:
        - protocol: TCP
          port: 2379
My Docker image for etcd:
FROM artifactory/lrh:8.4-202109
RUN mkdir -p /app
RUN chown -R 999:999 /app
COPY tar /usr/bin/
COPY etcd-v3.5.0-linux-amd64.tar.gz /app/
RUN yum -y install gzip
RUN tar -xf /app/etcd-v3.5.0-linux-amd64.tar.gz -C /app --strip 1
ENV ETCD_DATA_DIR=/app
EXPOSE 2379
ENTRYPOINT ["/app/etcd", "-advertise-client-urls", "https://etcd.apps.domain.net:2379", "-listen-client-urls", "http://0.0.0.0:2379"]
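One detail worth noting in the ENTRYPOINT above is that the advertised client URL uses https while the listen URL uses http. A tiny sketch that surfaces that kind of scheme mismatch (illustrative only; whether the mismatch matters depends on where TLS is terminated):

```python
from urllib.parse import urlparse

def scheme_mismatch(advertise_url, listen_url):
    """Return (advertise_scheme, listen_scheme) if the URL schemes
    differ, else None. Mirrors the -advertise-client-urls vs
    -listen-client-urls flags in the ENTRYPOINT above."""
    a = urlparse(advertise_url).scheme
    b = urlparse(listen_url).scheme
    return (a, b) if a != b else None

print(scheme_mismatch("https://etcd.apps.domain.net:2379",
                      "http://0.0.0.0:2379"))
# ('https', 'http')
```

Clients that trust the advertised https URL will attempt TLS against a backend that is listening in plaintext unless a proxy terminates TLS in between.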
I have a proxy running on localhost:8000 which gives me access to my service.
Previously, I was using docker-compose.yml, and by including http_proxy=http://host.docker.internal:8000 I was able to reach my service from within the container.
I have switched to Kubernetes using Minikube. I have started minikube with:
minikube start \
--docker-env HTTP_PROXY=http://host.minikube.internal:8000 \
--docker-env NO_PROXY=stats,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.39.0/24
The service and deployment yaml:
apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: service-space
spec:
  type: NodePort
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    run: service-name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  labels:
    app: service-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-name
  template:
    metadata:
      labels:
        app: service-name
    spec:
      containers:
        - name: service-name
          image: container-image:latest
          ports:
            - containerPort: 5432
From within the service-name container, when I try to reach the service, I get the following error:
curl: (6) Could not resolve host: ...
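Requests from inside a pod only go through the proxy when the target host does not match the NO_PROXY list. A rough sketch of that matching rule, mirroring the list passed to minikube start above (simplified to literal entries plus CIDR blocks; real proxy clients also do domain-suffix matching):

```python
import ipaddress

def bypasses_proxy(host, no_proxy):
    """Return True if `host` matches an entry in the NO_PROXY list.
    Entries are literal hostnames/IPs or CIDR blocks; domain-suffix
    matching done by real clients is omitted for brevity."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if host == entry:
            return True
        try:
            if ipaddress.ip_address(host) in ipaddress.ip_network(entry, strict=False):
                return True
        except ValueError:
            continue  # host or entry is not an IP/CIDR; skip
    return False

no_proxy = "stats,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.39.0/24"
print(bypasses_proxy("10.96.0.1", no_proxy))    # True  (inside 10.96.0.0/12)
print(bypasses_proxy("stats", no_proxy))        # True  (literal match)
print(bypasses_proxy("example.com", no_proxy))  # False (goes via proxy)
```

A "could not resolve host" error, however, happens before any proxy decision: the hostname lookup itself failed inside the pod, so the NO_PROXY/HTTP_PROXY settings never come into play for that request.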
I did this setup to test Kubernetes with Minikube ('Set up Ingress on Minikube') and everything worked fine.
Then I tried to do the same with my own app and ran into a problem after configuring all the steps.
The steps that I did to set up my app and Kubernetes are:
Create an app that works on port 5000
Containerize the app in a Docker image and upload it to the Minikube image registry
Create a deployment for Kubernetes with my container
Run kubectl port-forward pod/app 5000; everything works fine
Create a service of type NodePort to expose the deployment
Run kubectl port-forward service/app-service 5000; everything works fine
Create an ingress to expose the service
Run curl app.info; it returns 502 Bad Gateway
Try kubectl port-forward service/app-service 5000 again; it still works fine
Check minikube service app-service --url and try the resulting URL; it returns Connection refused. The equivalent URL in the demo setup I did previously works fine, so it looks like something is wrong in this step, even though the port-forwarding works correctly.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: echo-app
  name: app
  labels:
    app: echo
    tier: services
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: services
  template:
    metadata:
      labels:
        tier: services
    spec:
      containers:
        - name: echo-api
          image: echo/api:v1.0.0b39c8f9a
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: echo-app
spec:
  type: NodePort
  selector:
    tier: services
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: echo-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: echo.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 5000
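One sanity check worth automating for a setup like this is that every ingress backend references a Service that actually exists in the namespace; a small sketch, using the names as they appear in the manifests above:

```python
def missing_backends(service_names, ingress_backends):
    """Return ingress backend service names that don't correspond
    to any Service in the namespace."""
    existing = set(service_names)
    return [name for name in ingress_backends if name not in existing]

# Names taken from the manifests above.
services = ["app-service"]   # the Service actually defined
backends = ["api-service"]   # the backend api-ingress points at

print(missing_backends(services, backends))  # ['api-service']
```

A backend name with no matching Service is the kind of mismatch that typically shows up as a 502 from the ingress controller while direct port-forwarding to the Service still works.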
I have been trying to get Kubernetes to launch my web application in a browser through localhost. When I try to open localhost it times out, and I have tried using minikube service --url, which also does not work. All of my deployment and service pods are running. I have also tried port-forwarding and changing the type to NodePort. I have provided my YAML, Dockerfile, and service description below.
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 250Mi
            limits:
              memory: "2Gi"
              cpu: "500m"
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
EXPOSE 8000

# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1

# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt

WORKDIR /app
COPY . .

# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Name: mywebsite
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mywebsite
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.161.241
IPs: 10.99.161.241
Port: http 8743/TCP
TargetPort: 5000/TCP
NodePort: http 32697/TCP
Endpoints: 172.17.0.3:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
It's because your container is running on port 8000, but your service is forwarding traffic to port 5000.
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: **8000**
The deployment should be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: **8000**
          resources:
            requests:
              cpu: 100m
              memory: 250Mi
            limits:
              memory: "2Gi"
              cpu: "500m"
You need to change the targetPort in the Service and the containerPort in the Deployment.
Alternatively, change EXPOSE 8000 to 5000 in the Dockerfile and run the application on port 5000:
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]
Don't forget to run docker build again after the above changes.
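The port chain described above can be sketched as a simple consistency check: traffic only reaches the app when the Service targetPort, the containerPort, and the port the app actually listens on all agree. (The helper below is illustrative; the numbers are the ones from the question.)

```python
def port_chain_ok(app_port, container_port, target_port):
    """Traffic reaches the app only if the Service targetPort points
    at the containerPort and the app is actually listening there."""
    return app_port == container_port == target_port

# As posted: the Dockerfile runs the app on 8000, the manifests say 5000.
print(port_chain_ok(app_port=8000, container_port=5000, target_port=5000))  # False
# After the fix: everything aligned on 8000.
print(port_chain_ok(app_port=8000, container_port=8000, target_port=8000))  # True
```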
I'm trying to deploy a Vue.js app on K8s. The frontend was wrapped into an image with Nginx as a static handler. This configuration works when I access the service directly via cluster IP and NodePort, but it does not work with ingress. Please tell me what I'm doing wrong.
Frontend image
FROM node:latest as build-stage
WORKDIR /app
COPY . ./
RUN npm install
RUN npm run build
FROM nginx as production-stage
RUN mkdir /client
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vue-app-deployment
  labels:
    app: vue-app
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: vue-app
  template:
    metadata:
      labels:
        pod: vue-app
    spec:
      containers:
        - name: vue-app
          image: frontend-image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
here is my service
apiVersion: v1
kind: Service
metadata:
  name: vue-app-service
spec:
  selector:
    pod: vue-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
and ingress
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my-shiny-site.com
      http:
        paths:
          - path: /
            backend:
              serviceName: vue-app-service
              servicePort: 80
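As an aside, the `networking.k8s.io/v1beta1` backend shape used above was replaced in `networking.k8s.io/v1`: `serviceName`/`servicePort` moved under `service.name` and `service.port.number`. A small sketch of the field mapping (as plain dicts, not a full manifest converter):

```python
def convert_backend(v1beta1_backend):
    """Map a v1beta1 ingress backend dict to the networking.k8s.io/v1 shape."""
    return {
        "service": {
            "name": v1beta1_backend["serviceName"],
            "port": {"number": v1beta1_backend["servicePort"]},
        }
    }

old = {"serviceName": "vue-app-service", "servicePort": 80}
print(convert_backend(old))
# {'service': {'name': 'vue-app-service', 'port': {'number': 80}}}
```

On clusters where v1beta1 has been removed, the ingress above would need rewriting into that v1 shape (plus a `pathType` field on each path).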