I have a Node.js application that I am trying to deploy to Kubernetes. To run it normally on my machine without Kubernetes, I would run the commands npm install and npm run build and then serve the "dist" folder. Normally I would install npm's serve package using "npm install -g serve" and then run "serve -s dist". This works fine. But now, to deploy to Kubernetes for production, how can I create my image? In other words, what should the Dockerfile for this look like?
Note: I don't want to use nginx, Apache, or any other kind of web server. I want to do this using node/npm's serve for serving the dist folder. Please help.
Dockerfile (what I have tried):
FROM node:8
WORKDIR /usr/src/app
COPY /dist
RUN npm install -g serve
serve -s dist
I am not sure if this Dockerfile is right, so I need guidance on how to correctly create an image that serves the dist folder produced by npm run build. Please help.
You can find plenty of tutorials about integrating custom web applications into a Kubernetes cluster and exposing them to visitors.
An application containerized in a Docker environment has to be packaged into an image, built from a Dockerfile or with the Docker Compose tool, so that all of the application's service dependencies are preserved. When the image is ready, it can be stored on public Docker Hub or in an isolated private registry; the Kubernetes container runtime then pulls this image and creates the appropriate workloads (Pods) within the cluster according to the declared resource model.
I would recommend the following scenario:
Build a Docker image from your initial Dockerfile (I've made some corrections):
FROM node:8
WORKDIR /usr/src/app
COPY dist/ ./dist/
RUN npm install -g serve
$ sudo docker image build <PATH>
Create a tag for the source image:
$ sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Push the image to Docker Hub or a private registry:
$ sudo docker push [OPTIONS] NAME[:TAG]
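For example, a minimal sketch with a hypothetical Docker Hub account "myuser" and image name "node-serve" (building with -t combines the build and tag steps):
$ sudo docker image build -t myuser/node-serve:1.0 .
$ sudo docker push myuser/node-serve:1.0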
Create the relevant Kubernetes workload (Pod) and apply it to the Kubernetes cluster, starting the Node server inside the container, listening on port 5000:
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    node: test
spec:
  containers:
  - name: node-test
    image: TARGET_IMAGE[:TAG]
    ports:
    - containerPort: 5000
    command: [ "/bin/bash", "-ce", "serve -s dist" ]
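Assuming the manifest above is saved as nodetest-pod.yaml (a file name of my choosing), it can be applied and checked like this:
kubectl apply -f nodetest-pod.yaml
kubectl get pod nodetest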
If you consider exposing the application to external cluster clients, then look at the NodePort Service type:
$ kubectl expose po nodetest --port=5000 --target-port=5000 --type=NodePort
Update_1:
The application's Service will then be reachable on the host machine on some specific port; you can retrieve this port value with:
kubectl get svc nodetest -o jsonpath='{.spec.ports[0].nodePort}'
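As a sketch, assuming a single-node cluster whose node InternalIP is reachable from your machine, the application could then be reached like this:
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc nodetest -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT/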
Update_2:
In order to expose the NodePort Service on a desired port, just apply the following manifest, which pins the node port to 30000:
apiVersion: v1
kind: Service
metadata:
  labels:
    node: test
  name: nodetest
spec:
  ports:
  - nodePort: 30000
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    node: test
  type: NodePort
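Assuming this manifest is saved as nodetest-svc.yaml (again a file name of my choosing), applying it and hitting the fixed node port might look like this, where <NODE_IP> is the address of one of your nodes:
kubectl apply -f nodetest-svc.yaml
curl http://<NODE_IP>:30000/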
Related
I am new to containerization and am having some difficulties. I have an application that consists of a React frontend, a Python backend using FastAPI, and PostgreSQL databases using SQLAlchemy for object-relational mapping. I decided to put each component inside a Docker container so that I can deploy the application on Azure in the future (I know that some people may have strong opinions on deploying the frontend and database in containers, but I am doing so because it is required by the project's requirements).
After doing this, I started working with Minikube. However, I am running into a problem where all the pods that should be running my containers have the status "CrashLoopBackOff". From what I can tell, this means that the images are pulled from Docker Hub and the containers are started, but they then fail for some reason.
I tried running "kubectl logs" and nothing is returned. The "kubectl describe" command, in the Events section, returns: "Warning BackOff 30s (x140 over 30m) kubelet Back-off restarting failed container."
I have also tried to minimize the complexity by just trying to run the frontend component. Here are my Dockerfile and manifest file:
Dockerfile:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
manifest file .yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: xxxtest/my-first-repo:yyy-frontend
        ports:
        - containerPort: 3000
I do not have a service manifest yet, and I don't think it is related to this issue.
Can anyone provide any help or tips on how to troubleshoot this issue? I would really appreciate any guidance you can offer. Thank you in advance for your time and assistance!
Have a great day!
CrashLoopBackOff is related to an error inside the container. To fix it, you need to look at the container logs; these are my tips (see also the command sketch after these tips):
The best practice in K8s is to send application logs to /dev/stdout or /dev/stderr; redirecting them to a file is not recommended, because logging to stdout/stderr is what lets you use kubectl logs <POD NAME>.
Try clearing your local container cache, then pull and run the exact image and tag configured in your deployment file.
If you need any environment variables to run the container locally, you'll also need those variables in your deployment file.
Always set imagePullPolicy: Always, especially if you are reusing the same image tag. EDIT: the default image pull policy is IfNotPresent, so if you fix the container image but keep the same tag, Kubernetes will not pull the new image version.
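As a minimal sketch, assuming the pod is named frontend-xxxx (a placeholder), these commands usually surface the failure reason:
# events and state of the pod, including the last termination reason
kubectl describe pod frontend-xxxx
# logs of the previous (crashed) container instance
kubectl logs frontend-xxxx --previous
# cluster events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp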
Docs:
ImagePullPolicy: https://kubernetes.io/docs/concepts/containers/images/
Standard Output: https://kubernetes.io/docs/concepts/cluster-administration/logging/
I'm trying to deploy a Flask Python API to Kubernetes (EKS). I've got the Dockerfile set up, but some weird things are going on.
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 43594
ENTRYPOINT ["python3"]
CMD ["app.py"]
I build the image running docker build -t store-api ..
When I try running the container and hitting an endpoint, I get "socket hung up". However, if I run the image with
docker run -d -p 43594:43594 store-api
I can successfully hit the endpoint with a response.
My hunch is the port mapping.
Now, having said all that, when I run the image in a Kubernetes pod I cannot get anything back from the endpoint; I just get "socket hung up".
My question is, how do I explicitly add port mapping to my Kubernetes deployment/service?
Part of the Deployment.yaml:
spec:
  containers:
  - image: store-api
    name: store-api
    ports:
    - containerPort: 43594
    resources: {}
    volumeMounts:
    - mountPath: /usr/local/bin/wait-for-it.sh
      name: store-api-claim0
    imagePullPolicy: Always
Service.yaml:
spec:
  type: LoadBalancer
  ports:
  - port: 43594
    protocol: TCP
    targetPort: 43594
  selector:
    app: store-api
status:
  loadBalancer: {}
If I port forward using kubectl port-forward deployment/store-api 43594:43594 and post the request to localhost:43594/ it works fine.
This is a community wiki answer posted for better visibility. Feel free to expand it.
Problem
The output of the kubectl describe service <name_of_the_service> command contains Endpoints: <none>
Some theory
From Kubernetes Glossary:
Service
An abstract way to expose an application running on a set of Pods as a
network service. The set of Pods targeted by a Service is
(usually) determined by a selector. If more Pods are added or removed,
the set of Pods matching the selector will change. The Service makes
sure that network traffic can be directed to the current set of Pods
for the workload.
Endpoints
Endpoints track the IP addresses of Pods with matching selectors.
Selector:
Allows users to filter a list of resources based on labels. Selectors are applied when querying lists of resources to filter them by labels.
Solution
Labels in spec.template.metadata.labels of the Deployment should be the same as in spec.selector from the Service.
Additional information related to such issue can be found at Kubernetes site:
If the ENDPOINTS column is <none>, you should check that the
spec.selector field of your Service actually selects for
metadata.labels values on your Pods.
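Applied to the store-api example above, that means the Deployment's pod template labels must match the Service's selector. A minimal sketch (label value taken from the Service.yaml in the question):
# Deployment fragment
spec:
  template:
    metadata:
      labels:
        app: store-api   # these labels ...
---
# Service fragment
spec:
  selector:
    app: store-api       # ... must match this selector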
I am trying to create a pod based on a container image from my local machine, not from a public registry. I am retrieving the pod status as ImagePullBackOff.
Dockerfile:
FROM tensorflow/tensorflow:latest-py3
RUN pip install -q keras==2.3.1
RUN pip install pillow
RUN mkdir -p /app/src
WORKDIR /app/src
COPY . ./
EXPOSE 31700
CMD ["python", "test.py"]
To build the Docker image:
docker build -t tensor-keras .
To create a pod without using a YAML file:
kubectl run server --image=tensor-keras:latest
YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    ports:
    - containerPort: 31700
I am retrieving the status of the pod as:
NAME READY STATUS RESTARTS AGE
server 0/1 ImagePullBackOff 0 27m
Help is highly appreciated thanks
By default, Kubernetes will try to pull your image from a remote container registry. In your case, your image name is not prefixed with a registry URL, so the default one is used, which most of the time is Docker Hub.
What is the value of the imagePullPolicy field? For your use case it should be set to Never in order to use a local image (a manifest sketch is shown after the list below).
Which tool are you using to run your Kubernetes instance?
For example, with minikube, procedure to use a local image is described here: https://stackoverflow.com/a/42564211/2784039
With kind, you should use the command kind load docker-image tensor-keras:latest to load the image into your cluster.
With k3s, using a local image should work out of the box if imagePullPolicy is set to Never.
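As a sketch, based on the pod manifest from the question, the relevant change is adding imagePullPolicy: Never to the container:
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    imagePullPolicy: Never   # use only the locally built image, never pull from a registry
    ports:
    - containerPort: 31700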
I am really having trouble debugging this and could use some help. I am successfully starting a Kubernetes Service and Deployment using a working Docker image.
My service file:
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
  labels:
    app: auth_v1
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
  selector:
    app: auth_v1
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deploy
  labels:
    app: auth_v1
spec:
  revisionHistoryLimit: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  selector:
    matchLabels:
      app: auth_v1
  template:
    metadata:
      labels:
        app: auth_v1
    spec:
      containers:
      - name: auth-pod
        image: index.docker.io/XXX/auth
        command: [ "yarn", "start-staging" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
kubectl get pods shows that the pods are up and running. I have tested jumping into the pod/container with a shell and running my application, and it works. When I run kubectl describe auth-deploy I see a container listed as auth-pod. However, I am not seeing any containers when I run docker ps or docker ps -a. Also, the logs for my pods show nothing. Is there something I am doing wrong?
For reference, here is my Dockerfile:
FROM node:8.11.2-alpine AS build
LABEL maintainer="info#XXX.com"
# Copy Root Dir & Set Working Dir
COPY . /src
WORKDIR /src
# Build & Start Our App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
RUN yarn build-staging
# Build Production Image Using Node Container
FROM node:8.11.2-alpine AS production
# Copy Build to Image
COPY --from=build /src/.next /src/.next/
COPY --from=build /src/production-server /src/production-server/
COPY --from=build /src/static /src/static/
COPY --from=build /src/package.json /src
WORKDIR /src
# Install Essential Packages & Start App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
# Expose Ports Needed
EXPOSE 3000
VOLUME [ "/src/log" ]
# Start App
CMD [ "yarn", "start-staging" ]
Is it possible that you are running docker ps on the K8s-master instead of where the pods are located?
You can find out where your pods are running by running the command below:
$ kubectl describe pod auth-deploy
It should return something similar to below (in my case it's a percona workload):
$ kubectl describe pod percona
Name: percona-b98f87dbd-svq64
Namespace: default
Node: ip-xxx-xx-x-xxx.us-west-2.compute.internal/xxx.xx.x.xxx
Get the IP, SSH into the node, and run docker ps locally on the node where your container is located.
$ docker ps | grep percona
010f3d529c55 percona "docker-entrypoint.s…" 7 minutes ago Up 7 minutes k8s_percona_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
616d70e010bc k8s.gcr.io/pause-amd64:3.1 "/pause" 8 minutes ago Up 7 minutes k8s_POD_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
Another possibility is that you might be using a different container runtime, such as rkt, containerd, or LXD, instead of Docker.
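If the node uses a CRI runtime such as containerd instead of Docker, docker ps will show nothing even on the right node. As a sketch, assuming crictl is installed on the node, the equivalent check would be:
crictl ps | grep percona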
Kubernetes pods are groups of containers running on a dedicated node.
Kubernetes decides where to create pods and manages their lifecycle.
A Kubernetes cluster consists of worker nodes and the master server.
The master server is able to connect to nodes, create containers, and bundle them into pods. The master node is designed to run only management components, such as the kubectl command, the cluster state database etcd, and the other daemons required to keep the cluster up and running.
docker ps
shows nothing in this case.
To get the list of running pods:
kubectl get pods
You can then attach to a pod already running on a node:
kubectl attach -i <podname>
Back to your question.
If you are interested in how Kubernetes works with containers, including your application image and the Kubernetes infrastructure containers, you first have to obtain the node's IP address:
kubectl describe pod <podname> | grep ^Node:
or by:
kubectl get pods -o wide
Next, connect to the node via SSH and run:
docker ps
You will see the containers, including the one you are looking for.
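Putting the steps together, a minimal sketch (the pod name and SSH user are placeholders):
NODE=$(kubectl get pod <podname> -o jsonpath='{.spec.nodeName}')
ssh <user>@$NODE 'docker ps | grep <podname>'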
Hi, I am running a Kubernetes cluster where I run a Logstash container.
But I need to run it with my own docker run parameter. If I were running it directly in Docker, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via Kubernetes pod. My pod looks like:
spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f", "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
How can I achieve running the Docker container with the --log-driver=gelf parameter via Kubernetes? Thanks.
Kubernetes does not expose Docker-specific options such as --log-driver. A higher-level abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for the Docker daemon in the per-node configuration/Salt template.
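As a sketch of that per-node approach (the GELF endpoint is a placeholder of my own), the Docker daemon's default log driver can be set in /etc/docker/daemon.json on each node and the daemon restarted:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://<logstash-host>:12201"
  }
}
followed by:
sudo systemctl restart docker
Keep in mind this changes the driver for every container on that node, and kubectl logs may stop working because it relies on the json-file (or journald) driver's output.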