Kubernetes not creating docker container - docker

I am really having trouble debugging this and could use some help. I am successfully starting a Kubernetes service and deployment using a working Docker image.
My service file:
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
  labels:
    app: auth_v1
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
  selector:
    app: auth_v1
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deploy
  labels:
    app: auth_v1
spec:
  revisionHistoryLimit: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  selector:
    matchLabels:
      app: auth_v1
  template:
    metadata:
      labels:
        app: auth_v1
    spec:
      containers:
      - name: auth-pod
        image: index.docker.io/XXX/auth
        command: [ "yarn", "start-staging" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
kubectl get pods shows that the pods are up and running. I have tested jumping into the pod/container with a shell and tried running my application, and it works. When I run kubectl describe deployment auth-deploy I see a container listed as auth-pod. However, I am not seeing any containers when I run docker ps or docker ps -a. Also, the logs for my pods show nothing. Is there something I am doing wrong?
For reference, here is my Dockerfile:
FROM node:8.11.2-alpine AS build
LABEL maintainer="info@XXX.com"
# Copy Root Dir & Set Working Dir
COPY . /src
WORKDIR /src
# Build & Start Our App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
RUN yarn build-staging
# Build Production Image Using Node Container
FROM node:8.11.2-alpine AS production
# Copy Build to Image
COPY --from=build /src/.next /src/.next/
COPY --from=build /src/production-server /src/production-server/
COPY --from=build /src/static /src/static/
COPY --from=build /src/package.json /src
WORKDIR /src
# Install Essential Packages & Start App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
# Expose Ports Needed
EXPOSE 3000
VOLUME [ "/src/log" ]
# Start App
CMD [ "yarn", "start-staging" ]

Is it possible that you are running docker ps on the K8s master instead of the node where the pods are located?
You can find out where your pods are running by running the command below:
$ kubectl describe pod auth-deploy
It should return something similar to below (in my case it's a percona workload):
$ kubectl describe pod percona
Name: percona-b98f87dbd-svq64
Namespace: default
Node: ip-xxx-xx-x-xxx.us-west-2.compute.internal/xxx.xx.x.xxx
Get the IP, SSH into the node, and run docker ps locally on the node where your container is located:
$ docker ps | grep percona
010f3d529c55 percona "docker-entrypoint.s…" 7 minutes ago Up 7 minutes k8s_percona_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
616d70e010bc k8s.gcr.io/pause-amd64:3.1 "/pause" 8 minutes ago Up 7 minutes k8s_POD_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
Another possibility is that you might be using a different container runtime, such as rkt, containerd, or lxd, instead of Docker.
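If in doubt, you can ask the nodes which runtime they report; a quick check, assuming a reasonably recent kubectl:
kubectl get nodes -o wide    # the CONTAINER-RUNTIME column shows e.g. docker://18.6.1 or containerd://1.6.x
kubectl get node <nodename> -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'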

Kubernetes pods are made of grouped containers running on dedicated nodes. Kubernetes decides where to create the pods and manages their lifecycle.
A Kubernetes cluster consists of worker nodes and the master server. The master server is able to connect to the nodes, create containers, and bind them into pods. The master node is designed to run only management components: kubectl, the cluster state database etcd, and the other daemons required to keep the cluster up and running. On the master, therefore,
docker ps
shows nothing in this case.
To get the list of running pods:
kubectl get pods
You can then attach to a pod already running on a node:
kubectl attach -i <podname>
Back to your question.
If you are interested in how Kubernetes works with the containers, including your application image and the Kubernetes infrastructure containers,
you have to obtain the node's IP address first:
kubectl describe pod <podname> | grep ^Node:
or by:
kubectl get pods -o wide
Next, connect to the node via SSH and run:
docker ps
You will see the containers there, including the one you are looking for.
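Putting the two steps together, a minimal sketch (assuming SSH access to the nodes, a Docker-based runtime, and placeholder names for the pod and the SSH user):
NODE=$(kubectl get pod <podname> -o jsonpath='{.spec.nodeName}')   # node hosting the pod
ssh admin@"$NODE" docker ps                                        # list containers on that node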


Local Kubernetes: docker pull from local image fails

My story is:
1. I created a spring-boot project with a Dockerfile inside.
2. I successfully created the Docker image locally with the above Dockerfile.
3. I have minikube running a local K8s cluster.
4. However, when I try to apply the k8s.yaml, it tells me that there is no such Docker image. Apparently it searches the public Docker Hub, so what can I do?
Below is my Dockerfile:
FROM openjdk:17-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
Below is my k8s.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pkslow-springboot-deployment
spec:
  selector:
    matchLabels:
      app: springboot
  replicas: 2
  template:
    metadata:
      labels:
        app: springboot
    spec:
      containers:
      - name: springboot
        image: cicdstudy/apptodocker:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: springboot
  name: pkslow-springboot-service
spec:
  ports:
  - port: 8080
    name: springboot-service
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    app: springboot
  type: NodePort
Kubernetes has no centralized, built-in container image registry.
Depending on the container runtime on your K8s cluster nodes, it will usually try Docker Hub first when pulling images.
Since anonymous pulls are now rate-limited by Docker Hub, it is advisable to create an account for development purposes. You get one private repository and unlimited public repositories, which means that whatever you push to a public repository can be accessed by anybody.
If intellectual property is not much of a concern, you can keep using that free account for development. But when going to production, you should replace it with a service/robot account.
Create an account on Docker Hub: https://id.docker.com/login/
Log in to your Docker Hub account on the machine where you are building your container image:
docker login --username=yourhubusername --email=youremail@company.com
Build, re-tag, and push your image once more (from the folder where the Dockerfile resides):
docker build -t mysuperimage:v1 .
docker tag mysuperimage:v1 yourhubusername/mysuperimage:v1
docker push yourhubusername/mysuperimage:v1
Create a secret for image registry credentials
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username= --docker-password= --docker-email=
Create a service account for deployment
kubectl create serviceaccount yoursupersa
Attach the secret to the service account named "yoursupersa" (the name must match the secret created above):
kubectl patch serviceaccount yoursupersa -p '{"imagePullSecrets": [{"name": "regcred"}]}'
Now create your application as a Deployment resource in K8s:
kubectl create deployment mysuperapp --image=yourhubusername/mysuperimage:v1 --port=8080
Then patch your deployment with the service account that has the registry credentials attached (this will trigger a re-deployment):
kubectl patch deployment mysuperapp -p '{"spec":{"template":{"spec":{"serviceAccountName":"yoursupersa"}}}}'
The last step is to expose your service:
kubectl expose deployment/mysuperapp
Then everything is awesome! :)
If you just want to be able to use images from your local computer with minikube, you can run eval $(minikube docker-env). This points your docker CLI at the Docker daemon inside minikube, so images you build land directly on the cluster node and Kubernetes finds them locally instead of pulling from Docker Hub.
more information can be found here
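A minimal sketch of that workflow (the image name myapp:v1 is hypothetical):
eval $(minikube docker-env)    # point the docker CLI at the Docker daemon inside minikube
docker build -t myapp:v1 .     # the image is now present on the minikube node
# in the pod spec, set imagePullPolicy: IfNotPresent (or Never) so the cached image is used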

Kubernetes pod status ImagePullBackOff

I am trying to create a pod based on a container image from my local machine, not from a public registry. The status of the pod is reported as ImagePullBackOff.
Docker file
FROM tensorflow/tensorflow:latest-py3
RUN pip install -q keras==2.3.1
RUN pip install pillow
RUN mkdir -p /app/src
WORKDIR /app/src
COPY . ./
EXPOSE 31700
CMD ["python", "test.py"]
To build the Docker image:
docker build -t tensor-keras .
To create a pod without using a YAML file:
kubectl run server --image=tensor-keras:latest
YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    ports:
    - containerPort: 31700
The status of the pod is:
NAME     READY   STATUS             RESTARTS   AGE
server   0/1     ImagePullBackOff   0          27m
Help is highly appreciated, thanks.
By default, Kubernetes will try to pull your image from a remote container registry. In your case, your image name is not prefixed with a container registry URL, so the default one is used; most of the time that is Docker Hub.
What is the value of the imagePullPolicy field? For your use case it should be set to Never to use the local image.
Which tool are you using to run your Kubernetes instance?
For example, with minikube, the procedure to use a local image is described here: https://stackoverflow.com/a/42564211/2784039
With kind, you should use the command kind load docker-image tensor-keras:latest to load the image inside your cluster.
With k3s, using a local image should work out of the box if imagePullPolicy is set to Never.
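For the pod above, a minimal sketch of the fix, assuming the image was built on the node itself; the only addition is the imagePullPolicy line:
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    imagePullPolicy: Never    # do not contact a registry; use the image already on the node
    ports:
    - containerPort: 31700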

How to deploy the dist folder of npm run build in Kubernetes?

I have a node.js application that I am trying to deploy to Kubernetes. To run this normally on my machine without Kubernetes, I would run the commands npm install and npm run build and then serve the "dist" folder. Normally I would install npm's serve using "npm install -g serve" and then run "serve -s dist". This works fine. But now, to deploy to Kubernetes for production, how can I create my image? I mean, how should the Dockerfile for this look?
Note: I don't want to use nginx, apache, or any other kind of web server. I want to do this using node/npm's serve for serving the dist folder. Please help.
Dockerfile (what I have tried):
FROM node:8
WORKDIR /usr/src/app
COPY /dist
RUN npm install -g serve
serve -s dist
I am not sure if this Dockerfile is right, so I need guidance on how to correctly create an image to serve the dist folder of npm run build. Please help.
You can find tons of tutorials around the web about integrating web applications into a Kubernetes cluster and exposing them to visitors.
An application containerized for Docker has to be built into an image from a Dockerfile (or with the Docker Compose tool) so that all of the application's service dependencies are preserved. When the image is ready, it can be stored on public Docker Hub or in an isolated private registry; the Kubernetes container runtime then pulls this image and creates the appropriate workloads (Pods) within the cluster according to the declared resource model.
I would recommend the following scenario:
Build a Docker image from your initial Dockerfile (I've made some corrections):
FROM node:8
WORKDIR /usr/src/app
COPY dist/ ./dist/
RUN npm install -g serve
$ sudo docker image build <PATH>
Create tag related to the source image:
$ sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Push the image to Docker Hub or some private registry:
$ sudo docker push [OPTIONS] NAME[:TAG]
Create the relevant Kubernetes workload (Pod) and apply it in the Kubernetes cluster, starting the Node server inside the container listening on port 5000:
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    node: test
spec:
  containers:
  - name: node-test
    image: TARGET_IMAGE[:TAG]
    ports:
    - containerPort: 5000
    command: [ "/bin/bash", "-ce", "serve -s dist" ]
If you are considering exposing the application to external cluster clients, then look at the NodePort service type:
$ kubectl expose po nodetest --port=5000 --target-port=5000 --type=NodePort
Update_1:
The application service might then be reachable on the host machine on some specific port; you can simply retrieve this port value:
kubectl get svc nodetest -o jsonpath='{.spec.ports[0].nodePort}'
Update_2:
In order to expose the NodePort service on a desired port, just apply the following manifest, which pins the assignment to port 30000:
apiVersion: v1
kind: Service
metadata:
  labels:
    node: test
  name: nodetest
spec:
  ports:
  - nodePort: 30000
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    node: test
  type: NodePort
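Once applied, the app should answer on port 30000 of any node; a quick check from outside the cluster (the node IP is a placeholder):
kubectl get nodes -o wide      # note the node's INTERNAL-IP or EXTERNAL-IP
curl http://<node-ip>:30000/   # reaches the serve process on port 5000 inside the pod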

Kubernetes equivalent of 'docker run -it'

I have one Docker image and I am using the following command to run it:
docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest
I want to run the same in Kubernetes. This is my current YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        ports:
        - containerPort: 1976
        imagePullPolicy: Never
This YAML file covers everything except the "-it" flags. I am not able to find their Kubernetes equivalent. Please help me out with this. Thanks.
I assume you are trying to connect a shell to your running container. Following the guide at https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/, you would need the following commands to apply your above configuration:
Apply the manifest: kubectl apply -f ./demo-deployment.yaml
Verify the pod is running: kubectl get pods
Get a shell into the running container: kubectl exec -it <pod-name> -- /bin/bash
Looking at the Container definition in the API reference, the equivalent options are stdin: true and tty: true.
(None of the applications I work on have ever needed this; the documentation for stdin: talks about "reads from stdin in the container" and the typical sort of server-type processes you'd run in a Deployment don't read from stdin at all.)
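As a sketch, the container spec in your Deployment would gain two lines (everything else unchanged):
containers:
- name: demo
  image: demo.docker.cloud.com/demo/runtime:latest
  stdin: true   # counterpart of docker run -i
  tty: true     # counterpart of docker run -t
  ports:
  - containerPort: 1976
You could then attach to the running pod with kubectl attach -it <pod-name>.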
kubectl run is the closest match to docker run for the requested scenario.
Some examples from the Kubernetes documentation and their purpose:
kubectl run -i --tty busybox --image=busybox -- sh                    # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace                        # Run pod nginx in a specific namespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml   # Write the pod spec into a file called pod.yaml

Kubernetes Pod stuck in CrashLoopBackOff

Dockerfile
FROM ubuntu
MAINTAINER user@gmail.com
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
ADD build/libs/micro-service-gradle-0.0.1-SNAPSHOT.jar /var/local/
ENTRYPOINT exec java $JAVA_OPTS \
-jar /var/local/micro-service-gradle-0.0.1-SNAPSHOT.jar
EXPOSE 8080
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-service-gradle
  labels:
    app: micro-service-gradle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro-service-gradle
  template:
    metadata:
      labels:
        app: micro-service-gradle
    spec:
      containers:
      - name: micro-service-gradle
        image: micro-service-gradle:latest
        ports:
        - containerPort: 8080
I am deploying a Spring Boot application in Kubernetes. The pod is not getting created; when I check kubectl get pods, it says CrashLoopBackOff.
NAME                                  READY   STATUS             RESTARTS   AGE
micro-service-gradle-fc97c97b-8hwhg   0/1     CrashLoopBackOff   6          6m23s
I tried to check the logs for the same container; the logs are empty:
kubectl logs -p micro-service-gradle-fc97c97b-8hwhg
I created the container manually using docker run; there are no issues with the image and the container works fine. How can I find out why the pod is in a crash status?
You need to use
kubectl describe pod micro-service-gradle-fc97c97b-8hwhg
to get the relevant events and container state. This should guide you to your problem.
I ran into a similar issue. When I ran
kubectl describe pod <podname>
and read the events, the image had been pulled, but the message output was 'restarting failed container'.
The pod was crashing because it was not performing any long-running task: the process exited immediately, so Kubernetes kept restarting the container. To keep it running, I added a sleep command, based on a similar example in the docs:
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
This SO answer also mentions that running an infinite loop could solve the problem:
https://stackoverflow.com/a/55610769/7128032
Your deployment resource looks OK. As you are able to create the container manually using docker run, the problem is likely with the connection to the image registry. Set up the image pull secret and you should be able to create the pod.
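For reference, a minimal sketch of wiring a pull secret into the pod template; the secret name regcred is an assumption, created beforehand with kubectl create secret docker-registry:
    spec:
      imagePullSecrets:
      - name: regcred        # hypothetical secret holding the registry credentials
      containers:
      - name: micro-service-gradle
        image: micro-service-gradle:latest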
I faced a similar issue. Verify that your container is able to run continuously: you have to run the process in the foreground to keep the container running.
The possible reasons for such an error are:
the application inside your pod is not starting due to an error;
the image your pod is based on is not present in the registry, or the node where your pod has been scheduled cannot pull from the registry;
some parameters of the pod have not been configured correctly.
You can view what is happening by checking the events:
kubectl get events
or checking pod status:
kubectl describe po mypod-390jo50wn3-sp40r
Full explanation here: https://pillsfromtheweb.blogspot.com/2020/05/troubleshooting-kubernetes.html
I had the same problem, but using this worked for me:
image: Image:latest
command: [ "sleep" ]
args: [ "infinity" ]
