I am trying to run an image on Kubernetes built from the Dockerfile below:
FROM centos:6.9
COPY rpms/* /tmp/
RUN yum -y localinstall /tmp/*
ENTRYPOINT service test start && /bin/bash
Now when I try to deploy this image using the pod.yml shown below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: testpod
  name: testpod
spec:
  containers:
  - image: test:v0.2
    name: test
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /data
      name: testpod
  volumes:
  - name: testpod
    persistentVolumeClaim:
      claimName: testpod
the pod goes into CrashLoopBackOff. How can I make the container wait in /bin/bash on Kubernetes? When I use docker run -d test:v0.2 it works fine and keeps running.
You need to attach a terminal to the running container. When starting a pod using kubectl run ... you can pass -i --tty to do that. In the pod YAML file, you can add the following to the container spec to attach a TTY:
stdin: true
tty: true
Alternatively, you can run a command like tail -f /dev/null to keep your container up indefinitely; this can be done in your Dockerfile or in your Kubernetes YAML file.
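A minimal sketch of the container spec from the pod above, showing both approaches; the commented-out command is an assumption about how the service from the Dockerfile would be started, since overriding command replaces the image's ENTRYPOINT:
spec:
  containers:
  - image: test:v0.2
    name: test
    # Keep /bin/bash attached to a terminal so it does not exit
    stdin: true
    tty: true
    # Alternative: override the entrypoint with a command that never exits.
    # This replaces the image's ENTRYPOINT, so the service would need to be
    # started here as well.
    # command: ["/bin/sh", "-c", "service test start && tail -f /dev/null"]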
I have a pod running Linux that I have let others use. Now I need to save the changes they made. Since I sometimes need to delete/restart the pod, the changes are reverted and a new pod gets created. So I want to save the pod's container as a Docker image and use that image to create a pod.
I have tried kubectl debug node/pool-89899hhdyhd-bygy -it --image=ubuntu and then installing docker and dockerd inside, but they don't have root permission to perform operations. I installed crictl, which listed the containers, but it has no option to save them.
I also created a privileged Docker image, created a pod from it, then used the command kubectl exec --stdin --tty app-7ff786bc77-d5dhg -- /bin/sh and tried to get the running containers, but nothing was listed. Below is the deployment I used for the privileged Docker container:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: app
  labels:
    app: backend-app
    backend-app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
      task: app
  template:
    metadata:
      labels:
        app: backend-app
        task: app
    spec:
      nodeSelector:
        kubernetes.io/hostname: pool-58i9au7bq-mgs6d
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /run/docker.sock
          type: Socket
      containers:
      - name: app
        image: registry.digitalocean.com/my_registry/docker_app@sha256:b95016bd9653631277455466b2f60f5dc027f0963633881b5d9b9e2304c57098
        ports:
        - containerPort: 80
        volumeMounts:
        - name: task-pv-storage
          mountPath: /var/run/docker.sock
Is there any way I can achieve this, i.e. take the pod's container and save it as a Docker image? I am using DigitalOcean to run my Kubernetes apps, and I do not have SSH access to the nodes.
This is not a feature of Kubernetes or CRI. Docker does support snapshotting a running container to an image (docker commit); however, Kubernetes no longer supports Docker as a container runtime.
Thank you all for your help and suggestions. I found a way to achieve it using the tool nerdctl - https://github.com/containerd/nerdctl.
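A rough sketch of what that can look like with nerdctl, assuming it is run on the node (or from a privileged container) with access to the containerd socket; the container ID and image name below are placeholders:
# List the containers kubelet created via containerd (k8s.io namespace)
nerdctl --namespace k8s.io ps

# Commit the running container to a new image
nerdctl --namespace k8s.io commit <container-id> my-registry/my-app:snapshot

# Export the image as a tarball so it can be loaded or pushed elsewhere
nerdctl --namespace k8s.io save -o my-app-snapshot.tar my-registry/my-app:snapshot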
In the Docker world, we can build a pure data image and use --volumes-from to connect it to a container. How does this work in Kubernetes?
The Docker image could contain just HTML files, like:
FROM scratch
COPY html /www
How can I mount it to the nginx pod?
BTW: I know I could rebuild the pure data image on top of busybox and copy the data out using initContainers, which makes the image about 1 MB bigger; here I am trying to see whether the pure data image approach is possible in the Kubernetes world.
Unlike Docker's named volumes, volume mounts in Kubernetes never copy anything into the volume. You occasionally see tricks with Docker Compose setups where a Docker named volume is mounted over two containers, with the expectation that static files will be copied into the volume from one of them to be served by the other; this just doesn't work in Kubernetes unless you copy the files yourself.
For the setup you show, you have a collection of files you want to serve, and you want to have the standard nginx image serve them. Instead of trying to copy files between images, you can have a unified image that starts FROM nginx and contains your files:
FROM nginx
COPY html /usr/share/nginx/html
# Base image provides a suitable default CMD and other setup
You don't need any sort of volume to run this. Just specify it as the image: in your Deployment spec, and all of the files to be served are already compiled into the image.
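A minimal sketch of such a Deployment, assuming the unified image has been pushed as my-registry/static-site:v1 (a hypothetical name):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-site
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-site
  template:
    metadata:
      labels:
        app: static-site
    spec:
      containers:
      - name: nginx
        # Image built FROM nginx with the html/ directory copied in
        image: my-registry/static-site:v1
        ports:
        - containerPort: 80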
You would use volumes and volumeMounts for this.
UPD:
html file to mount:
$ cat index.html
<h1>HELLO</h1>
Create configMap with the content of the file:
$ kubectl create configmap nginx-index-html-configmap --from-file=index.html
configmap/nginx-index-html-configmap created
nginx pod file:
$ cat nginx-with-config.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html/index.html
      name: nginx-config
      subPath: index.html
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-index-html-configmap
Creating the pod:
$ kubectl create -f nginx-with-config.yaml
pod/nginx created
Checking nginx serves the file:
$ kubectl exec -it nginx -- curl 127.0.0.1
<h1>HELLO</h1>
UPD2:
You can have everything in one big happy file, no need to prep anything in advance:
$ cat nginx-with-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-index-html-configmap-2
data:
  index.html: |
    <h1>HELLO 2!</h1>
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html/index.html
      name: nginx-config
      subPath: index.html
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-index-html-configmap-2
$ kubectl apply -f nginx-with-config.yaml
configmap/nginx-index-html-configmap-2 created
pod/nginx2 created
$ kubectl exec -it nginx2 -- curl 127.0.0.1
<h1>HELLO 2!</h1>
I have a container that I need to configure for Kubernetes YAML. The workflow with docker run in the terminal looks like this:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/myoh:v1 init *myproject*
This command creates a directory called myproject. To complete the workflow, I need to cd into this myproject folder and run:
docker run -v $(pwd):/project \
-w /project \
-p 8081:8081 \
gcr.io/base-project/myoh:v1
Any idea how to convert this to either a docker-compose file or a k8s pod/deployment YAML? I have tried everything that came to mind with no success.
The bind mount of the current directory can't be translated to Kubernetes at all. There's no way to connect a pod's filesystem back to your local workstation. A standard Kubernetes setup has a multi-node installation, and if it's possible to directly connect to a node (it may not be) you can't predict which node a pod will run on, and copying code to every node is cumbersome and hard to maintain. If you're using a hosted Kubernetes installation like GKE, it's even possible that the cluster autoscaler will create and delete nodes automatically, and you won't have an opportunity to manually copy things in.
You need to build your application code into a custom image. That can set the desired WORKDIR, COPY the code in, and RUN any setup commands that are required. Then you need to push that to an image repository, like GCR:
docker build -t gcr.io/base-project/my-project:v1 .
docker push gcr.io/base-project/my-project:v1
Once you have that, you can create a minimal Kubernetes Deployment to run it. Set the GCR name of the image you built and pushed as its image:. You will also need a Service to make it accessible, even from other Pods in the same cluster.
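A minimal sketch of such a Service; the name, label, and port here are assumptions and must match whatever your Deployment actually uses:
apiVersion: v1
kind: Service
metadata:
  name: my-project
spec:
  selector:
    app: my-project     # must match the Pod labels in your Deployment
  ports:
  - port: 8081          # port other Pods/clients connect to
    targetPort: 8081    # containerPort of your application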
Try this (untested yaml, but you will get the idea)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myoh-deployment
  labels:
    app: myoh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myoh
  template:
    metadata:
      labels:
        app: myoh
    spec:
      initContainers:
      - name: init-myoh
        image: gcr.io/base-project/myoh:v1
        command: ['sh', '-c', "mkdir -p myproject"]
      containers:
      - name: myoh
        image: gcr.io/base-project/myoh:v1
        ports:
        - containerPort: 8081
        volumeMounts:
        - mountPath: /projects
          name: project-volume
      volumes:
      - name: project-volume
        hostPath:
          # directory location on host
          path: /data
          # this field is optional
          type: Directory
I have a question about sharing a volume between containers in one pod.
Here is my YAML, pod-volume.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina.out*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
Create the pod:
kubectl create -f pod-volume.yaml
Watch the pod status:
watch kubectl get pod -n default
Finally, I got this:
NAME READY STATUS RESTARTS AGE
redis-php 2/2 Running 0 15h
volume-pod 1/2 CrashLoopBackOff 5 6m49s
Then I checked the logs of the busybox container:
kubectl logs pod/volume-pod -c busybox
tail: can't open '/logs/catalina.out*.log': No such file or directory
tail: no files
I don't know where it went wrong. Is this related to the order in which the containers start in the pod? Please help me, thanks.
For this case:
The Catalina log file is named catalina.$(date '+%Y-%m-%d').log, and you should not put a * glob into the path in your shell command.
So please try:
command: ["sh", "-c", "tail -f /logs/catalina.$(date '+%Y-%m-%d').log"]
I'm running a Kubernetes cluster with minikube, and my deployment (or individual Pods) won't stay running even though I specify in the Dockerfile that it should leave a terminal open (I've also tried it with sh). They keep getting restarted, and sometimes they get stuck on a CrashLoopBackOff status before restarting again:
FROM ubuntu
EXPOSE 8080
CMD /bin/bash
My deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleeper-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: sleeper-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: sleeper-world
    spec:
      containers:
      - name: sleeper-pod
        image: kubelab
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
All in all, my workflow is as follows (deploy.sh):
#!/bin/bash
# Cleaning
kubectl delete deployments --all
kubectl delete pods --all
# Building the Image
sudo docker build \
-t kubelab \
.
# Deploying
kubectl apply -f sleeper_deployment.yml
By the way, I've tested the Docker Container solo using sudo docker run -dt kubelab and it does stay up. Why doesn't it stay up within Kubernetes? Is there a parameter (in the YAML file) or a flag I should be using for this special case?
1. Original Answer (but edited...)
If you are familiar with Docker, the equivalent command is a good starting point. If you are looking for an equivalent of docker run -dt kubelab, try kubectl run -it kubelab --restart=Never --image=ubuntu /bin/bash. The Docker -t flag allocates a pseudo-TTY; that's why your Docker container stays up.
Try:
kubectl run kubelab \
--image=ubuntu \
--expose \
--port 8080 \
-- /bin/bash -c 'while true;do sleep 3600;done'
Or, to print the generated YAML without creating anything:
kubectl run kubelab \
--image=ubuntu \
--dry-run -oyaml \
--expose \
--port 8080 \
-- /bin/bash -c 'while true;do sleep 3600;done'
2. Explaining what's going on (Added by Philippe Fanaro):
As stated by @David Maze, the bash process is going to exit immediately because the artificial terminal won't have anything going into it, which is slightly different behavior from Docker.
If you change the restartPolicy, it will still terminate; the difference is that the Pod won't be regenerated or restarted.
One way of doing it is (pay attention to the indentation of restartPolicy):
apiVersion: v1
kind: Pod
metadata:
  name: kubelab-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: kubelab-ctr
    image: kubelab
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
  restartPolicy: Never
However, this will not work if it is specified inside a Deployment YAML, because Deployments force regeneration, always trying to reach the desired state. This can be confirmed in the Deployment documentation:
Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
3. If you really wish to force the Docker Container to Keep Running
In this case, you will need something that doesn't exit. A server-like process is one example. But you can also try something mentioned in this StackOverflow answer:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
This will keep your container alive until it is told to stop. Using trap and wait will make your container react immediately to a stop request. Without trap/wait stopping will take a few seconds.