Kubernetes: Specify a tarball docker image to run pod - docker

I have saved a docker image as a tar file locally using the command,
docker save -o ./dockerImage:version.tar docker.io/image:latest-1.0
How do I specify this file in my pod.yaml so that the pod uses this tarball to launch the container, instead of pulling (or using an already-pulled) image?
Current pod.yaml file:
apiVersion: myApp/v1
kind: myKind
metadata:
  name: myPod2
spec:
  baseImage: docker.io/image
  version: latest-1.0
I want something like this:
apiVersion: myApp/v1
kind: myKind
metadata:
  name: myPod2
spec:
  baseImage: localDockerImage.tar:latest-1.0
  version: latest-1.0

There's no direct way to achieve that in Kubernetes.
See the discussions here: https://github.com/kubernetes/kubernetes/issues/1668
The issue was eventually closed for the following reason:
Given that there are a number of ways to do this (your own cluster startup scripts, run a daemonset to side load your custom images, create VM images with images pre-loaded, run a cluster-local docker registry), and the fact that there have been no substantial updates in over two years, I'm going to close this as obsolete.
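Of the workarounds listed there, side-loading is the most direct fit for a saved tarball. A minimal sketch, assuming you have shell access to every node and the cluster uses the Docker runtime (on containerd you would use ctr -n k8s.io images import instead):
# On each node that may schedule the pod, load the saved tarball
# into the node's local image store:
docker load -i ./dockerImage:version.tar
Then reference the image by its original tag in a standard PodSpec and disable pulling:
containers:
- name: app                        # hypothetical container name
  image: docker.io/image:latest-1.0
  imagePullPolicy: Never           # use the side-loaded image; never contact a registry
Note this applies to a regular PodSpec; whether the custom myKind resource above exposes imagePullPolicy depends on its controller.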

Related

How to attach a volume to a kubernetes pod container like in docker?

I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a named volume and attach it to the container; even when I stop it and start another one from the same image, I can see the data persisting.
So this is what I used to do in Docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
then I:
Create a new html file in /usr/share/nginx/html
Stop container
Run the same docker run command again (will create another container with same volume)
html file exists (which means data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes I specify a PVC (PersistentVolumeClaim) and a PV (PersistentVolume) using hostPath, which will bind-mount a directory or a file from the host machine into the container.
What I want to do is reproduce the behavior described in the previous example (Docker Use Case). How can I do that? Is the process of creating volumes in Kubernetes different from Docker? If possible, a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
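For illustration, a minimal Deployment consuming that image might look like the following; the registry name and tag are the placeholder values from above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend
spec:
  replicas: 3            # multiple replicas are fine: no shared volume to fight over
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
      - name: nginx
        image: registry.example.com/nginx-frontend:20220209
        ports:
        - containerPort: 80
Updating the site is then just building and pushing a new tag and changing the image: line.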
I managed to do that by creating a PVC only. This is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Once I ran kubectl apply on the PVC and then on the deployment, going to localhost:30080 showed a 404 Not Found page, which means all the data in /usr/share/nginx/html was deleted once the container started. That's because Kubernetes bind-mounts a directory from the k8s cluster node into the container as a volume:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I tried adding a new index.html file into that container's html dir, then deleted the container; a new container was created by the pod, and checking localhost:30080 showed the newly created home page.
I tried deleting the deployment and reapplying it (without deleting the PVC); checking localhost:30080, everything still persists.
An alternative solution is specified in the comments by larsks: kubernetes.io/docs/tasks/configure-pod-container/…

What is the proper way to reference a Dockerfile when creating a Deployment in Kubernetes?

Let's say I have a deployment that looks something like this:
apiVersion: v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  template:
    kind: Pod
    metadata: myapp-pod
      labels:
        apptype: front-end
    containers:
    - name: nginx
      containers: <--what is supposed to go here?-->
How do I properly build a container using an existing Dockerfile without having to push a build image up to Docker hub?
Kubernetes can't build images. You are all but required to use an image registry. This isn't necessarily Docker Hub: the various public-cloud providers (AWS, Google, Azure) all have their own registry offerings, there are some third-party ones out there, or you can run your own.
If you're using a cloud-hosted Kubernetes installation (EKS, GKE, ...) the "right" way to do this is to push your built image to the corresponding image registry (ECR, GCR, ...) before you run it.
docker build -t gcr.io/my/image:20201116 .
docker push gcr.io/my/image:20201116
containers:
- name: anything
  image: gcr.io/my/image:20201116
There are some limited exceptions to this in a very local development environment. For example, if you're using Minikube as a local Kubernetes installation, you can point docker commands at it, so that docker build builds an image inside the Kubernetes context.
eval $(minikube docker-env)
docker build -t my-image:20201116 .
containers:
- name: anything
  image: my-image:20201116  # matches `docker build -t` option
  imagePullPolicy: Never    # since you manually built it inside the minikube Docker
Check out https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment.
Make sure you give the documentation a good read :)
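At the time of writing, that page walks through roughly this sequence (the example manifest URL comes from the Kubernetes docs):
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
kubectl get deployments                              # check the Deployment was created
kubectl rollout status deployment/nginx-deployment   # watch the rollout finish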

yaml for creating a deployment fails

After building a docker image named my-http I can create a deployment from it with
kubectl create deploy http-deployment --image=my-http
This fails to pull the image, because imagePullPolicy is Always and the image only exists locally.
So then run
kubectl edit deploy http-deployment
and change the imagePullPolicy to Never, then it runs.
But for automation purposes I've created a yaml to create the deployment and set the imagePullPolicy at the same time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: http
  template:
    metadata:
      labels:
        app: http
    spec:
      containers:
      - name: my-http
        image: my-http
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
Then kubectl apply -f, and the pods start running, but after a while a CrashLoopBackOff starts with the message
container image my-http already present on machine
Apparently it has something to do with the container port, but what should I use for that port to get it running? There is no container running...
Edit: the "image already present" message is just informational. This is the last line in the pod description:
Warning  BackOff  7s (x8 over 91s)  kubelet, minikube  Back-off restarting failed container
If you are using a Kubernetes cluster, your images are only available on the nodes where you built them.
You have to push the images to a container registry; Kubernetes will then pull the image onto the node that will run the container.
If you want to run the container on the nodes where you built the images, you have to use a nodeSelector or pod affinity, as sketched below.
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
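A minimal sketch of the nodeSelector approach, assuming you add a label to the node yourself (the label key/value and node name here are made up):
# Label the node the image was built on:
kubectl label node <build-node-name> build-host=true
Then pin the pod template to that node and keep pulls disabled:
spec:
  nodeSelector:
    build-host: "true"
  containers:
  - name: my-http
    image: my-http
    imagePullPolicy: Never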
Your image is probably a private image, which Kubernetes can't pull if you didn't specify imagePullSecrets.
That shouldn't be the problem here, however, because imagePullPolicy: Never just uses the image already on the nodes. You can diagnose the real problem with kubectl describe pod <pod_name>, or by getting the logs of the previous pod with the --previous flag, because the newer pod may not have encountered the problem.
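For example (the pod name is a placeholder; take the real one from kubectl get pods):
kubectl describe pod http-deployment-<hash>      # the Events section shows the failure reason
kubectl logs http-deployment-<hash> --previous   # logs from the crashed container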

Docker for Windows Kubernetes pod gets ImagePullBackOff after creating a new deployment

I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:
docker build -t myimage -f myDockerFile .
(the above successfully creates an image in the docker local registry)
kubectl run myapp --image=myimage:latest
(as far as I understand, this is the same as using the kubectl create deployment command)
The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:
NAME READY STATUS RESTARTS AGE
myapp-<a random alphanumeric string> 0/1 ImagePullBackoff 0 <age>
I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?
I just had the exact same problem. It boils down to the imagePullPolicy:
PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND: Deployment
VERSION: extensions/v1beta1
FIELD: imagePullPolicy <string>
DESCRIPTION:
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info:
https://kubernetes.io/docs/concepts/containers/images#updating-images
Specifically, the part that says: Defaults to Always if :latest tag is specified.
That means you created a local image, but because you used the :latest tag, Kubernetes will try to find it in whatever remote registry you have configured (Docker Hub by default) rather than using your local one. Simply change your command to:
kubectl run myapp --image=myimage:latest --image-pull-policy Never
or
kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
I had this same ImagePullBackOff error while running a pod deployment with a YAML file, also on Docker Desktop.
For anyone else that finds this via Google (like I did), the imagePullPolicy that Lucas mentions above can also be set in the deployment YAML file. See spec.template.spec.containers.imagePullPolicy in the YAML snippet below (3 lines from the bottom).
I added that and my app deployed successfully into my local kube cluster, using the kubectl apply command: kubectl apply -f .\Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: node-web-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
You didn't specify where myimage:latest is hosted, but essentially ImagePullBackOff means that the kubelet cannot pull the image because either:
You don't have networking setup in your Docker VM that can get to your Docker registry (Docker Hub?)
myimage:latest doesn't exist in your registry or is misspelled.
myimage:latest requires credentials (you are pulling from a private registry). You can configure container credentials in a Pod with imagePullSecrets; see the sketch below.
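For that third case, a sketch of wiring up credentials; the secret name regcred and the registry values are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<user> \
  --docker-password=<password>
Then reference it from the pod spec:
spec:
  imagePullSecrets:
  - name: regcred           # must match the secret created above
  containers:
  - name: myapp
    image: myimage:latest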

Is there any better way to change the source code of a container instead of creating a new image?

What is the best way to change the source code of my application running as a Kubernetes pod, without creating a new version of the image, so I can avoid the time taken to push and pull the image from a repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
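In Kubernetes you would reach the container through kubectl rather than docker; the pod name below is a placeholder. The same caveat applies: anything changed this way disappears when the pod is recreated.
kubectl exec -it <pod-name> -- /bin/bash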
Have the container pull from git on creation?
Set up CI/CD?
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder in the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack, and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
Putting it all together, this is what an nginx deployment would look like on Kubernetes, mounting a local folder containing the web site being served:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: sources
          readOnly: true
      volumes:
      - name: sources
        hostPath:
          path: /Users/<username>/<source_folder>
          type: Directory
Finally, we resolved the issue: we changed our image repository from Docker Hub to AWS ECR in the same region where we run the Kubernetes cluster. Now it takes much less time to push/pull images.
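For reference, the ECR push flow looks roughly like this; the account ID, region, and repository name are placeholders:
# Authenticate Docker against your ECR registry:
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
# Build and push with the full registry-qualified tag:
docker build -t <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:v1 .
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:v1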
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think creating a new image for each deployment should be treated as the standard process.
A few benefits:
Immutable images: no intervention in the running instance, which ensures the image runs the same in any environment
Rollback: if you encounter issues in the new version, roll back to the previous one
Dependencies: new versions may have new dependencies
