Access local docker images with k3s - docker

Is there any way to access local docker images directly (without using 'docker save') with k3s?
Like the way minikube accesses local docker images after running this command:
eval $(minikube docker-env)
A little bit of background.
I have set up a machine running Ubuntu 19.04 as 'master' and a Raspberry Pi as 'worker' using k3s. Now I want to use a local image to create a deployment on the worker node.
Update
Added a screenshot of the image listings, as requested in the comments.

You can start k3s with sudo k3s server --docker, which makes it use the host's Docker daemon rather than containerd. All local Docker images then become visible to k3s, and if your imagePullPolicy is IfNotPresent, k3s will use the local image rather than trying to pull it.
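For illustration, a deployment that relies on such a local image might look something like this (the image tag my/local-image:v1 is just a placeholder):
$ sudo k3s kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: my/local-image:v1       # already present in the host's Docker
        imagePullPolicy: IfNotPresent  # do not try to pull from a registry
EOF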

While this doesn't make all Docker images available, a useful workaround is to export local Docker images and import them into k3s's containerd via ctr:
docker save my/local-image:v1.2.3 | sudo k3s ctr images import -
This will make them available on-demand to your k3s cluster.
This is useful for users who cannot get k3s server to work with the --docker flag.
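As a quick sanity check (assuming the tag used above), you can list containerd's images after the import:
$ sudo k3s ctr images ls | grep local-image
# the imported image should show up, typically as docker.io/my/local-image:v1.2.3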

Related

Can Docker CLI, Podman and other similar tools have shared local storage for images?

I recently started using podman and realized that images pulled via docker don't become available to podman, and vice versa. For example:
If I pull the image using docker CLI, as shown below
docker pull registry.access.redhat.com/ubi7-minimal
and if I then want to use the same image with podman or buildah, it turns out I cannot:
[riprasad@localhost ~]$ podman inspect registry.access.redhat.com/ubi7-minimal
Error: error getting image "registry.access.redhat.com/ubi7-minimal": unable to find 'registry.access.redhat.com/ubi7-minimal' in local storage: no such image
I understand that this is because podman and docker use different storage locations, and hence an image pulled via docker doesn't become available to podman and vice versa.
[riprasad@localhost ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.access.redhat.com/ubi7-minimal latest fc8736ea8c5b 5 weeks ago 81.5MB
[riprasad@localhost ~]$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
Is there a way to mitigate this issue and somehow make docker and podman work interchangeably on the very same image, irrespective of whether it was pulled via docker or podman?
Docker and Podman do not share the same storage. They cannot, because Docker controls locking of its storage within the daemon, while Podman, Buildah, CRI-O, and Skopeo can all share content because they use the file system directly.
Podman and the other tools can work with the docker-daemon storage indirectly, via the "docker-daemon" transport.
Something like:
podman run docker-daemon:alpine echo hello
should work.
Note that podman pulls the image out of the docker daemon, stores it in containers/storage, and then runs the container; it is not using the Docker storage directly.
You can also do
podman push myimage docker-daemon:myimage
to copy an image from containers/storage into the docker daemon.
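Putting that into a concrete sketch (alpine is used purely as an example):
# copy an image out of the Docker daemon into containers/storage
$ podman pull docker-daemon:docker.io/library/alpine:latest
$ podman images   # the alpine image is now visible to podman and buildah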
Adding to @rhatdan's post:
podman run docker://alpine echo hello
This worked for me.

docker pull equivalent in kubectl

Docker provides a way to run a container using docker run,
or to just pull the container image using docker pull.
Found a doc showing mapping between docker commands and kubectl.
Can't find docker pull equivalent in this doc.
If there is no such equivalent to docker pull, is there any way to just pull an image using the kubectl CLI?
In short - no, there is not.
And why would there be? Kubernetes is an orchestration tool that sits on top of a container runtime; it automatically pulls the images it needs when it schedules pods, so there's really no need for a command to pull container images manually.
I think there isn't a kubectl ... equivalent and some of the reasons might be:
they are not equivalent 🙂. When you docker pull an image, you are planning to use it afterwards on your docker host. When you kubectl ... a deployment, you want the platform to schedule everything. For example, if you have many worker nodes and the replicas are going to be scheduled on only two of them, then the other nodes don't have to pull the image.
kubectl is a tool that talks to the API server to control the cluster. It would be wrong to make it also responsible for container images (see Leaky Abstractions), since you have a lower-level tool available that talks to the Container Runtime Interface for that: crictl.
k8s-master:~$ crictl --help
NAME:
crictl - client for CRI
USAGE:
crictl [global options] command [command options] [arguments...]
VERSION:
v1.12.0
COMMANDS:
attach Attach to a running container
create Create a new container
exec Run a command in a running container
version Display runtime version information
images List images
inspect Display the status of one or more containers
inspecti Return the status of one or more images
inspectp Display the status of one or more pods
logs Fetch the logs of a container
port-forward Forward local port to a pod
ps List containers
pull Pull an image from a registry
...
[Kubernetes architecture diagram from www.aquasec.com/wiki/display/containers/Kubernetes+Architecture+101]
what takes place with container runtimes under the hood is complicated and keeps evolving. Think about it: people started creating Kubernetes clusters when the container engine used was Docker. Then Docker adopted containerd, so we had Kubernetes on top of Docker on top of containerd, which caused problems like this:
Users won't see Kubernetes pulled images with the docker images command... And vice versa, Kubernetes won't see images created by docker pull, docker load or docker build commands...
source / more details: Kubernetes Containerd Integration Goes GA
crictl pull <image name>
There is no need to pull images manually with the Kubernetes CLI.
Why?
Because when you run kubectl create -f template.yml, the template contains an image reference, and Kubernetes checks whether that image already exists on the node. If it does not exist, the image is pulled automatically.
You will not find an equivalent of docker pull in Kubernetes because that command is related to image management. Explanation below.
One of Docker's features is the ability to create images. You can build your own image from a Dockerfile (docker build .) or pull one from Docker Hub, which contains many pre-built images.
If you use the pull command it will just download the image; it will not deploy any container.
$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest fce289e99eb9 5 months ago 1.84kB
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As you can see, $ docker pull only downloads the image. Since Docker is also responsible for image management, you can pull images from or push them to a repository (Docker Hub).
To create a container in Docker you have to use $ docker run. This command will automatically download the image (if it is missing) and run the container.
$ docker run --name mynginx -p 80:80 -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
...
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4abf804611a8 nginx "nginx -g 'daemon of…" 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp mynginx
In short:
Create adds a writeable container on top of your image and sets it up for running whatever command you specified in your CMD. The container ID is reported back but it’s not started.
Start will start any stopped containers. This includes freshly created containers.
Run is a combination of create and start. It creates the container and starts it.
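To make the create/start/run distinction concrete, here is a small sketch (the container name mynginx2 is arbitrary):
$ docker create --name mynginx2 -p 8080:80 nginx   # container created, but not started
$ docker start mynginx2                            # now it is running
# the two commands above are roughly equivalent to:
$ docker run -d --name mynginx2 -p 8080:80 nginx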
Kubernetes is a container-orchestration system, so it is not responsible for creating or editing images. That is why you will not find an equivalent of docker pull (which only downloads an image).
Commands like kubectl apply -f <deployment> (with the image specified inside the YAML file) or kubectl run nginx --image=nginx are based on images from Docker Hub and behave more like docker create.
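For example, the pull happens implicitly when the Pod is scheduled (nginx here is just an illustration):
$ kubectl run nginx --image=nginx
$ kubectl get pod nginx -w   # the Pod goes from ContainerCreating (image being pulled) to Running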
Hope it helped.
It could be a bit tricky, but it is possible to achieve results similar to docker pull using kubectl. You just need to know how to make the containers exit with a zero exit code.
The idea is to pull several images on all nodes in the Kubernetes cluster.
To do this you could create a DaemonSet, which will create a Pod on every applicable node in the cluster. To pull several images at once, just add several initContainers to the DaemonSet template, each with imagePullPolicy set to IfNotPresent. Set the command of each initContainer so that it exits successfully: something like sh -c "exit 0" (just ensure the image has an sh binary inside), or another command that usually returns a zero exit code, such as <appname_binary> version or <appname_binary> --help. Note that a DaemonSet Pod template only allows restartPolicy: Always, so the main container should be something that simply keeps running, for example a pause image.
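A minimal sketch of such a DaemonSet (image names are placeholders) could look like this:
$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull
spec:
  selector:
    matchLabels:
      app: prepull
  template:
    metadata:
      labels:
        app: prepull
    spec:
      initContainers:
      - name: pull-busybox
        image: busybox:1.36                # image to pre-pull (placeholder)
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "exit 0"]
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # tiny container that keeps the Pod alive
EOF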
After applying that DaemonSet to the cluster, Kubernetes creates a Pod from the DaemonSet template on each node and runs each initContainer in the Pod in order. Before starting each container, the kubelet pulls the image required to run it.
When all DaemonSet Pods are up and running, you can be sure that every node has all the images required to run those containers.
You can play with nodeAffinity or taints/tolerations if you want to run the DaemonSet only on specific nodes.

GCE doesn't deploy GCR image correctly

I have followed this guide from Google documentation in order to be able to push a custom Docker image to Google Container Registry and then be able to start a new GCE instance with this image. At first I wanted to try using an anaconda3 public image from docker hub without any modification (in order to test).
So here is the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
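To double-check that the push actually reached the registry, something like this should list the image's tags:
$ gcloud container images list-tags eu.gcr.io/my-project-id/anaconda3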
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be installed in /opt/conda). Instead, I can see a /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check if the image was pushed correctly in GCR, so I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
Then I ran the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, Numpy, Jupyter notebook, etc.)
I tried to find people with the same problem as me without any success... maybe I have done something wrong in my process?
Thanks !
TL;DR My problem is that I have pushed an anaconda3 image on Google GCR, but when I launch a virtual instance with this image, I do not have anaconda on it
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.

What are the best practices to manage and move Docker containers?

We are evaluating Docker for use with our application, so we would really like to know the following:
What are the best practices to move docker images and containers between different machines?
Also, how do you manage containers and images in a production environment across different regions?
First of all, the Docker architecture has a push/pull mechanism built around a registry (which may be private or public, like Docker Hub).
1) Answer to your first question: moving Docker images and containers between machines
You can create a tar file of an image or a container and then move the tar file between your machines.
Check with docker ps -a, then based on your requirement use one of the following:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
68d9619a7a91 ubuntu:14.04 "/bin/bash" 10 seconds ago Exited
For moving a container, use docker export and import:
$ docker export 68d9619a7a91 > ubuntu-container.tar
$ docker import - update < ubuntu-container.tar
For moving an image, use docker save and load:
$ docker images
$ docker save -o image.tar ubuntu:14.04
$ docker load < image.tar
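To move the tar file to another machine you can use any file-transfer tool, e.g. (the hostname is a placeholder):
$ scp image.tar user@other-machine:/tmp/image.tar
$ ssh user@other-machine docker load -i /tmp/image.tar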
2) Answer to your second question: managing containers in a production environment
a) It is better to have your own private registry managing all the images that you need for your containers. Suppose you have a dedicated node as your Docker registry where all your Docker images will stay. You can then push your changes or updates of the images to the registry, and pull those images from the registry onto the machines that will run containers from them (see the sketch after this list).
b) Another great way of managing images/containers across a cluster and different cloud providers is to use Kubernetes (open-sourced by Google). We have not implemented Kubernetes yet, but we just started looking into its documentation, and it looks very promising if you are using docker containers and the cloud.
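A sketch of the private-registry workflow from point a) (the registry address and image names are placeholders):
$ docker tag myapp:1.0 registry.example.com:5000/myapp:1.0
$ docker push registry.example.com:5000/myapp:1.0
# on the machine that will run the container:
$ docker pull registry.example.com:5000/myapp:1.0
$ docker run -d registry.example.com:5000/myapp:1.0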

Pull image from another Docker Machine

Is it possible to pull an image from another docker machine without having to set up a docker registry?
I have 2 docker machines for development and I would like to deploy an image, which I have built with the first one, on the second docker machine.
Is this possible?
If you created your docker servers using docker-machine, then you could do an export/import using remote access to the docker daemons on each server.
docker $(docker-machine config server1) export exampleimage:1.0 | docker $(docker-machine config server2) import - exampleimage:1.0
But... it would be a lot simpler to just rebuild the image on the second server, using the same Dockerfile.
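If you want to keep the image name and layer metadata intact, a save/load over the same remote connections should also work (a sketch, with the same placeholder names as above):
$ docker $(docker-machine config server1) save exampleimage:1.0 | docker $(docker-machine config server2) load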
