Docker provides a way to run the container using docker run
Or just pull the container image using docker pull
Found a doc showing mapping between docker commands and kubectl.
Can't find docker pull equivalent in this doc.
If there is no such equivalent to docker pull, then is there any way to just pull an image using the kubectl CLI?
In short - no, there is not.
And why would there be? Kubernetes is an orchestration tool for containers: it will automatically pull the images your Pods need, so there's really no need for a command to pull images manually.
I think there isn't a kubectl ... equivalent and some of the reasons might be:
they are not equivalent 🙂. When you docker pull an image, you are planning to use it afterwards on your docker host. When you kubectl ... a deployment, you want the platform to schedule everything. For example if you have many worker nodes and the replicas are going to be scheduled to only two of them, then the other nodes don't have to pull the image.
kubectl is a tool that talks to the API server to control the cluster. It would be wrong to make it also responsible for container images (see, Leaky Abstractions) since you have available a lower level tool that talks to the Container Runtime Interface for that: crictl.
k8s-master:~$ crictl --help
NAME:
crictl - client for CRI
USAGE:
crictl [global options] command [command options] [arguments...]
VERSION:
v1.12.0
COMMANDS:
attach Attach to a running container
create Create a new container
exec Run a command in a running container
version Display runtime version information
images List images
inspect Display the status of one or more containers
inspecti Return the status of one or more images
inspectp Display the status of one or more pods
logs Fetch the logs of a container
port-forward Forward local port to a pod
ps List containers
pull Pull an image from a registry
...
(Kubernetes architecture diagram from: www.aquasec.com/wiki/display/containers/Kubernetes+Architecture+101)
What takes place with container runtimes under the hood is complicated and keeps evolving. Think about this: people started creating Kubernetes clusters and the container engine used was Docker. Then Docker adopted containerd, so we had Kubernetes on top of Docker on top of containerd, which caused problems like this:
Users won't see Kubernetes pulled images with the docker images command... And vice versa, Kubernetes won't see images created by docker pull, docker load or docker build commands...
source / more details: Kubernetes Containerd Integration Goes GA
crictl pull <image name>
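So on a node where crictl can talk to the local container runtime, pulling an image directly looks something like this (the nginx image name is just an example):

k8s-node:~$ crictl pull nginx:1.17
k8s-node:~$ crictl images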
There is no need to pull images manually with the Kubernetes CLI.
Why?
Because when you run kubectl create -f template.yml, the template contains an image reference, and Kubernetes checks whether that image already exists on the node. If it does not exist, the kubelet pulls the image automatically.
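As a minimal sketch (the names and image here are just examples), a template like this is all that is needed; the kubelet on whichever node the Pod is scheduled to pulls the image automatically if it is not already present:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-example            # example name
spec:
  containers:
    - name: nginx
      image: nginx:1.17          # the kubelet pulls this if the node does not have it
      imagePullPolicy: IfNotPresent

Running kubectl create -f template.yml then behaves as described above: no separate pull step is needed.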
You will not find an equivalent of docker pull in Kubernetes because this command is related to image management. Explanation below.
One of Docker's features is the ability to create images. You can create your own image using a Dockerfile (docker build .) or pull one from Docker Hub, which contains many pre-built images.
If you use the pull command, it will just download the image; it will not deploy any container.
$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest fce289e99eb9 5 months ago 1.84kB
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As you can see, $ docker pull only downloads the image. As Docker is also responsible for image management, you can pull or push images to a repository (Docker Hub).
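For example, pushing works the same way in the other direction: after docker login, tag the local image with your repository name (myuser below is just a placeholder account) and push it:

$ docker tag hello-world myuser/hello-world:latest
$ docker push myuser/hello-world:latest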
To create a container in Docker you have to use $ docker run. This command will automatically download the image (if it is not present locally) and run a container.
$ docker run --name mynginx -p 80:80 -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
...
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4abf804611a8 nginx "nginx -g 'daemon of…" 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp mynginx
In short:
Create adds a writeable container on top of your image and sets it up for running whatever command you specified in your CMD. The container ID is reported back but it’s not started.
Start will start any stopped containers. This includes freshly created containers.
Run is a combination of create and start. It creates the container and starts it.
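To illustrate with the same nginx image, the two-step form below is equivalent to the single docker run above (the container name is just an example):

$ docker create --name mynginx2 -p 80:80 nginx   # container is created but not started
$ docker start mynginx2                          # now it is running, as with docker run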
Kubernetes is a container orchestration system, so it is not responsible for creating or editing images. That is why you will not find an equivalent of docker pull (which only downloads an image).
Commands like kubectl apply -f <deployment> (with the image specified inside the YAML file) or kubectl run nginx --image=nginx are based on images from Docker Hub (more like docker create/run than docker pull).
Hope it helped.
It could be a bit tricky, but it is possible to achieve results similar to docker pull using kubectl. You just need to know how to make the containers exit with a zero exit code.
The idea is to pull several images on all nodes in the Kubernetes cluster.
For doing this you could create a DaemonSet, which will try to create Pods on every applicable node in the cluster. To pull several images at once, just add several initContainers to the DaemonSet template, one per image, with imagePullPolicy set to IfNotPresent. Set the command of each initContainer to something that exits successfully: you need something like sh -c "exit 0" (just ensure that the container has an sh binary inside), or use another command that usually gives a zero exit code, such as <appname_binary> version or <appname_binary> --help. Note that a DaemonSet Pod template only allows restartPolicy: Always, so also give the Pod a trivial long-running main container (for example a pause image) so it stays Running once all initContainers have finished; see the sketch after this explanation.
After applying that DaemonSet to the cluster, Kubernetes creates a Pod from the DaemonSet template on each node and runs each initContainer in the Pod in order. Before starting each container, the kubelet pulls the image required to run that container.
When you see that all DaemonSet Pods are Running (meaning all their initContainers have completed successfully), you can be sure that every node has all the images required for running those containers.
You can play with nodeAffinity or taints/tolerations if you want to run the DaemonSet only on specific nodes.
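A minimal sketch of such a DaemonSet; the image names are placeholders for whatever you want pre-pulled, and the pause image just gives the Pod a long-running main container:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller                  # example name
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      initContainers:
        - name: pull-nginx
          image: nginx:1.17              # placeholder: image you want cached on every node
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "exit 0"]
        - name: pull-redis
          image: redis:5                 # another placeholder image
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "exit 0"]
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1    # tiny container that keeps the Pod in Running state

Once all the Pods of this DaemonSet report Running, every node has pulled all of the listed images.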
Related
I used docker ps / docker ps -a / docker ps -n 1, but none of them show my first image.
However, after I run docker pull hello-world it says it was pulled successfully.
docker pull pulls an image (and all the layers that make it up) to your local machine, but doesn't run anything.
docker ps lists containers on your system.
Once you run that container (using docker run hello-world), you'll see it in docker ps.
To view the image you pulled, you could use docker images.
As you found from the previous answer, docker pull will download the image (usually from Docker Hub), and when you try to pull it again, it finds the image already on your local machine. To see all the images you have locally, use docker image ls.
I have followed this guide from Google documentation in order to be able to push a custom Docker image to Google Container Registry and then be able to start a new GCE instance with this image. At first I wanted to try using an anaconda3 public image from docker hub without any modification (in order to test).
So here are the steps I have taken so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and I am also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with SSH to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see a /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check if the image was pushed correctly in GCR, so I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I ran the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter Notebook, etc.).
I tried to find people with the same problem as me, without any success... Maybe I have done something wrong in my process?
Thanks!
TL;DR My problem is that I have pushed an anaconda3 image to Google Container Registry (GCR), but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, which only exist inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
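For completeness, from the SSH session on the Container-Optimized OS host you can locate that container and open a shell inside it (the container name placeholder is whatever docker ps reports for your instance):

$ docker ps --format '{{.Names}}'              # find the container started from your image
$ docker exec -it <container_name> /bin/bash   # /opt/conda is there, inside the container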
I have a docker compose file with a container that uses the latest tag:
code_site:
  image: code_site:latest
  deploy:
    restart_policy:
      condition: any
  volumes:
    - ../../data_to_backup/code_site/drupal_sites:/drupal_www/sites
    - drupal_core:/drupal_www/core
    - php_fpm_socket:/var/run/php-fpm7
  networks:
    - main_net
I have a process which will rebuild the container. I use it when I want to make changes to my site.
I am investigating a problem where the docker stack deploy command:
docker stack deploy --compose-file=docker-compose.yml code_site
(The stack name happens to match the image name but this is a coincidence.)
If I go through the following process:
Delete the code_site:latest image (rmi code_site:latest)
Rebuild a new code_site:latest image
Redeploy the stack
It will bring up the OLD version of the container. This is confusing, especially as I have deleted the old version.
I have gone further and I deleted the code_site image then I ran the stack deploy command.
The stack deploys successfully still running the old version of the container.
I can use the docker images command and verify that there is no image named code_site:latest, so I have no idea how the stack could possibly deploy it.
Can anyone explain how the image is coming back from the dead, and what method I should use to get rid of it permanently and force docker stack to use the real image?
Thanks
Robert
Update 1
code_site is a locally built image
I am running on a swarm but there is only one node in the swarm
docker stack deploy will pull the latest image from your Docker registry, since --resolve-image always is set by default, therefore always resolving to the latest image. If you don't want this, run:
docker stack deploy --resolve-image never [rest of deploy command]
However, to make changes easier to maintain, I would suggest using version tags for your images in your registry, such as code_site:v1; when the code changes, push a new version tagged code_site:v2 and deploy the new image/version using the normal deploy command without --resolve-image never.
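A rough sketch of that workflow (the registry address is a placeholder, and the compose file's image: line is assumed to be updated to the new tag):

$ docker build -t code_site:v2 .
$ docker tag code_site:v2 <your-registry>/code_site:v2
$ docker push <your-registry>/code_site:v2
$ docker stack deploy --compose-file=docker-compose.yml code_site   # compose file now references code_site:v2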
Also if you plan to add nodes to your swarm you will need to change your command to docker stack deploy --with-registry-auth to allow the other nodes to pull the image from your repo.
Update 1
If you are 100% sure you do not have an image in your docker registry with the name code_site:latest, then this should work.
Run:
docker rm $(docker ps -aq)
docker volume rm $(docker volume ls -q)
To check for lingering containers/services:
List Existing Services
docker service ls
List Running Stacks
docker stack ls
List All Containers
docker ps -aq
Then redeploy with the deploy command.
Alternatively, to update your service without removing old containers/volumes/images, you can just rebuild your image and then update the service in place.
This will update your service using the new image... no need to stop, remove, then redeploy. Just update.
Docker Service Update Command
Run after new image is built:
docker service update [SERVICE NAME] --image [IMAGE NAME] --force
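With the stack from the question, the service name is <stack>_<service>, i.e. code_site_code_site, so after rebuilding code_site:latest locally this would be (a sketch, assuming a single-node swarm so no registry is involved):

$ docker service update code_site_code_site --image code_site:latest --force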
I tried to build a Docker image. Then when I run the docker images command, the list displays:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> eaaf8e203bd4 1 min ago 253.2 MB
Is there a way to specify a name for this build in my Dockerfile, or on the docker build . command line?
Another question: I want to upload the Docker container to my production server via SFTP and run it. Where are the containers stored?
You should use the -t option of docker build:
docker build -t name:tag .
The tag is optional.
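For example (myapp is just a placeholder name):

$ docker build -t myapp:1.0 .
$ docker images myapp

The image then shows up under the myapp repository instead of <none>.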
I want to upload via SFTP the docker container on my production server and run it.
You should "upload" the image, and by that I mean push it to a docker registry running on your server.
You could also commit a running container into an intermediate image (which would freeze the running state of the container, but would not preserve the volume data, if one was declared in that container)
Then save/export it to a tar archive, copy that archive over, and docker load / docker import it on the server.
See "How to move docker containers between different hosts".
Once imported, see "Where are docker images stored on the host machine?".
(/var/lib/docker/containers for containers)
We are evaluating Docker for use with our application, so we would really like to know the following:
What are the best practices for moving Docker images and containers between different machines?
Also, how do we manage containers and images in a production environment across different regions?
First of all, Docker has a push/pull mechanism built around a registry (which may be private, or public like Docker Hub).
1) Answer to your first question: moving Docker images and containers between machines.
You can create a tar file of an image or a container and then move that tar file between your machines.
Check with docker ps -a, then based on your requirement use one of the following:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
68d9619a7a91 ubuntu:14.04 "/bin/bash" 10 seconds ago Exited
For Container move - Use docker export and import:
$ docker export 68d9619a7a91 > ubuntu-container.tar
$ docker import - update < ubuntu-container.tar
For Image move -- Use docker save and load:
$ docker images
$ docker save -o image.tar ubuntu:14.04
$ docker load < image.tar
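To actually move either tar file between machines you can use any file-transfer tool, e.g. scp (hostname and paths below are placeholders):

$ scp image.tar user@target-host:/tmp/
$ ssh user@target-host "docker load -i /tmp/image.tar"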
2) Answer to your second question: managing containers in a production environment.
a) It is better to have your own private registry managing all the images that you need for your containers. Suppose you have a dedicated node acting as a Docker registry where all your Docker images will stay. You can then push your changes or updates of the images to the registry, and accordingly pull those images from the registry onto the machines that will run containers from them.
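A minimal sketch of that flow, assuming a private registry reachable at registry.example.com:5000 (placeholder address, started from the official registry image; a plain-HTTP registry also needs to be listed under insecure-registries in the Docker daemon config):

# one-time, on the dedicated registry node
$ docker run -d -p 5000:5000 --name registry registry:2

# on the build machine
$ docker tag myapp:1.0 registry.example.com:5000/myapp:1.0
$ docker push registry.example.com:5000/myapp:1.0

# on the machine that will run the container
$ docker pull registry.example.com:5000/myapp:1.0
$ docker run -d registry.example.com:5000/myapp:1.0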
b) Another great way of managing images/containers across a cluster and different cloud providers is to use Kubernetes (open-sourced by Google). We have not implemented Kubernetes ourselves, but have just started looking into its documentation, and it looks very promising if you are using Docker containers and the cloud.