Can Docker CLI, Podman and other similar tools have shared local storage for images?

I recently started using Podman and realized that images pulled via Docker don't become available to Podman and vice versa. For example:
If I pull the image using the Docker CLI, as shown below
docker pull registry.access.redhat.com/ubi7-minimal
and then want to use the same image with Podman or Buildah, it turns out I cannot:
[riprasad@localhost ~]$ podman inspect registry.access.redhat.com/ubi7-minimal
Error: error getting image "registry.access.redhat.com/ubi7-minimal": unable to find 'registry.access.redhat.com/ubi7-minimal' in local storage: no such image
I understand that this is because Podman and Docker use different storage locations, and hence an image pulled via Docker doesn't become available to Podman and vice versa.
[riprasad@localhost ~]$ docker images
REPOSITORY                                TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi7-minimal   latest   fc8736ea8c5b   5 weeks ago   81.5MB
[riprasad@localhost ~]$ podman images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
Is there a way to mitigate this issue and somehow make Docker and Podman work interchangeably on the very same image, irrespective of whether it has been pulled via Docker or Podman?

Docker and Podman do not share the same storage. They cannot, because Docker controls locking of its storage within the daemon, while Podman, Buildah, CRI-O, and Skopeo can all share content because they use the file system directly.
Podman and the other tools can work with the docker-daemon storage indirectly, via the "docker-daemon" transport.
Something like:
podman run docker-daemon:alpine echo hello
should work.
Note that Podman is pulling the image out of the Docker daemon, storing it in containers/storage, and then running the container; it is not using the Docker storage directly.
You can also do
podman push myimage docker-daemon:myimage
to copy an image from containers/storage into the Docker daemon.
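For the original ubi7-minimal example, a minimal sketch of going the other way (pulling the image out of the Docker daemon into containers/storage), assuming the Docker daemon is running locally and still holds the image; note that the docker-daemon transport expects a name:tag reference:
# pull the image from the local Docker daemon into containers/storage
podman pull docker-daemon:registry.access.redhat.com/ubi7-minimal:latest
# the image should now be visible to Podman (and Buildah)
podman images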

Adding to @rhatdan's post
podman run docker://alpine echo hello
This worked for me.

Related

Docker pull hello-world showing successful create but when using docker ps or docker ps -a not showing images

I used docker ps, docker ps -a, and docker ps -n 1, and none of them show my first image.
But after I ran docker pull hello-world, it said the pull completed successfully.
docker pull pulls an image (and all the layers that make it up) to your local machine, but doesn't run anything.
docker ps lists containers on your system.
Once you run that container (using docker run hello-world), you'll see it in docker ps.
To view the image you pulled, you could use docker images.
As you can see from the previous answer, docker pull downloads the image (usually from Docker Hub), and when you try to pull next time, it finds the image already on your local machine. To see all the images you have locally, use docker image ls.
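Putting it together, a short sketch of the full sequence:
docker pull hello-world   # downloads the image layers, runs nothing
docker images             # the pulled image shows up here
docker run hello-world    # creates and runs a container from the image
docker ps -a              # the exited container shows up here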

Docker Image history without using docker history command

I have a Docker image and I want to analyze its history. For this I can use the docker image history command in an environment where Docker is installed.
But when I am working in an OpenShift cluster, I may not have access to the docker command there. So I want to get the docker history result for a given image.
So basically I have a Docker image and no Docker installed. In this case, how can I get the history of that image?
Can anyone please help me with this?
You can get the registry info either via curl or skopeo inspect. But the rest of the metadata is stored inside the image itself so you do have to download at least the final layer.
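A minimal sketch with skopeo (ubi7-minimal is just an example image here, assumed to be publicly pullable); skopeo inspect --config prints the image configuration, which contains the history entries that docker history is based on:
# registry-level metadata (tags, digest, labels)
skopeo inspect docker://registry.access.redhat.com/ubi7-minimal:latest
# full image config, including the history array
skopeo inspect --config docker://registry.access.redhat.com/ubi7-minimal:latest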

How to access the locally built docker-image on the docker-swarm manager?

While trying to create a service on docker-machine I got an "image doesn't exist" error on the docker-machine manager node. When I checked with the docker images command on the manager node, no image was there, as expected. But on my local Docker host I have those images. I want to access these images on the manager node. I've read a few articles that mentioned I may have to upload the image to Docker Hub and then pull it from there, but I want to access it locally. Is there any way to do this? I'm a newbie to Docker.
This is the command I tried on my manager machine:
docker@manager:~$ docker service create --name "api-client" -p 4200:4200 api_client
This is my docker images output:
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
api_client   latest   097b19c4deb8   27 hours ago   1.15GB
But in my docker@manager terminal, my docker images list is empty.
The problem is that there is no repository holding the image. The image needs to be pulled from a repository to each node in the Swarm before it can run. In general you need to do the following:
Set up a repository. If you want a local repository there is a guide here, but it will be some hassle to get it up and running in an "insecure http" setup. An easier way is to get yourself a free Docker Hub account and put your image there.
Tag your local image with the repository name (how to do this is shown in the guide above):
docker tag <local image> <repository>/<image:tag>
Log in to the repository (if it is in the cloud) and push your image to it:
docker login
docker push <repository>/<image>:<tag>
To run the image (your command)
docker service create --name "api-client" -p 4200:4200 <repository>/<image>:<tag>
You can also try to pull an image into the local cache of a node using:
docker pull <repository>/<image>:<tag>
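Putting the steps together, a rough end-to-end sketch using Docker Hub (myuser is a hypothetical account name):
docker tag api_client myuser/api_client:latest
docker login
docker push myuser/api_client:latest
# every node in the Swarm can now pull the image when the service is created
docker service create --name "api-client" -p 4200:4200 myuser/api_client:latest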

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to be able to push a custom Docker image to Google Container Registry and then start a new GCE instance with this image. At first I wanted to try using the public anaconda3 image from Docker Hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and I am also able to deploy it with GCE. I chose to deploy it with an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, a 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be installed in /opt/conda). Instead, I can see a /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So, to check whether the image was pushed correctly to GCR, I deleted my local image and pulled it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I ran the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine; I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, Numpy, Jupyter notebook, etc.).
I tried to find people with the same problem as me without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR My problem is that I have pushed an anaconda3 image on Google GCR, but when I launch a virtual instance with this image, I do not have anaconda on it
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; it normally has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
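On the VM itself, a quick sketch of how to check this (the container name is a placeholder; on Container-Optimized OS it usually matches the instance name):
# list running containers started from the deployed image
docker ps
# open a shell inside the container
docker exec -it <docker_container_name> /bin/bash
# then, inside the container:
ls /opt/conda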

What are the best practices to manage and move Docker containers?

We are evaluating Docker for use with our application, so we would really like to know the following:
What are the best practices to move Docker images and containers between different machines?
Also, how do we manage containers and images in a production environment across different regions?
First of all, the Docker architecture has a push/pull mechanism using a registry (which may be private, or public like Docker Hub).
1) Answer to your first question - moving Docker images and containers between machines:
You can create a tar file of an image or a container and then move the tar file between your machines.
Check with docker ps -a, then based on your requirement use one of the following:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS
68d9619a7a91   ubuntu:14.04   "/bin/bash"   10 seconds ago   Exited
To move a container, use docker export and import:
$ docker export 68d9619a7a91 > ubuntu-container.tar
$ docker import - update < ubuntu-container.tar
To move an image, use docker save and load:
$ docker images
$ docker save -o image.tar ubuntu:14.04
$ docker load < image.tar
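To actually move the tar file to another machine, any file-transfer tool works; a sketch with scp (the user and host names are placeholders):
scp image.tar user@other-machine:/tmp/image.tar
ssh user@other-machine "docker load < /tmp/image.tar"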
2) Second question - managing containers in a production environment:
a) It is better to have your own private registry managing all the images that you need for your containers. Suppose you have a dedicated node as a Docker registry where all your Docker images will stay. Now you can push your changes or updates of the images to the registry, and then pull these images from the registry to the machines that will run containers from them (a sketch follows below).
b) Another great way of managing images and containers across a cluster and different cloud providers is to use Kubernetes (open sourced by Google). Although we have not implemented Kubernetes and have only just started looking into its documentation, it looks very promising if you are using Docker containers and the cloud.
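A rough sketch of the private-registry approach, using the official registry:2 image (the registry host name is a placeholder; a plain-http registry must also be added to Docker's insecure-registries setting, the "hassle" mentioned above):
# start a local registry on a dedicated node
docker run -d -p 5000:5000 --name registry registry:2
# tag an image with the registry address and push it
docker tag ubuntu:14.04 <registry-host>:5000/ubuntu:14.04
docker push <registry-host>:5000/ubuntu:14.04
# on a machine that will run containers, pull it back
docker pull <registry-host>:5000/ubuntu:14.04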
