I have boot2docker running on OS X 10.10.
I used docker to install conceptnet5, a 50 GB database that takes days to download from my location.
Now somebody has asked me for an Ubuntu VM with conceptnet5 running on it in a docker container.
So, to avoid downloading everything again, I wondered if there is a way to transfer the conceptnet5 container from boot2docker to my newly created Ubuntu VM.
Here is the docker container I'm using.
You could also work with the save and load commands.
The save command produces a tarred repository of the image. It contains all parent layers and all tags.
$ docker save -o myimage.tar myimage
# Or even better, gzip it using unix pipes
$ docker save myimage | gzip > myimage.tar.gz
Now you have a tarball with all the layers and metadata that you can pass around offline, on USB keys and the like.
To load it back, use the load command. The load command works with the following compression algorithms: gzip, bzip2 and xz.
$ docker load -i myimage.tar.gz
# or with pipes
$ docker load < myimage.tar.gz
It's a little bit easier than running a private registry, but both work well.
You can set up a private docker registry and then push the image there. Hopefully this private registry is on your local network, so you should get much higher throughput. Then you can pull the pushed image down in your new Ubuntu VM.
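For example, a rough sketch of that workflow (the registry host and image names below are placeholders, adjust them to your setup):
# On a machine in your local network: start a registry using the official registry:2 image
$ docker run -d -p 5000:5000 --name registry registry:2
# On the boot2docker host: tag the conceptnet5 image against that registry and push it
$ docker tag conceptnet5 my-registry-host:5000/conceptnet5
$ docker push my-registry-host:5000/conceptnet5
# On the Ubuntu VM: pull it back down over the local network
$ docker pull my-registry-host:5000/conceptnet5
If the registry is served over plain HTTP, you may also need to add my-registry-host:5000 to the insecure-registries list in the docker daemon configuration on both hosts.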
I must install minikube in an airgap environment.
Following the documentation I have installed the required specific Kubernetes version on a computer with network access and then copied the files from .minikube/cache to the airgap environment.
When creating the minikube cluster in the airgap environment, minikube should detect the cached files and use those local images.
It doesn't...
It always tries to download a specific image from the internet:
unable to find image kicbase:0.0.36
It won't be able to download the image but will still carry on and try to find the other images on the internet, although they are PRESENT in the cache folder.
I tried multiple minikube flags but none of them helped minikube find its local images (image, cache, etc.).
Using minikube 1.28
Driver: docker
Found the solution.
Minikube's offline install documentation fails to mention the kicbase image. It's not added to the cache folder on the internet PC.
I had to download it manually on the internet PC:
docker pull gcr.io/k8s-minikube/kicbase:v0.0.36
Then add it to the offline environment.
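One way to get it across (a sketch; adjust the copy step to whatever your airgap process allows) is to export the image to a tarball on the internet PC and transfer that file:
# On the internet PC: export the pulled kicbase image as a compressed tarball
docker save gcr.io/k8s-minikube/kicbase:v0.0.36 | gzip > kicbase_v0.0.36.tgz
# Copy it to the airgap machine by whatever means is allowed (USB key, scp over an internal network, ...)
scp kicbase_v0.0.36.tgz user@airgap-host:~/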
On the offline environment I loaded the images (kicbase + the ones in the cache folder) into the local docker images.
I don't know if this was really needed for the ones present in the cache folder, but anyway I loaded everything and this is how I did it:
For the ones present in the cache folder (not .tgz, but raw docker image tarballs):
cat <my_image_from_cache_folder> | docker load
Yes... You can pipe images to docker...
For the kicbase image, which was a .tgz, I did:
docker load -i kicbase_v0.0.36.tgz
Note that a command exists to merge the docker env with the minikube one. I don't think I needed it, as it seems I used it after loading all the images. I'll still provide the command in case someone needs it.
# Merge docker's env with minikube's env for the current shell session only
eval $(minikube -p minikube docker-env)
# List images that minikube sees
minikube image ls --format table
Finally I started the minikube cluster once again. It found the kicbase image locally and was a happy minikube.
All subsequent images were also taken from the cache (or from the ones loaded into docker?).
The cluster is now working on the airgap environment!
In the end the real issue was that the documentation does not mention this kicbase image, and that when minikube doesn't find it, it then tries to download the cached images from the internet. It looked like minikube wasn't detecting the images locally, which sends people looking in the wrong direction.
I had a corrupted Ubuntu 16 installation and I wanted to back up all the docker data. Starting the docker daemon outside fakeroot with --data-dir= didn't help, so I made a full backup of /var/lib/docker (with tar --xattrs --xattrs-include='*' --acls).
On the fresh system (upgraded to Ubuntu 22.04) I extracted the tar, but found that docker ps had empty output. I have the whole overlay2 filesystem and /var/lib/docker/image/overlay2/repositories.json, so there may be a way to extract the images and containers, but I couldn't find one.
Is there any way to restore them?
The backup actually worked; the issue was that the docker installed during the Ubuntu Server 22.04 installation process was the snap package. After removing the snap and installing a systemd-managed version, docker recognized all the images and containers in the overlayfs. Thanks everyone!
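If you end up in the same situation, the switch looked roughly like this (a sketch; docker.io is the Ubuntu package, you may prefer the docker-ce packages from Docker's own repository):
# Remove the snap-packaged docker and install the distro package managed by systemd
sudo snap remove docker
sudo apt-get update && sudo apt-get install -y docker.io
sudo systemctl enable --now docker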
For those who cannot start the docker daemon for the backup, you can try cp -a or tar --xattrs --xattrs-include='*' --acls --selinux to copy the whole /var/lib/docker directory.
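For example, a sketch of such a backup and restore (paths are placeholders; stop docker first so the files are consistent):
# On the old system: stop docker and archive /var/lib/docker with its extended attributes
sudo systemctl stop docker
sudo tar --xattrs --xattrs-include='*' --acls --selinux -cpf ~/docker-backup.tar -C /var/lib docker
# On the new system: extract it back into /var/lib before starting docker
sudo tar --xattrs --xattrs-include='*' --acls --selinux -xpf ~/docker-backup.tar -C /var/lib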
Probably not. As far as I have learned about docker, it stores your image as different layers with different sha256 chunks.
Even when you transfer images from one machine to another, you either need an online public/private repository to store and retrieve them, or you have to archive the image from the command line so you can copy it to the other location as a single file.
Maybe next time make sure you store all your important images in an online repository.
You can also refer to the answers in this thread: How to copy Docker images from one host to another without using a repository
I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And there really isn't any .yml file. So what should I do about this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you put a Docker image into an archive and extract the image on another machine after that. You could check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
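For reference, a minimal docker-compose.yml could look something like this (the service name and port mapping are placeholders for whatever your project actually needs):
# docker-compose.yml - minimal sketch referencing the image you loaded with docker load
version: "3"
services:
  app:
    image: image-name
    ports:
      - "8080:8080"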
EDIT: Additional info concerning the updated question:
UPDATE When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you typically would do is push your image to a Docker registry (either the official Docker Hub one, or a self-hosted registry) and then pull it from there.
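As a sketch of that workflow (the account and image names are placeholders):
# On the development machine: log in, tag the image for your registry account and push it
docker login
docker tag image-name your-dockerhub-user/image-name:latest
docker push your-dockerhub-user/image-name:latest
# On the server: pull it, then run docker-compose up next to your docker-compose.yml
docker pull your-dockerhub-user/image-name:latest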
I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then start a new GCE instance with this image. At first I wanted to try an anaconda3 public image from Docker Hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be created in /opt/conda). Instead, I can see a /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So, to check whether the image was pushed correctly to GCR, I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I ran the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter notebook, etc.)
I tried to find people with the same problem as me, without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR: My problem is that I have pushed an anaconda3 image to Google GCR, but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it, but normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find anaconda libraries: you are inside the container!
Containerization software (docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
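For completeness, such a VM is typically created with the container image attached, roughly like this (the instance name, zone and machine type are placeholders):
# Create a Container-Optimized OS VM that runs the pushed image as a container
gcloud compute instances create-with-container anaconda3-vm \
    --zone europe-west1-b \
    --machine-type f1-micro \
    --container-image eu.gcr.io/my-project-id/anaconda3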
I want to download some images for a computer that has no internet access.
My computer that has internet access has NO DOCKER installed (old kernel), so it is not possible to use the docker command to pull an image, save it and export it to the other machine.
I'm looking for a way to download a docker image (like via wget, ...) and use it on my computer without Internet.
Yes, that's possible. Docker has the save and load features.
Run this command on your machine with the image you want to copy to the other computer:
docker save myimage > myimage.tar
To load the image again run:
docker load < myimage.tar
If you don't have access to a machine supporting docker in any way, what you can do is create a repository on quay.io with a Dockerfile like:
FROM myimage
...
quay actually allows you to download images from the web panel, whereas docker hub/store does not, AFAIK.