I need to install minikube in an air-gapped environment.
Following the documentation, I installed the required specific Kubernetes version on a computer with network access and then copied the files from .minikube/cache to the air-gapped environment.
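For reference, the caching step looked roughly like this (the Kubernetes version and host name are placeholders, not necessarily what I used):
# On the machine with network access: download everything into ~/.minikube/cache
minikube start --download-only --kubernetes-version=v1.25.3
# Copy the cache to the air-gapped machine (e.g. via scp or removable media)
scp -r ~/.minikube/cache user@airgap-host:~/.minikube/cache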
When creating the minikube cluster in the air-gapped environment, minikube should detect the cached files and use those local images.
It doesn't...
It always tries to download a specific image from the internet:
unable to find image kicbase:0.0.36
It won't be able to download the image, but it still carries on and tries to find the other images on the internet, even though they are PRESENT in the cache folder.
I tried multiple minikube flags (image, cache, etc.), but none of them helped minikube find its local images.
Using minikube 1.28
Driver: docker
Found the solution.
minikube's offline install documentation fails to mention the kicbase image: it is not added to the cache folder on the internet-connected PC.
I had to download it manually on the internet-connected PC:
docker pull gcr.io/k8s-minikube/kicbase:v0.0.36
Then I transferred it to the offline environment.
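A minimal sketch of that transfer (the archive name is my choice; any name works as long as you load the same file later):
# On the internet-connected PC: archive the pulled image
docker save gcr.io/k8s-minikube/kicbase:v0.0.36 | gzip > kicbase_v0.0.36.tgz
# Move kicbase_v0.0.36.tgz to the offline machine (USB drive, scp, ...)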
In the offline environment, I loaded the images (kicbase plus the ones in the cache folder) into the local Docker images.
I don't know if this was really needed for the ones present in the cache folder, but I loaded everything anyway, and this is how I did it.
For the ones present in the cache folder (not .tgz, but raw Docker images):
cat <my_image_from_cache_folder> | docker load
Yes... You can pipe images to docker...
For the kicbase image, which was a .tgz, I did:
docker load -i kicbase_v0.0.36.tgz
Note that a command exists to point the shell's Docker environment at minikube's. I don't think I actually needed it, since it seems I only ran it after loading all the images, but I'll still provide it in case someone needs it.
# Point the current shell's docker CLI at minikube's Docker daemon (current shell session only)
eval $(minikube -p minikube docker-env)
# List images that minikube sees
minikube image ls --format table
Finally, I started the minikube cluster once again. It found the kicbase image locally and was a happy minikube.
All the following images were also taken from the cache (or from the ones loaded into Docker?).
The cluster is now working in the air-gapped environment!
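For completeness, the final start was just the normal command again; no extra flags were needed once the images were in place (driver as stated above):
minikube start --driver=docker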
In the end, the real issue was that the documentation does not mention this kicbase image, and when minikube doesn't find it, it then tries to download even the cached images from the internet. It looks as if minikube isn't detecting the images locally, which sends people looking in the wrong direction.
I had a corrupted Ubuntu 16 OS and wanted to back up everything Docker-related. Starting the Docker daemon outside fakeroot with --data-dir= didn't help, so I made a full backup of /var/lib/docker (with tar --xattrs --xattrs-include='*' --acls).
On the fresh system (upgraded to Ubuntu 22.04), I extracted the tar, but found that docker ps gave empty output. I have the whole overlay2 filesystem and /var/lib/docker/image/overlay2/repositories.json, so there may be a way to extract the images and containers, but I couldn't find one.
Is there any way to restore them?
The backup actually worked; the issue was that the Docker installed during the Ubuntu Server 22.04 installation process was packaged as a snap. After removing the snap and installing a systemd-managed version, Docker recognized all the images and containers in the overlayfs. Thanks, everyone!
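For anyone hitting the same thing, the switch looked roughly like this (package names are the Ubuntu 22.04 defaults; adjust if you use Docker's own apt repository):
# Remove the snap-packaged Docker
sudo snap remove docker
# Install the systemd-managed Docker from the Ubuntu archive
sudo apt-get update && sudo apt-get install -y docker.io
sudo systemctl enable --now docker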
For those who cannot start the Docker daemon before backing up, you can try cp -a or tar --xattrs-include='*' --acls --selinux to copy the whole /var/lib/docker directory.
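A fuller sketch of that backup/restore, assuming a systemd-managed daemon on both machines:
# Old machine: stop the daemon so the copy is consistent, then archive
sudo systemctl stop docker
sudo tar --xattrs --xattrs-include='*' --acls --selinux -cpf docker-backup.tar -C /var/lib docker
# New machine: stop Docker, restore, restart
sudo systemctl stop docker
sudo tar --xattrs --xattrs-include='*' --acls --selinux -xpf docker-backup.tar -C /var/lib
sudo systemctl start docker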
Probably not. As far as I have learned about Docker, it stores your image as different layers with different sha256 chunks.
Even when you transfer images from one machine to another, you either need an online public/private repository to store and retrieve them, or you have to archive the image from the command line so you can copy it to the other location as a single file (see the sketch below).
Maybe next time make sure you store all your important images in an online repository.
You can also refer to the different answers in this thread: How to copy Docker images from one host to another without using a repository
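For example, the archive route mentioned above is just (image name is a placeholder):
docker save -o myimage.tar myimage:tag
# copy myimage.tar to the other machine, then:
docker load -i myimage.tar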
I use minikube with the Docker driver on Linux. For a manual workflow I can enable the registry addon in minikube, push my images there, and refer to them in the deployment config file simply as localhost:5000/anything. They are then pulled into minikube's environment by its Docker daemon, and the deployments start successfully there. As a result, all the base images are kept only on my local device (since I build my images using my local Docker daemon), and minikube's environment gets cluttered only with the images that its Docker daemon pulls. The manual flow is sketched below.
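A rough sketch of that manual flow (the image name is a placeholder; depending on your setup, reaching the registry at localhost:5000 may additionally require forwarding the registry port from the minikube node):
minikube addons enable registry
docker build -t localhost:5000/anything .
docker push localhost:5000/anything
# deployment manifests then reference image: localhost:5000/anything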
Can I implement the same workflow when using Skaffold? By default, Skaffold uses minikube's environment both for building images and for running containers from them, and it also duplicates (sometimes even triplicates) my images inside minikube (I don't know why).
Skaffold builds directly to Minikube's Docker daemon as an optimization so as to avoid the additional retrieve-and-unpack required when pushing to a registry.
I believe your duplicates are like the following:
$ (eval $(minikube docker-env); docker images node-example)
REPOSITORY TAG IMAGE ID CREATED SIZE
node-example bb9830940d8803b9ad60dfe92d4abcbaf3eb8701c5672c785ee0189178d815bf bb9830940d88 3 days ago 92.9MB
node-example v1.17.1-38-g1c6517887 bb9830940d88 3 days ago 92.9MB
Although these images have different tags, those tags are just pointers to the same image ID, so there is a single image being retained.
Skaffold normally cleans up left-over images from previous runs, so you shouldn't see the minikube daemon's space continuously growing.
An aside: even if those image IDs were different, an image is made up of multiple layers, and those layers are shared across images. So Docker's reported image sizes may not match the disk space actually consumed.
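If you want to check the actual disk usage rather than summing per-image sizes, docker system df reports space with shared layers accounted for:
# Reports real disk usage for images, containers, and volumes
docker system df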
I'm new to Docker. Most of the tutorials on Docker cover the same things. I'm afraid I'm just ending up with piles of questions and no real answers. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer, or does it happen in the cloud?
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
When we pull a Docker image or clone a repository from Git, where does this data get stored?
It looks like you are confused after reading too many documents. Let me try to put this in simple words; hope this helps.
When we install Docker, where does it get installed? Is it on our local computer, or does it happen in the cloud?
We install Docker on a VM, be it an on-prem VM or one in the cloud. You can install Docker on your laptop as well.
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
This question comes down to terminology: we don't pull a container. We pull an image and run a container using that image.
Quick terminology summary
Container -> Containers allow you to easily package an application's code, configurations, and dependencies into a template called an image.
Dockerfile -> This is where you write your commands; it is the blueprint for your image.
Image -> An image is built from a Dockerfile. You use an image to create and run containers (see the minimal example below).
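To make the Dockerfile/image/container relationship concrete, here is a minimal, purely illustrative example: write a two-line Dockerfile, build an image from it, and run a container from that image:
# Create a minimal Dockerfile
printf 'FROM ubuntu:18.04\nCMD ["echo", "hello from a container"]\n' > Dockerfile
# Build an image from the Dockerfile
docker build -t hello-image .
# Run a container from the image
docker run --rm hello-image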
Yes, you can get a shell inside the container. Use the command below:
docker exec -it <container-id> /bin/bash
When we pull a Docker image or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, you can look for the Dockerfile in that project and build your own image from it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored on your local machine.
Use the docker images command to list them.
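For instance (the output below is illustrative, not real data):
$ docker images
REPOSITORY   TAG      IMAGE ID      CREATED       SIZE
ubuntu       18.04    <image-id>    2 weeks ago   63MB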
Refer to the cheat sheet below for more commands to play with Docker.
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI is executed on your local machine and its containers.
(I'm not sure about the first part of your question.) You can easily get into your Docker containers with docker exec -it <container name> /bin/bash; for that, the container needs to be running. Check running containers with docker ps.
(Again, I don't entirely understand your question.) The images that you pull are stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if this was helpful and if you need any further information.
I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And there really isn't any .yml file. How do I deal with this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save-load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is putting a Docker image into an archive and extracting that image on another machine. You can check whether this worked correctly with docker run --rm image-name.
An image is just a blueprint you can use for running containers. It has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You can then run docker-compose up from /home/your_user.
EDIT: Additional info concerning the updated question:
UPDATE: When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save-load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub, or a self-hosted registry) and then pull it from there.
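The registry-based flow is roughly the following (registry host and tag are placeholders):
docker tag image-name registry.example.com/image-name:1.0
docker push registry.example.com/image-name:1.0
# on the server:
docker pull registry.example.com/image-name:1.0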
I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then be able to start a new GCE instance with this image. At first I wanted to try an unmodified anaconda3 public image from Docker Hub (in order to test).
So here are the steps I have taken so far, after installing gcloud and docker:
gcloud auth configure-docker -> configured Docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name, as specified in the docs
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and I am also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect via SSH to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check whether the image was pushed correctly to GCR. I deleted my local image and pulled it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I ran the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: I have anaconda3 installed correctly inside /opt/conda, with all the tools needed (Pandas, NumPy, Jupyter Notebook, etc.).
I tried to find people with the same problem as me, without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR My problem is that I have pushed an anaconda3 image to Google GCR, but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find the anaconda libraries installed directly on the GCE instance.
When you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; it normally has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
then you should see the anaconda libraries, which exist only inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive session inside it (see the CMD of the image). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker, here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.