I am fairly new to Docker and Kubernetes, and going through the Kubernetes docs I found this:
When using a single VM for Kubernetes, it’s useful to reuse Minikube’s built-in Docker daemon. Reusing the built-in daemon means you don’t have to build a Docker registry on your host machine and push the image into it. Instead, you can build inside the same Docker daemon as Minikube, which speeds up local experiments.
So, my understanding is that there are two Docker instances running on my local machine: one on macOS and the other one inside the VM.
Suppose I created an image using the Docker instance on my macOS, and I then want to use it on Kubernetes.
Question 1: Do I strictly need to create a local registry and then pull the image from within the Kubernetes cluster?
It further says,
To work with the Docker daemon on your Mac/Linux host, use the docker-env command in your shell: eval $(minikube docker-env)
Running this creates a few environment variables in the current shell.
Question 2: Will this allow the cluster to use images that I build with Docker on my macOS, without creating a local registry?
Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop.
So it will create a k8s setup on a VM running on your macOS.
eval $(minikube docker-env)
Running this command on your macOS switches the current shell's Docker context over to Minikube, so that the docker commands you run from your macOS terminal actually execute against the Docker daemon inside the VM.
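For example (output will vary with your setup), after the eval the usual checks look like this:
eval $(minikube docker-env)
# the shell now carries DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, ...
env | grep DOCKER
# docker now talks to the daemon inside the VM, so this lists
# Minikube's system containers rather than anything on macOS
docker ps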
Question 1: Do I strictly need to create a local registry and then pull the image from within the Kubernetes cluster?
No, you don't need to explicitly create a local registry, as everything runs on a single VM in Minikube.
Question 2: Will this allow the cluster to use images that I build with Docker on my macOS, without creating a local registry?
By switching the Docker env context on your host machine, the cluster can use the images you build; you don't need to create a registry for that. Remember that your macOS is not part of your k8s cluster: your k8s cluster is running on the single VM created by Minikube.
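As a minimal sketch (my-app:local is a placeholder tag, not something from the question): build against Minikube's daemon, then run the image with a pull policy that skips the registry:
eval $(minikube docker-env)
docker build -t my-app:local .
# --image-pull-policy=Never makes the kubelet use the locally built image
kubectl run my-app --image=my-app:local --image-pull-policy=Never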
I am working on a web application with all the infrastructure based on Kubernetes. In my local environment, I am using Skaffold.
I have two computers (Desktop and Laptop) with 8 GB of RAM each. Starting minikube (VirtualBox driver) and skaffold dev freezes the Desktop.
So I decided to use the Laptop for coding and the Desktop for running minikube and everything related.
I successfully managed to set up kubeconfig on the laptop to have a context with the minikube server.
Actually, the issue is Skaffold.
When I run skaffold dev, it fails because the Desktop's minikube doesn't see the images built by Skaffold on my laptop: kubectl get po returns ImagePullBackOff.
That is because Skaffold uses the local Docker daemon to build the image.
The question is: how do I make Skaffold use the Docker installed on my Desktop?
I changed the docker context of my laptop so that it points to the Desktop, but it's still not working; Skaffold keeps using the default docker context installed on my laptop.
How can I make the images built by Skaffold available on my Desktop?
Is it possible for Skaffold to use a remote docker context? If yes, how?
Minikube uses its own Docker installation to power its cluster. This daemon runs in Minikube's VM (or container, if you're using the docker driver) and is completely independent from the host's Docker daemon (your Desktop). You can access Minikube's daemon by setting the environment variables returned by minikube docker-env.
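A hedged sketch of one way around it, assuming the Desktop's Minikube daemon is reachable from the laptop over the network: run minikube docker-env on the Desktop, copy the certificates it references over to the laptop, and export the same variables there before starting Skaffold (the IP, port, and path below are placeholders):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"   # the Desktop's minikube VM IP
export DOCKER_CERT_PATH="$HOME/minikube-certs"   # certs copied from the Desktop
skaffold dev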
While playing around with Docker and orchestration (Kubernetes), I had to install and use minikube to create a simple sandbox environment. At the beginning I thought that minikube installs some kind of VM and runs the "minified" Kubernetes environment inside it; however, after the installation, when I listed my locally running Docker containers, I found minikube running as a container!
Why does minikube itself run as a Docker container? And how can it run other containers?
Experimental support for the docker driver looks to have been added in minikube 1.7.0, and it started becoming the default in minikube 1.9.0. As I'm writing this, the current release is 1.15.1.
The minikube documentation on the "docker" driver notes that, particularly on a native-Linux host, there is no intermediate virtual machine: if you can run Kubernetes in a container, it can use the entire host system's resources without special configuration or partitioning. The previous minikube-on-VirtualBox installation required preallocating memory and disk for the VM, and it was easy to get those settings wrong. Even on non-Linux hosts, if you're running Docker Desktop, sharing its hidden Linux VM can improve resource utilization, and you don't need to decide to allocate exactly 2 GB of RAM to Docker Desktop and exactly 4 GB to the minikube VM.
For a long time it's been possible, but discouraged, to run a separate Docker daemon inside a Docker container; similarly, it's possible, but usually discouraged, to run a multi-process init system in a container. If you do both of these things then you can have the core Kubernetes components (etcd, apiserver, kubelet, ...) inside a single container pretending to be a Kubernetes node. It also helps here that Kubernetes already knows how to pull Docker images, which minimizes some of the confusing issues with running Docker in Docker.
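You can observe this directly; a quick sketch with the docker driver:
# start a cluster where the "node" is itself a container on the host
minikube start --driver=docker
# the whole single-node cluster shows up as one container
docker ps --filter "name=minikube"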
Question 1: I am new to Docker Swarm. I created a Docker Swarm cluster on my local machine and SSHed into it. To my surprise, docker-compose was NOT installed on the manager node. Is that normal? Is there any workaround to get docker-compose up and running on the swarm manager node?
Question 2: How do I get all my code onto the manager node? Let's say I have my source code in a directory; if I want to move it into my Docker Swarm manager node, how can I do that?
It is common for docker-compose not to be installed on servers, unlike Docker Desktop clients, which come bundled with docker-compose and other tools.
You have to install it yourself if you want to use it there: https://docs.docker.com/compose/install/
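On a Linux manager node, the install from that page looks roughly like this (the version number is only an example; check the page for the current one):
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version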
Alternatively, you can use the docker-compose installation on your local machine to work against the Docker daemon on the manager node by setting DOCKER_HOST: https://docs.docker.com/engine/reference/commandline/cli/#environment-variables
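A hedged example, assuming Docker 18.09+ and SSH access to the manager node (user@manager-node is a placeholder):
# docker-compose runs locally, but the containers start on the remote daemon
export DOCKER_HOST="ssh://user@manager-node"
docker-compose up -d
# for real swarm services, deploy the same compose file as a stack instead
docker stack deploy -c docker-compose.yml mystack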
You can copy your source code onto the manager node via scp: https://linuxize.com/post/how-to-use-scp-command-to-securely-transfer-files/
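For example (host and paths are placeholders):
scp -r ./my-project user@manager-node:~/my-project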
But usually you would rather build images from your source code and deploy those, instead of copying the code itself onto the node.
When I run my docker container using Docker Desktop for Windows, I am able to connect to it using
docker run -p 5051:5000 my_app
http://0.0.0.0:5051
However, when I open another terminal and do this
minikube docker-env | Invoke-Expression
and then build and run the same container using the same run command as above, I cannot connect to the running instance.
Should I be running and testing the containers using Docker Desktop, then using minikube to store the images only (for Kubernetes)? Or can I run and test them through minikube as well?
That's because on your second attempt, the container is not running on the host but in the minikube VM. You'll be able to access it using the minikube VM's IP.
To get that IP, you can run minikube ip.
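For example:
minikube ip
# prints something like 192.168.99.101 (yours will differ); then browse to
# http://<that-ip>:5051 instead of http://0.0.0.0:5051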
Why? Invoking minikube docker-env sets all the Docker environment variables on your host to match the Minikube environment. This means that any container you run after that is started by the Docker daemon inside the Minikube VM, not on your host.
I asked you whether there are any specific reasons to use Docker Desktop and Minikube together on a single machine, as these are two competing solutions that basically let you perform similar tasks and achieve the same goals.
This article nicely explains differences between these two tools.
Docker-for-windows uses a Type-1 hypervisor, such as Hyper-V, which is better compared to Type-2 hypervisors, such as VirtualBox, while Minikube supports both hypervisors. Unfortunately, there are a couple of limitations in which technology you are using, since you cannot have Type-1 and Type-2 hypervisors running at the same time on your machine.
If you use Docker Desktop and Minikube at the same time, I assume you're using a Type-1 hypervisor, such as the mentioned Hyper-V; but keep in mind that even though they use the same hypervisor, both tools create their own virtual machine instances. Basically, you are not supposed to use these two tools together expecting them to work as a kind of hybrid that lets you manage a single container environment.
First, check which hypervisor you are using exactly. If you're using Hyper-V, a simple Get-VM command in PowerShell (more details in this article) should tell you what you currently have.
@mario no, I didn't know minikube had a docker daemon until recently, which is why I have both
Yes, Minikube has a built-in Docker environment (in fact it sets everything up, including the container runtime), so basically you don't need to install Docker additionally, and as @Marc ABOUCHACRA already suggested in his answer, Minikube runs the whole environment (a single-node k8s cluster with the Docker runtime) on a separate VM. The Linux version has an option, --vm-driver=none, which lets you use your host's container runtime and set up the k8s components on it, but this is not the case with the Windows version: there you can only use one of the two currently supported hypervisors, Hyper-V or VirtualBox (ref).
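On a Linux host that option looks like this (requires root; not available on Windows, as noted above):
sudo minikube start --vm-driver=none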
I wouldn't say that Docker Desktop runs everything on your host. It also uses a Type-1 hypervisor to run the container runtime environment. Please run the Get-VM command on your computer and it should be clear which VMs you have and which tool created them.
I deploy a Docker container on Compute Engine.
I want to re-deploy this container after I build a new Docker image with the same image name and tag, like webapp:latest.
For now, I re-deploy the Docker container by restarting the Compute Engine instance, but I don't think that's correct.
What is the correct way for re-deploying a docker container?
When you deploy Docker images on Google Compute Engine virtual machine instances, there are some limitations: you can only deploy one container per VM instance, and you can only use Container-Optimized OS images with this deployment method.
I believe the best workaround is to uncheck the container option in your instance details, so that you don't deploy a container to the VM instance using a Container-Optimized OS image; that option is only useful if you want to deploy a single container on the VM.
Instead, install Docker on your VM yourself and manage the container manually. Also, consider Kubernetes Engine if you need to deploy multiple containers per VM instance.
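With Docker installed on the VM, a hedged re-deploy sketch (container name and port mapping are placeholders) looks like:
# on the VM: fetch or rebuild the new image, then replace the container
docker pull webapp:latest                # assumes the image is in a reachable registry
docker stop webapp && docker rm webapp
docker run -d --name webapp -p 80:8080 webapp:latest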