I'm using TensorFlow on Windows 10 with Docker (yes, I know Windows 10 isn't supported yet). It performs OK, but it looks like I am only using one of my 8 CPU cores. TensorFlow can assign ops to different devices, so I'd like access to all 8. When I view the machine's settings in VirtualBox, it says only 1 of the 8 CPUs is configured for the machine. I tried editing the machine to set it to more, but that led to all sorts of weirdness.
Does anyone know the right way to create or restart a docker machine with 8 CPUs? I'm using the Docker Quickstart Terminal app.
Cheers!!
First you need to ensure virtualization is enabled for your machine. You have to do that in your computer's BIOS.
The video below shows how to do that, but there are others as well if you search for it:
https://www.youtube.com/watch?v=mFJYpT7L5ag
Then you have to stop the docker machine (i.e. the VirtualBox vm) and change the CPU configuration in VirtualBox.
To list the name of your docker machine (it is usually default) run:
docker-machine ls
Then stop the docker machine:
docker-machine stop <machine name>
Next open VirtualBox UI and change the number of CPUs:
Select the docker virtual machine (should be marked as Powered off)
Click Settings -> System -> Processor
Change the number of CPUs
Click OK to save your changes
Restart the docker machine:
docker-machine start <machine name>
Finally, if desired, you can use the CPU constraint options of the docker run command to restrict CPU usage for your containers.
For example, the following command restricts the container to 3 CPUs (cores 0, 1 and 2):
docker run -ti --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash
More details are available in the docker run reference documentation.
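The --cpuset-cpus value is a comma-separated list of core indices and inclusive ranges, e.g. "0-2,4". A small Python sketch of just that syntax (the expand_cpuset helper below is hypothetical, for illustration only, not Docker's own parser):

```python
# Hypothetical helper, for illustration only: expand a --cpuset-cpus spec
# such as "0-2,4" into the list of core indices it selects.
def expand_cpuset(spec):
    cores = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cores.update(range(int(lo), int(hi) + 1))  # ranges are inclusive
        else:
            cores.add(int(part))
    return sorted(cores)

print(expand_cpuset("0-2"))    # -> [0, 1, 2]
print(expand_cpuset("0-2,4"))  # -> [0, 1, 2, 4]
```

So "0-2" pins the container to three cores, with the same effect as writing "0,1,2".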
I just create the machine with all CPUs:
docker-machine create -d virtualbox --virtualbox-cpu-count=-1 dev
-1 means use all available CPUs.
Related
I am trying to understand how Minikube runs on Windows, for the following setup. There are several related questions below, which I hope will help me understand holistically how this works.
Using minikube profile list, I get the following output.
C:\>minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes |
|----------|-----------|---------|--------------|------|---------|---------|-------|
| minikube | docker | docker | 192.168.49.2 | 8443 | v1.20.7 | Running | 1 |
|----------|-----------|---------|--------------|------|---------|---------|-------|
Is this minikube a container running on my local installation of Docker Desktop? And thus, whether it runs on WSL2 or VirtualBox depends on how my Docker Desktop itself is set up to run?
If I minikube ssh, I get to interact with docker inside it. From the output below, does it mean that each of the minikube Kubernetes components runs as an individual container? Is this an example of docker-in-docker?
C:\>minikube ssh
Last login: Wed Nov 10 14:07:23 2021 from 192.168.49.1
docker#minikube:~$ docker ps --format '{{.Names}}'
k8s_storage-provisioner_storage-provisioner_kube-system_b7c766e9-48fe-45dd-a929-d6fd4b6fcf8b_0
k8s_POD_storage-provisioner_kube-system_b7c766e9-48fe-45dd-a929-d6fd4b6fcf8b_0
k8s_kube-proxy_kube-proxy-4r5hz_kube-system_71dc0877-5a47-4b2c-a106-ee41e5f6a142_0
k8s_coredns_coredns-74ff55c5b-pl7tb_kube-system_6cf31402-c3b4-4d86-8963-8a53e36b7878_0
k8s_POD_kube-proxy-4r5hz_kube-system_71dc0877-5a47-4b2c-a106-ee41e5f6a142_0
k8s_POD_coredns-74ff55c5b-pl7tb_kube-system_6cf31402-c3b4-4d86-8963-8a53e36b7878_0
k8s_kube-scheduler_kube-scheduler-minikube_kube-system_82ed17c7f4a56a29330619386941d47e_0
k8s_kube-apiserver_kube-apiserver-minikube_kube-system_01d7e312da0f9c4176daa8464d4d1a50_0
k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_c7b8fa13668654de8887eea36ddd7b5b_0
k8s_etcd_etcd-minikube_kube-system_c31fe6a5afdd142cf3450ac972274b36_0
k8s_POD_kube-scheduler-minikube_kube-system_82ed17c7f4a56a29330619386941d47e_0
k8s_POD_kube-controller-manager-minikube_kube-system_c7b8fa13668654de8887eea36ddd7b5b_0
k8s_POD_kube-apiserver-minikube_kube-system_01d7e312da0f9c4176daa8464d4d1a50_0
k8s_POD_etcd-minikube_kube-system_c31fe6a5afdd142cf3450ac972274b36_0
docker#minikube:~$
Is a minikube container using the local installation of Docker Desktop?
Minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.
All you need is Docker (or similarly compatible) container or a Virtual Machine environment, and Kubernetes is a single command away: minikube start.
What you’ll need:
2 CPUs or more
2GB of free memory
20GB of free disk space
Internet connection
Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware
I want to point out the last item: you can choose from multiple container or virtual machine managers. Docker is one of the options you could have chosen, and based on your post, your current deployment is using the Docker driver.
Is it running on WSL2 or Virtualbox?
Here is some information about WSL2 and VirtualBox, but the information provided about your environment is not enough to determine whether your deployment is in VirtualBox or WSL2.
The VirtualBox hardware virtualization option enables the virtualization capabilities provided by the processor; it does not help with nested virtualization. You can run Docker in VirtualBox as long as no hypervisor runs inside the VM. That is the case when we run Docker on Linux guests in VirtualBox. With Windows Server, Hyper-V runs as well, on top of which runs the Windows Server instance where Docker runs; that's why nested virtualization is needed there.
With Docker Desktop running on WSL 2, users can leverage Linux workspaces and avoid having to maintain both Linux and Windows build scripts. In addition, WSL 2 provides improvements to file system sharing, boot time, and allows access to some cool new features for Docker Desktop users.
Before you install the Docker Desktop WSL 2 backend, you must complete the following steps:
Install Windows 10, version 1903 or higher or Windows 11.
Enable WSL 2 feature on Windows.
Download and install the Linux kernel update package.
Does each of the minikube kubernetes components run as an individual container?
Minikube is a utility you can use to run Kubernetes on your local machine. It creates a single node cluster contained in a virtual machine (VM). This cluster lets you demo Kubernetes operations without requiring the time and resource-consuming installation of full-blown K8s.
Here are the basic concepts of kubernetes.
Deployment—configured and operational resources. Deployments are the overall processes that enable you to orchestrate your resources.
ReplicaSet—sets of pods that provide the resources for your services.
Pod—a unit that contains one or more containers along with attached storage resources, and configuration definitions. Pods are grouped together in ReplicaSets and all pods in a set run the same container images.
Node cluster—control plane and worker nodes that each contain one or more pods. The workers run your workloads and the control plane orchestrates the workers together. This is what Minikube creates.
Node processes—the various components that you use to connect and manage Kubernetes. Control plane processes include API servers, etcd, Scheduler, kube-controller-manager, and cloud-controller-manager. Worker processes include kubelet, kube-proxy, and your container runtime.
Container—the image you create to hold your applications.
When I run my docker container using Docker Desktop for Windows I am able to connect to it using
docker run -p 5051:5000 my_app
http://0.0.0.0:5051
However when I open another terminal and do this
minikube docker-env | Invoke-Expression
and build and run the same container using the same run command as above
I cannot connect to the running instance.
Should I be running and testing the containers using Docker Desktop, then using minikube to store the images only (for Kubernetes)? Or can you run them and test them as well through minikube?
That's because on your second attempt, the container is not running on the host but in the minikube VM. You'll be able to access it using the minikube VM's IP.
To get the minikube IP you can run minikube ip
Why?
Invoking minikube docker-env sets all the docker environment variables on your host to match the minikube environment. This means that when you run a container after that, it runs against the docker daemon in the minikube VM.
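To make the mechanism concrete: minikube docker-env prints environment variable assignments (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) which Invoke-Expression then applies to your shell. A Python sketch of that effect, using made-up sample values (the IP and cert path below are assumptions, not real output):

```python
# Sample of the PowerShell-style lines `minikube docker-env` prints;
# the IP and cert path values here are illustrative assumptions.
sample = '''
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://192.168.49.2:2376"
$Env:DOCKER_CERT_PATH = "C:\\Users\\me\\.minikube\\certs"
'''

def parse_docker_env(text):
    """Collect the variable assignments, as Invoke-Expression would apply them."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("$Env:"):
            name, _, value = line[len("$Env:"):].partition("=")
            env[name.strip()] = value.strip().strip('"')
    return env

env = parse_docker_env(sample)
print(env["DOCKER_HOST"])  # the docker CLI now talks to this daemon
```

Once DOCKER_HOST points at the minikube VM, every docker build/run in that shell lands inside the VM rather than on the host, which is why the published port is only reachable via the minikube IP.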
I asked you if there are any specific reasons to use Docker Desktop and Minikube together on a single machine, as these are two competing solutions which basically enable you to perform similar tasks and achieve the same goals.
This article nicely explains differences between these two tools.
Docker-for-windows uses a Type-1 hypervisor, such as Hyper-V, which is better compared to Type-2 hypervisors, such as VirtualBox, while Minikube supports both hypervisors. Unfortunately, there are a couple of limitations depending on which technology you are using, since you cannot have Type-1 and Type-2 hypervisors running at the same time on your machine.
If you use Docker Desktop and Minikube at the same time, I assume you're using a Type-1 hypervisor such as the mentioned Hyper-V, but keep in mind that even if they use the same hypervisor, both tools create their own virtual machine instances. Basically you are not supposed to use those two tools together expecting them to work as a kind of hybrid that lets you manage a single container environment.
First check which hypervisor you are using exactly. If you're using Hyper-V, a simple Get-VM command in PowerShell (more details in this article) should tell you what you currently have.
@mario no, I didn't know minikube had a docker daemon until recently, which is why I have both
Yes, Minikube has a built-in docker environment (in fact it sets everything up, including the container runtime), so basically you don't need to install docker additionally. As @Marc ABOUCHACRA already suggested in his answer, Minikube runs the whole environment (a single-node k8s cluster with a docker runtime) in a separate VM. The Linux version has an option --vm-driver=none which allows you to use your host's container runtime and set up the k8s components on it, but this is not the case with the Windows version - here you can only use one of the two currently supported hypervisors: Hyper-V or VirtualBox (ref).
I wouldn't say that Docker Desktop runs everything on your host. It also uses a Type-1 hypervisor to run the container runtime environment. Please check the Get-VM command on your computer and it should be clear which VMs you have and which tool created them.
I want to use minikube on Windows 10. I have installed VirtualBox and want to use it as the virtual machine for minikube. I also installed Docker for Windows, but during installation Docker forced Hyper-V on as the default. That means I can no longer use VirtualBox to run minikube! Not sure what I am missing here.
I have used minikube on Mac, and there it was much simpler: simply open VirtualBox and then run minikube start on the command line. On Windows 10 it seems much more complicated.
Just to make things clear: Docker requires Hyper-V to be turned on, and VirtualBox requires Hyper-V to be turned off. The reason is that they use different virtualization technologies, to be exact, Type-1 and Type-2 hypervisors:
Type 1 hypervisor: hypervisors run directly on the system hardware – a “bare metal” embedded hypervisor. Type 2 hypervisor: hypervisors run on a host operating system that provides virtualization services, such as I/O device support and memory management.
I've found that there are a few approaches to this issue. One of them is adding another boot option and rebooting every time you need to switch between hypervisors, but that method is no better than manually turning off Hyper-V, restarting, and then using minikube in VirtualBox. This is probably not the desired state.
Since you can't use both at once, you will have to use Docker Toolbox, a tool introduced by Docker for older Windows systems, which does not use Hyper-V.
Please treat this solution as a workaround: even Docker does not recommend Docker Toolbox if you can use Docker Desktop. Alternatively, you could achieve the same results with minikube running on Hyper-V.
0) Uninstall Docker, turn off Hyper-V, delete all traces of minikube, and uninstall VirtualBox (if you tried to run it previously).
1) Install Docker Toolbox - choose the full installation.
2) Install VirtualBox, run docker run hello-world inside the Docker Quickstart Terminal and verify that everything works correctly.
3) Install minikube for Windows (I used chocolatey).
4) Run minikube start.
I've tested these steps, and I was able to run Docker containers in Docker Toolbox while initializing a Kubernetes cluster in minikube.
Let's say I am running a multiprocessing service inside a docker container spawning multiple processes. Would docker use all/multiple cores/CPUs of the host or just one?
As Charles mentions, by default all can be used, or you can limit it per container using the --cpuset-cpus parameter.
docker run --cpuset-cpus="0-2" myapp:latest
That would restrict the container to 3 CPUs (0, 1, and 2). See the docker run docs for more details.
The preferred way to limit CPU usage of containers is with a fractional limit on CPUs:
docker run --cpus 2.5 myapp:latest
That would limit your container to 2.5 cores on the host.
Lastly, if you run docker inside a VM (including Docker for Mac, Docker for Windows, and docker-machine), that VM will have a CPU limit separate from your laptop itself. Docker runs inside the VM and will use all the resources given to the VM. E.g. Docker for Mac exposes this setting in its preferences menu.
Maybe your host VM has only one core by default. Therefore you should increase your VM's CPU count first and then use the --cpuset-cpus option to control your docker cores. You can remove the default docker VM using the following command, then create another VM with the desired CPU count and memory size:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-cpu-count=8 --virtualbox-memory=4096 --virtualbox-disk-size=50000 default
After this step you can specify the number of cores before running your image. This command will use 4 of the total 8 cores:
docker run -it --cpuset-cpus="0-3" your_image_name
Then you can check the number of available cores inside your container using this command:
nproc
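A minimal Python sketch of what a multiprocessing service sees inside the container, assuming a Linux container (os.sched_getaffinity is Linux-only): os.cpu_count() reports the cores the kernel exposes, while the scheduler affinity reflects any --cpuset-cpus restriction.

```python
import multiprocessing
import os

# CPUs the kernel reports vs. CPUs this process is actually allowed to use.
# Under --cpuset-cpus the affinity set shrinks, while os.cpu_count() may not.
total = os.cpu_count()
usable = len(os.sched_getaffinity(0)) if hasattr(os, "sched_getaffinity") else total

def square(n):
    return n * n

if __name__ == "__main__":
    print(f"kernel reports {total} CPUs, this process may use {usable}")
    # A Pool sized to the usable cores spreads work across all of them.
    with multiprocessing.Pool(processes=usable) as pool:
        print(pool.map(square, range(5)))  # -> [0, 1, 4, 9, 16]
```

So yes: a multiprocessing service in a container can use every core the container is allowed to see, and that allowance is what --cpuset-cpus (or the VM's CPU count) controls.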
So I have read in many places that docker is faster and more efficient because it uses containers instead of VMs, but when I downloaded docker on my Mac I realized that it uses VirtualBox to run the containers. I believe on a Linux machine docker doesn't need VirtualBox and can run directly on the Linux kernel. Is this correct?
Back to the original question: is docker still faster/more efficient because it uses a single VM to run multiple containers, as opposed to Vagrant's new VM for every environment?
I believe on a linux machine docker doesn't need virtual box and can run on Linux Kernel. Is this correct?
Yes: docker needs a Linux kernel, hence the need on a Mac for a VirtualBox Linux VM (using a TinyCore-based distribution).
Is docker still faster/more efficient because it uses a single VM to run multiple containers as opposed to Vagrant's new VM for every environment?
Yes, because there is no hypervisor simulating the hardware and OS: you can launch multiple containers that all use the kernel directly (through regular system calls), without having to simulate an OS.
(Note: May 2018, gVisor is another option: a container, simulating an OS!)
See more at "How is Docker different from a normal virtual machine?".
Of course, remember that Vagrant can use a docker provider.
That means you don't always have to provision a full-fledged VM with Vagrant; you can use images and containers instead.
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "foo/bar"
  end
end
See the Vagrant docker provider documentation.