I installed Docker (version 18.06.1-ce-mac73) on a MacBook Pro with macOS High Sierra (version 10.13.6). When I go to Preferences -> Advanced in the Docker GUI, I see that the memory limit is set to 4GB and CPUs to 2. However, running docker info from the terminal shows a total memory of 995.6MiB and 1 CPU.
The 995.6MiB limit seems to be the one actually enforced: I'm building a project inside a container, and docker stats shows it running out of memory once the 995.6MiB limit is reached.
Shouldn't docker info match the GUI configuration?
Your docker command is pointed at a VirtualBox VM, probably one installed via Docker Machine or Docker Toolbox. The Docker for Mac "whale" app uses a different virtualization system. (Check: if you echo $DOCKER_HOST, does it say something like tcp://192.168.99.100:2376?)
You can "deactivate" the Docker VM in your shell by running
eval $(docker-machine env -u)
docker info
and, having done that, you can docker-machine rm the VM.
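For example, a minimal sequence (assuming the old VM has the usual docker-machine name default; yours may differ):

echo $DOCKER_HOST              # tcp://192.168.99.100:2376 means you are talking to the VirtualBox VM
docker-machine ls              # list the VMs docker-machine knows about
docker-machine rm default      # delete the old VM once you no longer need it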
Related
I run docker builds on my Unix build machine and therefore need more than the 2GB of memory that the Docker engine allocates by default. I got the build working on my Mac by raising the limit in the Docker Desktop UI settings, as you can see in the image.
How can I do this on Unix?
If you want to increase the memory and CPUs allocated to a container when you run it, you can try this:
docker run -it --cpus="2" --memory="4096m" ubuntu /bin/bash
You can also do the same when deploying services with docker-compose, for example:
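A sketch of a compose file (version 3 syntax; the service name app and the image are placeholders, and with plain docker-compose, rather than swarm mode, you may need to run it with the --compatibility flag for these limits to take effect):

version: "3.8"
services:
  app:
    image: ubuntu        # placeholder image, matching the docker run example above
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4096M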
More info:
https://docs.docker.com/config/containers/resource_constraints/
How to specify Memory & CPU limit in docker compose version 3
I want to use minikube on Windows 10. I have installed VirtualBox and want to use it as the virtual machine for minikube. I also installed Docker for Windows, but during installation Docker forced me to enable Hyper-V by default. That means I can no longer use VirtualBox to run minikube! Not sure what I am missing here.
I have used minikube on Mac, and there it was much simpler: just open VirtualBox and then run minikube start on the command line. On Windows 10, however, it seems much more complicated.
Just to make things clear: Docker requires Hyper-V to be turned on, and VirtualBox requires Hyper-V to be turned off. The reason is that they use different virtualization technologies, namely type 1 and type 2 hypervisors:
Type 1 hypervisor: hypervisors run directly on the system hardware – a “bare metal” embedded hypervisor.
Type 2 hypervisor: hypervisors run on a host operating system that provides virtualization services, such as I/O device support and memory management.
I've found that there are a few approaches to this issue. One of them is adding another boot option and rebooting every time you need to switch between hypervisors, but that method is about as good as manually turning off Hyper-V, restarting, and then using minikube in VirtualBox. This is probably not the desired state.
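For reference, that boot-entry approach is usually set up with bcdedit from an elevated command prompt; the first command copies the current boot entry and prints an identifier, and the second disables the Hyper-V hypervisor for that entry ({GUID} below is a placeholder for whatever identifier bcdedit actually prints). You then have to pick the entry at boot and reboot every time you switch, which is why it is hardly more convenient than toggling Hyper-V by hand:

bcdedit /copy {current} /d "No Hyper-V"
bcdedit /set {GUID} hypervisorlaunchtype off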
Since you can't use both at once, you will have to use Docker Toolbox, the tool Docker introduced for older Windows systems, which does not use Hyper-V.
Please treat this solution as a workaround: even Docker does not recommend using Docker Toolbox if you can use Docker Desktop. Alternatively, you could achieve the same result by running minikube on Hyper-V.
0) Uninstall Docker, turn off Hyper-V, delete all traces of minikube, and uninstall VirtualBox (if you tried to run it previously).
1) Install Docker Toolbox - choose the full installation.
2) Install VirtualBox, run docker run hello-world inside the Docker Quickstart Terminal, and verify that everything is working correctly.
3) Install minikube for Windows (I used chocolatey)
4) Run minikube start.
I've tested these steps, and I was able to run Docker containers in Docker Toolbox while initializing a Kubernetes cluster in minikube.
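For reference, steps 3 and 4 boil down to something like this (a sketch; minikube is the chocolatey package name, and the VirtualBox driver flag may be spelled --driver in newer minikube releases):

choco install minikube
minikube start --vm-driver=virtualbox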
I have Windows 10 with Windows Subsystem for Linux (Bash on Ubuntu on Windows) installed, Docker installed on Windows, and a Docker client running on the Linux subsystem per this walkthrough. All works well except when I want to access a volume on my default mount "/mnt/c/../". I am using the mount flags at docker launch and have tried both:
docker run -v $PWD:/mount
docker run --mount type=bind,source="$(PWD)",target=/mount
and most variations shown here.
I have reason to think this is a permissions issue. When launching from the Linux subsystem, there is always just one empty folder from the original source directory. When launching from Windows PowerShell, everything is fine. The only difference between the two is the Docker client being used.
I have shared the C drive in the Docker host settings on Windows; however, do I need to do something similar for the client inside the Windows Subsystem for Linux?
Versions:
Docker client: Docker version 18.03.0-ce, build 0520e24
Docker host: Docker version 18.03.0-ce-win59 (16762)
I had the same issue with the same set-up. After a lot of trial and error and googling, here is what resolved the issue:
Change your Windows password so it does not include special characters.
Reset the credentials for Docker.
Worked! Weird bug.
I'm trying to run this command :
docker daemon --insecure-registry 192.168.99.100:5000
but I'm getting the following error:
exec: "dockerd": executable file not found in %PATH%
I'm using Windows 7 and Docker Toolbox 1.12.2 with VirtualBox.
What is the problem here?
Is there a way to run this command?
That is indeed what issue 27102 reports:
Docker Daemon command dockerd not found on latest stable Docker for Mac and Docker Toolbox
(this is for Mac but also applies to Windows)
Docker for Mac should probably print a different message; also, we may need to check if the CLI is on the same "host" as the daemon, and print a different message based on that (as running dockerd won't work if the daemon is on a remote server).
The daemon runs in a Linux virtual machine, so you do not need to (and cannot) run it manually. It is already running if the whale is in the top bar.
Conclusion (Aug. 2021):
I'm closing this ticket, because the current behaviour is as expected.
I think this was originally opened when the docker cli still had a daemon subcommand (during the transition from a single binary to separate binaries for the cli and daemon), which is no longer the case.
The dockerd binary, which is the docker daemon, is not available for macOS (and unlikely will be), because it's a Linux binary that (on Docker Desktop for Mac) runs inside the Docker Desktop VM.
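In practice that means you only ever talk to that daemon through the docker CLI. A quick way to confirm the connection works without ever running dockerd yourself (standard docker CLI format flags, shown as a sanity check):

docker version --format '{{.Server.Os}}'        # prints "linux": the daemon inside the VM answered
docker info --format '{{.OperatingSystem}}'     # shows the OS of the VM the daemon runs in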
In 2022:
I'm having this exact same issue on the most recent macOS version (Monterey, Version 12.3.1 (21E258)).
I've uninstalled and reinstalled Docker several times; if I run docker ps or docker run hello-world as paulinechi describes, I get the same error:
docker: Cannot connect to the Docker daemon at `tcp://35.215.110.128:2375`.
Is the docker daemon running?...
Answer:
Make sure you don't have a DOCKER_HOST environment variable set; from that error, it looks like either you have a DOCKER_HOST env-var, or possibly a docker context that defines a non-standard location to connect to the daemon.
The default should be to connect with the Engine API using a unix-socket (unix:///var/run/docker.sock)
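For example, you can check and reset this from the shell (standard docker CLI commands; docker context requires a reasonably recent client):

echo $DOCKER_HOST             # should print nothing on a default Docker Desktop install
docker context ls             # the active context (marked with *) should point at the local daemon
unset DOCKER_HOST             # drop a stale remote address for the current shell
docker context use default    # switch back to the default context if another one is active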
Confirmation:
I forgot I was pointing to a DOCKER_HOST on a remote machine that has since shut down.
I'm using TensorFlow on Windows 10 with Docker (yes, I know Windows 10 isn't supported yet). It performs OK, but it looks like I am only accessing one of my CPU cores (I have 8). TensorFlow has the ability to assign ops to different devices, so I'd like to be able to get access to all 8. In VirtualBox, when I view the settings, it says only 1 of the 8 CPUs is configured for the machine. I tried editing the machine to set it to more, but that led to all sorts of weirdness.
Does anyone know the right way to either create or restart a docker machine so it has 8 CPUs? I'm using the Docker Quickstart Terminal app.
Cheers!!
First you need to ensure you have enabled Virtualization for your machine. You have to do that in the BIOS of your computer.
The link below has a nice video on how to do that, but there are others as well if you google it:
https://www.youtube.com/watch?v=mFJYpT7L5ag
Then you have to stop the docker machine (i.e. the VirtualBox VM) and change the CPU configuration in VirtualBox.
To list the name of your docker machine (it is usually default) run:
docker-machine ls
Then stop the docker machine:
docker-machine stop <machine name>
Next open VirtualBox UI and change the number of CPUs:
Select the docker virtual machine (should be marked as Powered off)
Click Settings -> System -> Processor
Change the number of CPUs
Click OK to save your changes
Restart the docker machine:
docker-machine start <machine name>
Finally, you can use the CPU constraint options available for the docker run command to restrict CPU usage for your containers if desired.
For example, the following command restricts the container to use only 3 CPUs (cores 0 through 2):
docker run -ti --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash
More details are available in the docker run reference document.
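As a quick sanity check (a sketch, assuming the VM now has at least 3 CPUs), you can confirm the restriction from inside a container:

docker run --rm --cpuset-cpus="0-2" ubuntu:14.04 nproc    # prints 3, since the container only sees CPUs 0-2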
I just create the machine with all CPUs:
docker-machine create -d virtualbox --virtualbox-cpu-count=-1 dev
-1 means use all available CPUs.
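To confirm the VM really got all of them, you can check from inside the machine (assuming the machine name dev used above):

docker-machine ssh dev "grep -c ^processor /proc/cpuinfo"    # should match the number of CPUs on your host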