So I have noticed that on Mac there is a huge problem with file sync while developing a PHP app. It can take up to 60 seconds before a page loads.
Because Docker on Mac runs inside an additional virtual machine, I have used http://docker-sync.io to work around it. But I wonder, are you having similar issues? Yesterday I noticed that there is something called File Sharing in the Docker settings.
Since I've put my code at /Volumes/Documents/wwwdata, do I have to add that path there as well?
As the author of docker-sync, I might be able to give you a comprehensive answer.
As of yet, under macOS there is no way, using only the native Docker for Mac tooling, to get a reasonably acceptable development environment - that is, sharing source code into the container during its lifetime.
The main reason is that read and write speed on mounted volumes in Docker for Mac is extremely slow; see the performance comparison. That said, you could mount a volume using -v or volumes into a normal container, but it will be extremely slow. VirtualBox or Fusion shares are slow for the same reasons; OSXFS currently performs better than those, but it is still horribly slow.
docker-sync works around the slow OSXFS read/write path by syncing your code with unison into a native volume, rather than bind-mounting it directly:
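In a nutshell, the difference looks roughly like this (the paths, the PHP image and the volume name are just examples, not your exact setup):

# 1) Plain bind mount - every read/write goes through the slow osxfs share:
docker run -v /Volumes/Documents/wwwdata:/var/www/html php:7.4-apache

# 2) docker-sync - the code is synced with unison into a native Docker volume
#    (named after the sync entry you define in docker-sync.yml; "wwwdata-sync"
#    is hypothetical here), and the container mounts that volume instead:
gem install docker-sync
docker-sync start
docker run -v wwwdata-sync:/var/www/html php:7.4-apache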
Long story short:
Docker for Mac is still (very) slow; this holds even for High Sierra with APFS - unusable for development purposes.
The "folders" you are looking at in that File Sharing screen are nothing more than OSXFS-based mounts into the Hyperkit VM - the same mechanism that has always been used; you can now simply configure folders other than the defaults to be shared via OSXFS and made available for mounting. So this will not help you either.
To keep this answer balanced for the general case: you can find alternatives to docker-sync here - the sheer number of alternatives also tells you that there is (still) a huge issue in Docker for Mac; it is not something docker-sync made up.
Related
I am using Docker Desktop for Windows on Windows 10.
I was experiencing issues with the system SSD always being full, so I moved the 'docker-desktop-data' distro (which is used to store Docker images and other data) off the system drive to drive D:, which is an HDD, using this guide.
I was finally happy to have a lot of space on my SSD... but the Docker containers started to run slower. I guess this is because HDD read/write operations are slower than on an SSD.
Is there a better way to solve the problem of the continuously growing size of the Docker distros without impacting how fast containers run and images are built?
Actually, only by design. As you know, a Docker image is layered, so it might be feasible to create something like a "base image" from which your actual image is derived.
It might also be sensible to check whether your base distro is small enough. I have often seen containers built from full-blown Debian or Ubuntu distros. That's not the best idea. Try to derive from an Alpine image, or look for even smaller approaches.
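To get a feel for the size difference, you can pull both bases and compare them; exact sizes vary by tag, this is just an illustration:

docker pull debian:stable
docker pull alpine:3
docker images
# The SIZE column shows Alpine at only a few MB, while a full Debian image
# is around 100 MB or more - so images derived FROM alpine stay much smaller.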
I am using Docker Desktop (19.03.13) with 6 containers on Windows 10, on a machine with 16 GB of RAM.
In docker stats each container consumes 20-500 MB; all together they consume ~1 GB.
But in Task Manager Docker eats ~10 GB and crashes from lack of system memory.
How can I check what consumes so much memory in Docker?
And how can I prevent this?
Try creating a .wslconfig file at the root of your user folder, C:\Users\<my-user>, to adjust how much memory and how many processors Docker will use.
This is the content of the .wslconfig file.
[wsl2]
memory=2GB # Limits VM memory in WSL 2 up to 2GB
processors=2 # Makes the WSL 2 VM use two virtual processors
Then restart the computer. You will find that the Vmmem process only takes the amount of RAM you defined above.
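In my experience (this is an assumption on my part, not something the docs require), a full reboot is not strictly necessary; shutting down the WSL 2 VM and restarting Docker Desktop is usually enough for the new limits to take effect:

wsl --shutdown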
You can learn more here.
I guess you are using the new WSL 2 based engine. Try switching the Docker engine back to Hyper-V by opening Docker settings -> General -> unchecking "Use the WSL 2 based engine".
To explain:
I noticed this started happening to me when the WSL 2 engine was introduced; I switched to it right away since it was the new engine, and memory issues started arising from then on.
Restarting/closing Docker did not free the memory, and I noticed in Task Manager that Vmmem was the process eating all the memory, so I had to force close it (which caused Docker to stop working).
The last thing I did was switch the Docker engine back to Hyper-V, which solved my high memory usage.
If you are using WSL 2, set half of your RAM in the .wslconfig. I don't know why, but I had the same problem with 8 GB of RAM.
This is my .wslconfig
[wsl2]
memory=4GB # I have 8GB RAM
processors=2
And the result was good, because the memory consumption is now reasonable! At the moment I have Docker running with 8 images.
Although this problem is already marked as SOLVED, there is still another possible cause in recently updated versions:
you might have allocated too many resources to the Docker VM (hyperkit).
Go to Settings -> Resources -> Advanced
and check whether you have assigned too many resources there.
My Docker is taking less than 2% CPU now.
After updating .wslconfig to be:
[wsl2]
memory=8GB
swap=2000
processors=4
... and then restarting Docker, the CPU consumption was still over 80% and there were 5 Docker Desktop processes (each taking 17-18%) in Windows Task Manager. I reset Docker to factory defaults and the CPU was still pegged at 80% or more.
I then deleted the .docker folder (on Windows the path is %USERPROFILE%/.docker) as suggested by jmichalek-fp. I took care to use Shift-DEL so as not to move it to the recycle bin, because I remember that in the past recycled items were still found by processes holding a link to the file.
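If you prefer to do this from a terminal instead of Explorer, something along these lines (cmd.exe) deletes the folder permanently without going through the recycle bin - double-check the path before running it:

rmdir /s /q "%USERPROFILE%\.docker"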
After the factory reset, increasing the .wslconfig resources, deleting the .docker folder, and restarting Docker, it now runs only one Docker Desktop process, and with a Node.js app running in it, it consumes between 0.5% and 2% CPU.
I found "delete .docker folder" in this github issue: https://github.com/docker/for-win/issues/12266
As far as I know, docker stats does not show RAM reservations. Try setting RAM limits using the -m flag. There is some information on how to control resources with Docker here:
https://docs.docker.com/config/containers/resource_constraints/?spm=a2c41.12663380.0.0.59ed566dAqUZPu
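A small sketch of what that looks like (the container name, limits and image are made up for illustration):

# cap the container at 512 MB of RAM and 1.5 CPUs
docker run -d --name limited-app -m 512m --cpus="1.5" nginx:alpine
docker stats limited-app   # the MEM USAGE / LIMIT column now shows the 512 MiB cap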
I am guessing that on Windows there is something similar to what exists on macOS.
Open your docker app and go to the dashboard
Click any container
Click Stats
You will get information about CPU, RAM, disk read/write, and network usage.
When I had memory issues, which used to happen frequently, I would set up alias scripts that I could chain together to stop/kill/restart containers and do whatever setup I needed on them.
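For example, aliases along these lines (the names and exact commands are mine, adjust them to your own setup):

alias dstopall='docker stop $(docker ps -q)'          # stop every running container
alias dkillall='docker kill $(docker ps -q)'          # force-kill them if stop hangs
alias dclean='docker container prune -f && docker image prune -f'   # reclaim space
alias dreup='docker-compose down && docker-compose up -d'           # restart a compose stack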
There is no way to prevent Docker from behaving the way it does, unless you want to start contributing and making pull requests. This isn't an uncommon issue. Docker is free, so I recommend working around its shortcomings.
I have noticed that asset requests are very, very slow in my Rails app. When the files are inside the Docker image, it takes around 20 ms to get an asset file. When I start the container and mount the files instead, it takes around 400 ms to fetch them!
The Docker filesystem is slow, but the Rails app boot time is pretty much the same in both cases, so that is not necessarily the reason. Do you have an idea what the reason could be here?
I had the same issue, and it was practically impossible to work in a development environment with a Dockerized Rails application, because on Mac it is terribly slow.
This is a known issue: Docker is very slow on Mac and Windows, in particular because of volume mounting.
First of all we took some precautions:
Be sure that you are not mounting big files or folders. For example, my log directory was 10 GB! You can install ncdu to find big files/folders (a quick ncdu example follows below); see: https://maketips.net/tip/461/docker-build-is-slow-or-docker-compose-build-is-slow
Check whether you are hitting this known network issue: https://github.com/docker/compose/issues/3419#issuecomment-221793401
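As mentioned in the first point, ncdu is a quick way to spot oversized folders in the mounted directory (assuming it is installed, e.g. via brew on Mac or apt on Linux):

brew install ncdu      # or: apt-get install ncdu
ncdu .                 # interactive disk-usage browser; look for huge log/, tmp/, node_modules/ folders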
Anyway, the precautions above didn't help much.
The big improvement came from adding the docker-sync gem!
Check-out this: http://docker-sync.io/
Basically, with this gem you use a different approach to syncing folders between your machine and the app container. This works very well, and now everything is very fast, almost on par with Linux performance!
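The workflow, roughly (this assumes you have written a docker-sync.yml describing which host folder to sync, and that your docker-compose.yml mounts the resulting named volume instead of the host path):

gem install docker-sync
docker-sync-stack start    # starts the unison sync container and your compose stack together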
My .dockerignore is set up to ignore busy directories, but altering a single file seems to have a huge impact on runtime performance.
If I make a change to a single, non-dependent file (for example a .php or .jpg file) in the origin directory, the next request is really slow.
Subsequent requests are fast, until I make a change to any file in the origin directory, and then request times return to ~10 s.
Neither :cached nor :delegated makes any difference.
Is there any way to speed this up? It seems like Docker is doing a lot in the background, considering that only one file has changed.
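For reference, this is the mount syntax I mean by :cached / :delegated (the image name is a placeholder); neither variant changed anything for me:

docker run -v "$PWD":/var/www/html:cached    my-app-image   # host view is authoritative, container reads may lag
docker run -v "$PWD":/var/www/html:delegated my-app-image   # container view is authoritative, host may lag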
The .dockerignore file does not affect volume mounts. It is only used when sending the build context to the Docker daemon during an image build. So that is not a factor here.
Poor performance in some situations is a longstanding known issue in Docker for Mac. They discuss this topic in the documentation. In my experience, the worst performance happens with fs event scanners, i.e. you are watching some directory for changes and reloading an app server in response. My way of dealing with that is to disable the fs event watcher and restart the app server manually when that's needed. (May or may not be practical for your situation.)
The short answer is that you can try third party solutions, or you can accept the poor performance in development, realizing it won't follow you to production (which is probably not going to be on the Mac platform).
I ran into a similar issue, but on Windows. The way I got around it was to use Vagrant. Vagrant has great support for provisioning with Docker. In your Vagrantfile, set up the shared directory to use rsync; this copies the directories onto the VM, where Docker can access them quickly because they are local to the VM.
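A rough sketch of what that can look like (box name, image, paths and ports are placeholders, not a drop-in config):

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  # rsync the project into the VM instead of using a slow shared folder
  config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"
  # let Vagrant install Docker and run the container inside the VM
  config.vm.provision "docker" do |d|
    d.run "my-app", image: "my-app:latest",
      args: "-v /vagrant:/app -p 3000:3000"
  end
end
EOF
vagrant up    # rsyncs the folder, provisions Docker and starts the container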
This is a great article that helped me come to this conclusion: http://blog.zenika.com/2014/10/07/setting-up-a-development-environment-using-docker-and-vagrant/
More information on provisioning vagrant using docker: https://www.vagrantup.com/docs/provisioning/docker.html
More information on vagrant rsync: https://www.vagrantup.com/docs/synced-folders/rsync.html
I hope this helps you as much as it did me.
I'm not really good at administrative tasks. I need a couple of Tomcat, LAMP and Node.js servers behind nginx. Setting everything up directly on the system seems really complicated to me. I'm thinking about containerizing the server: install Docker and create an nginx container, a Node.js container, etc.
I expect it to be easier to manage; only the routing to the front nginx might be a bit of a hassle. It will also give me the ability to back up and add servers easily, not to mention remote deployment and management, plus repeatability of the server setup. Separation will probably also shield me from the recurring problem of completely breaking the server by changing some init script, messing up some app server setup, etc.
Is my expectation correct that Docker will abstract me a little more from "raw" system administration?
Side question: is there an administrative GUI I can run to easily deploy, start/stop and interconnect the containers?
UPDATE
I found a nice note here:
By containerizing Nginx, we cut down on our sysadmin overhead. We will no longer need to manage Nginx through a package manager or build it from source. The Docker container allows us to simply replace the whole container when a new version of Nginx is released. We only need to maintain the Nginx configuration file and our content.
Yes, Docker will do this for you, but that does not mean you will no longer administer the OS your services run on.
It's more that Docker simplifies that management, because you:
do not need to pick one specific OS for all of your services, which would otherwise force you to install a service outside the package manager because it has not been released for the OS of your choice, leaving you with the wrong version and so on. Instead, Docker lets you pick the right OS or OS version (Debian wheezy, jessie, or Ubuntu 12.x, 14.x, 16.x - or even Alpine) for the service in question.
get pre-made images, so you do not need to build the image for nginx, mysql, nodejs and so on yourself; you can find those on https://hub.docker.com (see the nginx example after this list).
can remove a service again very easily and conveniently, without littering your system over time.
get better "mobility": you can easily move the stack or replicate it on a different host - you do not need to reconfigure the host and hope it ends up "the same".
do not need to think about convergence of containers during their lifetime or across stack improvements, since they are recreated from the image again and again - from scratch, so no convergence is needed.
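To make the nginx case from the quoted note concrete (paths and names are placeholders):

docker run -d --name web \
  -p 80:80 \
  -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /srv/www:/usr/share/nginx/html:ro \
  nginx:stable
# Upgrading nginx later is just: docker pull nginx:stable, remove the old container
# and run the same command again - the config and content stay on the host.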
But Docker also has downsides (cons):
It adds more complexity, since you might run "more microservices". You might need service discovery and a live configuration system, and you need to understand the storage system (volumes) quite a bit.
Docker does not "remove" the OS layer, it just makes it simpler to deal with. You still need to maintain it.
Volumes in general might not feel as simple as local file storage (depending on what you choose).
GUI
I think the most compelling match for what you would call a "GUI" is Rancher (http://rancher.com/) - it is more than a GUI, it is a complete Docker server management stack. Steep learning curve at first, a lot of gain afterwards.
You will still need to manage the Docker host OS - operations like:
Adding Disks from time to time.
Security Updates
Rotating Logs
Managing Firewall
Monitoring via SNMP/etc
NTP
Backups
...
Docker Advantages:
Rapid application deployment
Portability across machines
Version control and component reuse
Lightweight footprint and minimal overhead
Simplified maintenance
...
Docker Disadvantages:
Adds Complexity (Design, Implementation, Administration)
There are GUI tools available; some of them are:
Kitematic -> windows/mac
Panamax
Lorry.io
docker ui
...
Recommendation: start learning the Docker CLI, as the GUI tools don't have all of its nifty features.