Rails assets are very slow on macOS Docker - ruby-on-rails

I have noticed that asset requests are very, very slow in my Rails app. When the files live inside the Docker image, it takes around 20 ms to get an asset file. When I start the container and mount the files as a volume, it takes around 400 ms to fetch them!
The Docker filesystem is slow, but the Rails app's boot time is pretty much the same in both cases, so that is not necessarily the reason. Do you have an idea what the cause could be here?

I had the same issue, and working in a development environment with a Dockerized Rails application was practically impossible because it is terribly slow on Mac.
This is a known issue: Docker is very slow on Mac and Windows, in particular because of volume mounting.
First of all we took some precautions:
Be sure that you are not mounting big files or folders. For example, my log directory was 10 GB! You can install ncdu to find big files/folders (a quick check is sketched below); follow this: https://maketips.net/tip/461/docker-build-is-slow-or-docker-compose-build-is-slow
Check whether you are hitting this known network issue: https://github.com/docker/compose/issues/3419#issuecomment-221793401
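A quick way to spot oversized folders before bind-mounting the project (a minimal sketch; the folder names are just typical Rails suspects):

# check the size of commonly bloated folders before bind-mounting the project
du -sh log tmp node_modules 2>/dev/null
# or browse the whole tree interactively
ncdu .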
Anyway, the precautions above didn't help much.
The big improvement came from adding the docker-sync gem!
Check out this: http://docker-sync.io/
Basically, with this gem you use a different approach to sync folders between your machine and the app container. This works very well, and now everything is very fast, almost on par with Linux performance!
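To illustrate, a minimal setup looks roughly like this (a sketch only; the sync name app-sync, the mount point /app and the service name web are assumptions you would adapt to your project):

# docker-sync.yml (sketch): a named volume that docker-sync keeps up to date with unison
version: "2"
syncs:
  app-sync:                                # hypothetical sync/volume name
    src: './'                              # host folder to sync into the volume
    sync_strategy: 'unison'
    sync_excludes: ['log', 'tmp', '.git']  # keep big, churny folders out of the sync

# docker-compose.override.yml (sketch): mount the synced volume instead of a direct bind mount
version: '2'
volumes:
  app-sync:
    external: true
services:
  web:
    volumes:
      - app-sync:/app:nocopy

You then start everything with docker-sync-stack start (or run docker-sync start and docker-compose up separately) instead of a plain docker-compose up.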

Related

How to speed up file change from host into docker container?

My host is macOS with Docker Desktop. I have a Debian container in which a PHP application is running. Parts of the PHP application are part of the Docker image; the parts I am still working on are shared with the host through a volume. Think of
docker run -td --name my-app -v /Users/me/mycode:/var/www/html/phpApp/variableParts
My problem: when I save a change on the host, it takes some 10-15 seconds until the change becomes available to the containerized app. So (1) after every save I have to wait (too) long for the code to be available, and (2) I cannot be sure whether I am already seeing the new code running or still the old one.
My problem is not that execution of the application is slow (as some sources on the web suggest); in fact it is quite fast. My problem is that the time for a change to propagate from the host to the Docker container is too long. Earlier I developed with the code from the remote server NFS-mounted on my development machine, and there it was blazing fast.
Is there any way I can reasonably speed this up? Or does a different workflow make more sense? Would mounting the code parts I want to edit from the container (as NFS server) to the host (where the editor runs) make sense?
My workflow consists of many small adaptations to the PHP code, so waiting 10-15 seconds after every edit is a no-go.
I have used Docker on Mac, and have seen edits to a bind mount propagate to the Docker container in under a second, so I think Docker is not to blame here.
Instead, I would look at any caching that PHP is doing. Is PHP reloading your code from disk on every page view, or does it cache it? For example, the opcache feature of PHP keeps a pre-compiled version of your PHP code in memory, and occasionally checks if that version is still up to date. Take a look at your php.ini, and in particular what opcache.revalidate_freq is set to.
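For a development container, the relevant php.ini lines would look something like this (a sketch of one possible dev configuration, not production advice):

; php.ini (development sketch)
opcache.enable=1
; check file timestamps so edited files are noticed at all
opcache.validate_timestamps=1
; 0 = revalidate on every request, so a saved change shows up immediately
opcache.revalidate_freq=0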

.NET app under Docker: significant delay in writing logs to a file shared with the host system

We have a .NET app that writes logs to a file target using the NLog logger; logs are recorded all the time, every second. If you run it on Windows, with no Docker, everything works fine: log records appear in the file immediately. But deployed to our cluster of Linux Docker containers, it takes from several minutes to hours to flush data into the file, which is shared with our host system. I can see data in the database, indicating that the app ran successfully, but the log file is not updated for a while. Having very little experience with Docker, I am not sure what could be causing this, or even where to look. I found a YAML file that contains something like this:
mount -v -t cifs //10.153.1.61/apps/configs/stage/testApp/logs /logs/ -o credentials=/smb/smbcredentials;
As it works fine without Docker, I believe something is wrong in the way we create images and deploy the containers. Any ideas on where to direct the investigation are very much appreciated.
I think you see the slowness because you are trying to write your logs over the CIFS network share; that is a lot of overhead. You should consider using one of the commonly available distributed log handling solutions out there, such as Graylog, ELK, or Splunk (https://www.splunk.com/).
NLog looks like it has a lot of integrations to choose from, and there are very detailed step-by-step tutorials available that explain the process.
Using centralized log collection will not only speed things up for you, it will also let you query and combine logs from multiple containers and build graphs and dashboards, giving you more insight into the current status of your system.
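As a rough illustration, an NLog.config that ships log events over the network to a central collector instead of writing to the CIFS-mounted file could look like this; the address tcp://logs.example.internal:5000 is a placeholder for whatever Graylog/Logstash/Splunk input you actually run:

<!-- NLog.config (sketch): send JSON log lines over TCP to a central collector -->
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- built-in Network target; replace the address with your collector endpoint -->
    <target name="collector" xsi:type="Network" address="tcp://logs.example.internal:5000">
      <layout xsi:type="JsonLayout">
        <attribute name="time" layout="${longdate}" />
        <attribute name="level" layout="${level}" />
        <attribute name="message" layout="${message}" />
      </layout>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="collector" />
  </rules>
</nlog>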

Avoiding a constant "stop, build, up" loop when developing with Docker locally

I've recently delved into the wonder that is Docker and have set up my containers using docker-compose (an Express app and a MySQL DB).
It's been great for sharing the project with the team and pushing to a VPS, but one thing that's fast becoming tedious is the need to stop the running app, docker-compose build, then docker-compose up any time the code changes (which I believe also creates numerous unnecessary images?).
I've scoured around but haven't found a clear-cut way to get past this, barring ditching docker-compose locally and using docker run to run the Express app pointed at a local DB (which would do away with a lot of the easy-setup perks that come with Docker, such as building the DB from scratch).
Is there a Nodemon-style way of working with Docker (images/containers get updates automatically when code changes)? Is there something obvious I'm missing? Or is my approach the necessary "evil" that comes with working on a Dockerised app?
You can mount a volume to your source directory for local development. Any changes on the host will be reflected in the container. https://docs.docker.com/storage/volumes/
You might consider separate services for deployment/development. I usually have a separate service which mounts the source directory and installs test dependencies inside the container.
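A minimal sketch of that idea for the Express app (the service name app, the paths, and server.js are assumptions, and nodemon is assumed to be in the app's devDependencies): a development-only compose override that bind-mounts the source and runs the server through nodemon, so code changes restart the app without rebuilding the image.

# docker-compose.override.yml (sketch): development-only overrides, picked up automatically by docker-compose up
services:
  app:                            # assumed service name from your docker-compose.yml
    volumes:
      - ./src:/usr/src/app/src    # bind-mount the source so edits appear inside the container
    command: npx nodemon --legacy-watch src/server.js   # --legacy-watch polls, since fs events may not cross the mount

With that override in place, only dependency or Dockerfile changes still need a docker-compose build; day-to-day code edits are picked up by nodemon.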

Docker for Mac - sync issue

So I have noticed that on Mac there is a huge problem with syncing while developing a PHP app. It can take up to 60 seconds before a page loads.
Since Docker on Mac runs inside an additional virtual machine, I have used http://docker-sync.io to fix it. But I wonder, are you having similar issues? Yesterday I noticed that there is something called File Sharing in the Docker settings. Since I've put my code at /Volumes/Documents/wwwdata, do I have to add that path there as well?
As the author of docker-sync, I might be able to give you a comprehensive answer.
As of yet, under macOS, there is no solution using the native Docker for Mac tooling that gives you a somewhat acceptable development environment, which means sharing source code into the container during its lifetime.
The main reason is that read and write speeds on mounted volumes in Docker for Mac are extremely slow; see the performance comparison. That said, you could mount a volume using -v or volumes into a normal container, but this will be extremely slow. VirtualBox or Fusion shares are slow for the same reasons; OSXFS currently performs better than those, but it is still horribly slow.
Docker-sync decouples you from OSXFS's slow read/write speed by syncing files with unison rather than mounting them directly.
Long story short:
Docker for Mac is still (very) slow; this holds even for High Sierra with APFS. It is unusable for development purposes.
The "folders" you are looking at under File Sharing are nothing more than OSXFS-based mounts into the HyperKit VM, which is exactly what has been used in the past; you can now simply configure folders other than the defaults to be OSXFS-synced and available for mounting. So this will not help you at all either.
To make this answer more balanced towards the general case: you can find alternatives to docker-sync here. The sheer number of alternatives also tells you that there is (still) a huge issue in Docker for Mac; it is not something docker-sync made up.

Docker for Mac — Extremely slow request times

My .dockerignore is set up to ignore busy directories, but altering a single file seems to have a huge impact on run performance.
If I make a change to a single, non-dependent file (for example .php or .jpg) in the origin directory, the performance of the next request is really slow.
Subsequent requests are fast, until I make a change to any file in the origin directory and then request times return to ~10s.
Neither :cached nor :delegated makes any difference.
Is there any way to speed this up? It seems like Docker is doing a lot in the background, considering only one file has changed.
The .dockerignore file does not affect volume mounts. It is only used when sending context to the Docker daemon during image builds. So that is not a factor here.
Poor performance in some situations is a longstanding known issue in Docker for Mac. They discuss this topic in the documentation. In my experience, the worst performance happens with fs event scanners, i.e. you are watching some directory for changes and reloading an app server in response. My way of dealing with that is to disable the fs event watcher and restart the app server manually when that's needed. (May or may not be practical for your situation.)
The short answer is that you can try third party solutions, or you can accept the poor performance in development, realizing it won't follow you to production (which is probably not going to be on the Mac platform).
I ran into a similar issue, but on Windows. The way I got around it was to use Vagrant. Vagrant has great support for provisioning using Docker. In your Vagrantfile, set up the shared directory to use rsync. This copies the directories onto the VM, so Docker can access them quickly because they are local to the VM (a sketch follows below).
This is a great article that helped me come to this conclusion: http://blog.zenika.com/2014/10/07/setting-up-a-development-environment-using-docker-and-vagrant/
More information on provisioning vagrant using docker: https://www.vagrantup.com/docs/provisioning/docker.html
More information on vagrant rsync: https://www.vagrantup.com/docs/synced-folders/rsync.html
I hope this helps you as much as it did me.
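For reference, a Vagrantfile along those lines might look roughly like this (a sketch; the box, paths, image name and ports are placeholders for your own setup):

# Vagrantfile (sketch): rsync the code into the VM, then run the app via the Docker provisioner
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"                        # placeholder box

  # one-way rsync from host to VM; run `vagrant rsync-auto` to push edits continuously
  config.vm.synced_folder "./src", "/home/vagrant/src", type: "rsync",
    rsync__exclude: [".git/", "node_modules/"]

  config.vm.provision "docker" do |d|
    d.run "my-app",                                       # hypothetical container and image names
      image: "my-app:latest",
      args: "-v /home/vagrant/src:/var/www/html -p 8080:80"
  end
end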
