File modification watch (Webpack, Guard...) issue over NFS in Virtual Machine (VM) - docker

I know there are multiple threads discussing NFS-mounted volumes and file modification watch issues. Since most of the discussions are old, some from 8 years ago, my goal here is to compile some of them and bring the topic up again, to check what solutions you have been using recently to handle these issues.
The core problem
Linux relies on inotify, a kernel subsystem, to generate events when files are modified (changed/deleted), and developer tools most often use those events to watch files and trigger tasks. The core issue is that when a volume/folder is shared via the NFS protocol, changes made on the other side of the share don't generate inotify events on the client, so the tools have to fall back to polling instead of reacting to events.
Polling often creates problems of its own, such as high CPU usage, delays between a file change and the triggered task, and so on.
Some of the watching tools:
Webpack watch: https://webpack.js.org/configuration/watch/
Guard: https://github.com/guard/guard
Popular threads
inotify with NFS
Why are inotify events different on an NFS mount?
File update in shared folder does not trigger inotify on Ubuntu
Triggering file changes over VirtualBox shared folders
Nice solution attempts
Vagrant file system notification forwarder plugin - ABANDONED AND NOT RELIABLE
vagrant-fsnotify - ABANDONED AND NOT RELIABLE
My current challenge
We run our dev env with macOS as the host, Vagrant (VirtualBox provider) with Alpine Linux as the guest, and Docker containers for the services (Node, NGINX...). The setup runs smoothly for everything except when the frontend developers need to watch file modifications using webpack's watch feature. It works with polling, but with a delay of 3-10 seconds.
Any updates or any recommendations on how you solve this problem?

I've been able to work around this issue by setting the actimeo NFS option to 1. This reduces the length of time that NFS caches the filesystem attributes, which seems to keep the host and guest in near sync. Webpack watch now picks up changes for me almost immediately.
Here is the Vagrantfile setting I use to implement this NFS share; note the mount_options setting:
config.vm.synced_folder "./my_host_syncd_folder", "/guest/path", type: 'nfs', mount_options: ['actimeo=1']
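If you do have to stay on polling, webpack's watchOptions let you tune the interval and debounce. A minimal sketch of a webpack.config.js (the values are illustrative):
module.exports = {
  // ...existing config...
  watchOptions: {
    poll: 1000,            // check for changes every second instead of waiting for inotify
    aggregateTimeout: 300, // wait 300 ms after the first change before rebuilding
    ignored: /node_modules/,
  },
};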

Related

Dask + SLURM over FTP mount (CurlFtpFS)

So I have a working Dask/SLURM cluster of 4 Raspberry Pis with a common NFS share, on which I can run Python jobs successfully.
However, I want to add some more ARM devices to my cluster that do not support NFS mounts (kernel module missing), so I wish to move to FUSE-based FTP mounts with CurlFtpFS.
I have set up the mounts successfully with an anonymous username and no password, and the common FTP share can be seen by all the nodes (just as before, when it was an NFS share).
I can still run SLURM jobs (since they do not use the share), but when I try to run a Dask job, the master node times out, complaining that no worker nodes could be started.
I am not sure what exactly the problem is, since the share is open to anyone for read/write access (e.g. logs and Dask queue intermediate files).
Any ideas how I can troubleshoot this?
I don't believe anyone has a cluster like yours!
At a guess, filesystem access via FUSE, FTP, and the Pi is much slower than the OS is expecting, and you are seeing the effects of low-level timeouts, i.e., from Dask's point of view it appears that file reads are failing. Dask needs access to storage for configuration and sometimes temporary files; you would want to make sure these locations are on local storage or turned off. However, if this is happening during import of modules, which you have on the shared drive by design, there may be no fixing it (Python loads many small files during import). Why not use rsync to move the files to the nodes?
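For example, a sketch of pointing each worker's scratch space at local storage, using the classic dask-worker CLI (the scheduler address and path are illustrative):
# keep Dask's temporary files off the FUSE-mounted share
dask-worker tcp://scheduler:8786 --local-directory /tmp/dask-worker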

Docker for Mac - sync issue

So I have noticed that on Mac there is a huge problem with file sync while developing a PHP app. It can take up to 60 seconds before a page loads.
Since Docker on Mac uses an additional virtual machine, I have used http://docker-sync.io to fix it. But I wonder, are you having similar issues? Yesterday I noticed that there is something called File Sharing in the Docker settings.
Since I've put my code at /Volumes/Documents/wwwdata, do I have to add that path there as well?
As the author of docker-sync, I might be able to give you a comprehensive answer.
As of today, under macOS there is no solution using native Docker for Mac tooling that gives you a somewhat acceptable development environment, which means sharing source code into the container during its lifetime.
The main reason is that read and write speeds on mounted volumes in Docker for Mac are extremely slow; see the performance comparison. That said, you could mount a volume using -v or volumes into a normal container, but it will be extremely slow. VirtualBox or Fusion shares are slow for the same reasons; OSXFS currently performs better than those, but is still horribly slow.
Docker-sync works around the slow read/write speed of OSXFS by using unison to sync files rather than mounting them directly.
Long story short:
Docker for Mac is still (very) slow; this holds even for High Sierra with APFS. It is unusable for development purposes.
The "folders" you are looking at under File Sharing are nothing more than OSXFS-based mounts into the HyperKit VM, i.e. exactly what has been used in the past; you can now simply configure folders other than the defaults to be OSXFS-synced and available for mounting. So this will not help you either.
To make this answer more balanced towards the general case: you can find alternatives to docker-sync here. The number of alternatives also tells you that there is (still) a huge issue in Docker for Mac; it's not something docker-sync made up.

Docker for Mac — Extremely slow request times

My .dockerignore is set up to ignore busy directories, but altering a single file seems to have a huge impact on runtime performance.
If I make a change to a single, non-dependent file (for example a .php or .jpg file) in the origin directory, the performance of the next request is really slow.
Subsequent requests are fast, until I make a change to any file in the origin directory, and then request times return to ~10s.
Neither :cached nor :delegated makes any difference.
Is there any way to speed this up? It seems like Docker is doing a lot in the background, considering only one file has been changed.
The .dockerignore file does not affect volume mounts. It is only used when sending context to the Docker daemon during image builds. So that is not a factor here.
Poor performance in some situations is a longstanding known issue in Docker for Mac. They discuss this topic in the documentation. In my experience, the worst performance happens with fs event scanners, i.e. you are watching some directory for changes and reloading an app server in response. My way of dealing with that is to disable the fs event watcher and restart the app server manually when that's needed. (May or may not be practical for your situation.)
The short answer is that you can try third party solutions, or you can accept the poor performance in development, realizing it won't follow you to production (which is probably not going to be on the Mac platform).
I ran into a similar issue, but on Windows. The way I got around it was to use Vagrant, which has great support for provisioning with Docker. In your Vagrantfile, set up the shared directory to use rsync; this copies the directories onto the VM itself, where Docker can access them quickly.
This is a great article that helped me come to this conclusion: http://blog.zenika.com/2014/10/07/setting-up-a-development-environment-using-docker-and-vagrant/
More information on provisioning vagrant using docker: https://www.vagrantup.com/docs/provisioning/docker.html
More information on vagrant rsync: https://www.vagrantup.com/docs/synced-folders/rsync.html
I hope this helps you as much as it did me.

Docker, what is it and what is the purpose

I've heard about Docker some days ago and wanted to go across.
But in fact, I don't know what is the purpose of this "container"?
What is a container?
Can it replace a virtual machine dedicated to development?
What is the purpose, in simple words, of using Docker in companies? The main advantage?
VM: Using virtual machine (VM) software, Ubuntu, for example, can be installed inside Windows, and both run at the same time. It is like building a PC, with core components like CPU, RAM, disks, and network cards, inside an operating system, assembled to work as if it were a real PC. The virtual PC becomes a "guest" inside an actual PC with its own operating system, which is called the host.
Container: It's the same idea as above, but instead of using an entire operating system, it cuts out the "unnecessary" components of the virtual OS to create a minimal version of it. This led to the creation of LXC (Linux Containers). Containers should therefore be faster and more efficient than VMs.
Docker: A Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the Linux kernel's functionality and uses resource isolation.
Purpose of Docker: Its primary focus is to automate the deployment of applications inside software containers and the automation of operating-system-level virtualization on Linux. It's more lightweight than standard containers and boots up in seconds.
(Notice that there's no Guest OS required in case of Docker)
[ Note, this answer focuses on Linux containers and may not fully apply to other operating systems. ]
What is a container ?
It's an App: A container is a way to run applications that are isolated from each other. Rather than virtualizing the hardware to run multiple operating systems, containers rely on virtualizing the operating system to run multiple applications. This means you can run more containers on the same hardware than VMs because you only have one copy of the OS running, and you do not need to preallocate the memory and CPU cores for each instance of your app. Just like any other app, when a container needs the CPU or Memory, it allocates them, and then frees them up when done, allowing other apps to use those same limited resources later.
They leverage kernel namespaces: Each container by default will receive an environment where the following are namespaced:
Mount: filesystems, / in the container will be different from / on the host.
PID: process IDs; PID 1 in the container is your launched application, and this PID will be different when viewed from the host.
Network: containers run with their own loopback interface (127.0.0.1) and a private IP by default. Docker uses technologies like Linux bridge networks to connect multiple containers together in their own private LAN.
IPC: interprocess communication
UTS: this includes the hostname
User: you can optionally shift all the user IDs to be offset from those of the host
Each of these namespaces also prevent a container from seeing things like the filesystem or processes on the host, or in other containers, unless you explicitly remove that isolation.
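A quick way to see some of these namespaces in action, assuming a local Docker install and the alpine image:
# PID namespace: ps inside the container sees only its own processes, starting at PID 1
docker run --rm alpine ps
# UTS namespace: the container has its own hostname, not the host's
docker run --rm alpine hostname
# Mount namespace: / is the image's filesystem, not the host's root
docker run --rm alpine ls /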
And other Linux security tools: Containers also utilize other security features like SELinux, AppArmor, capabilities, and seccomp to prevent users inside the container, including the root user, from escaping the container or negatively impacting the host.
Package your apps with their dependencies for portability: Packaging an application into a container involves assembling not only the application itself, but all dependencies needed to run that application, into a portable image. This image is the base filesystem used to create a container. Because we are only isolating the application, this filesystem does not include the kernel and other OS utilities needed to virtualize an entire operating system. Therefore, an image for a container should be significantly smaller than an image for an equivalent virtual machine, making it faster to deploy to nodes across the network. As a result, containers have become a popular option for deploying applications into the cloud and remote data centers.
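As a sketch, packaging a Node.js app and its dependencies might look like this (the base image, file names, and npm invocation are illustrative):
FROM node:18-alpine           # minimal base: runtime only, no full guest OS
WORKDIR /app
COPY package*.json ./
RUN npm install --production  # bake the app's dependencies into the image
COPY . .
CMD ["node", "server.js"]     # the single app this container exists to run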
Can it replace a virtual machine dedicated to development ?
It depends: If your development environment is running Linux, and you either do not need access to hardware devices or it is acceptable to have direct access to the physical hardware, then you'll find a migration to a Linux container fairly straightforward. The ideal target for a Docker container is an application like a web-based API (e.g. a REST app) that you access via the network.
What is the purpose, in simple words, of using Docker in companies ? The main advantage ?
Dev or Ops: Docker is typically brought into an environment along one of two paths: developers looking for a way to more rapidly develop and locally test their applications, and operations looking to run more workload on less hardware than would be possible with virtual machines.
Or DevOps: One of the ideal targets is to leverage Docker immediately from the CI/CD deployment tool, compiling the application and immediately building an image that is deployed to development, CI, prod, etc. Containers often reduce the time to move the application from code check-in to availability for testing, making developers more efficient. And when designed properly, the same image that was tested and approved by the developers and CI tools can be deployed in production. Since that image includes all the application dependencies, the risk of something breaking in production that worked in development is significantly reduced.
Scalability: One last key benefit of containers that I'll mention is that they are designed with horizontal scalability in mind. When you have stateless apps under heavy load, containers are much easier and faster to scale out due to their smaller image size and reduced overhead. For this reason you see containers being used by many of the larger web-based companies, like Google and Netflix.
The same questions were hitting my head some days ago; here is what I found after getting into it, in very simple words.
Why would one think about Docker and containers when everything seems fine with the current process of application architecture and development?
Let's take an example: we are developing an application using Node.js, MongoDB, Redis, RabbitMQ, etc. (you can think of any other services).
If we forget about Docker and other containerization alternatives, we face the following problems in the application development and shipping process:
Compatibility of services (Node.js, MongoDB, Redis, RabbitMQ, etc.) with the OS. Even after finding compatible versions, if something unexpected happens related to versions, we need to revisit the compatibility and fix it.
Two system components may require a library/dependency with different versions on the same OS, which again needs a relook every time the application misbehaves due to a library or dependency version issue.
Most importantly, if a new person joins the team, we find it very difficult to set up the new environment: the person has to follow a large set of instructions and run hundreds of commands to finally set up the environment, and it takes time and effort.
People have to make sure they are using the right version of the OS and check the compatibility of services with the OS, and each developer has to go through this every time they set up.
We also have different environments like dev, test, and production. If one developer is comfortable using one OS and another is comfortable with a different OS, we can't guarantee that our application will behave the same way in these two different situations.
All of this makes our life difficult in the process of developing, testing, and shipping applications.
So we need something that handles the compatibility issues and allows us to make changes and modifications to any system component without affecting the other components.
Now we think about Docker, because its purpose is to containerise applications, automate their deployment, and ship them very easily.
How Docker solves the above issues:
We can run each service component (Node.js, MongoDB, Redis, RabbitMQ) in a different container, with its own dependencies and libraries, on the same OS but in different environments.
We only have to write the Docker configuration once; then all our team's developers can get started with a simple docker run command. We have saved a lot of time and effort here :).
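As a rough sketch of that workflow (the app image name my-node-app is hypothetical; the service images are the official ones on Docker Hub):
docker network create app-net                      # private network so containers can reach each other by name
docker run -d --name mongo    --network app-net mongo
docker run -d --name redis    --network app-net redis
docker run -d --name rabbitmq --network app-net rabbitmq
# the app container connects to the services at the hostnames "mongo", "redis", "rabbitmq"
docker run -d --name api --network app-net -p 3000:3000 my-node-app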
So containers are isolated environments, with all dependencies and libraries bundled together, and with their own processes, networking interfaces, and mounts.
All containers use the same OS resources, so they take less time to boot up and utilise the CPU efficiently, with lower hardware costs.
I hope this would be helpful.
Why use docker:
Docker makes it really easy to install and run software without worrying about setup or dependencies. It makes it straightforward to install and run software on any given computer, not just your own, and on web servers or any cloud-based computing platform as well. For example, when I went to install Redis on my computer using the command below:
wget http://download.redis.io/redis-stable.tar.gz
I got an error.
Now I could definitely go and troubleshoot it, install the missing program, and then try installing Redis again, and I'd kind of get into an endless cycle of troubleshooting like this while installing and running software.
Now let me show you how easy it is to run Redis if you use Docker instead: just run the command docker run -it redis, and it will download and run Redis without any error.
What docker is:
To understand what is docker you have to know about docker Ecosystem.
Docker client, server, Machine, Images, Hub, and Compose are all projects, tools, and pieces of software that come together to form a platform: an ecosystem around creating and running something called containers. When you run the command docker run redis, something called the Docker CLI reaches out to something called Docker Hub and downloads a single file called an image.
An image is a single file containing all the dependencies and all the configuration required to run one very specific program; for example, Redis, which is what the image you just downloaded is meant to run.
This is a single file that gets stored on your hard drive, and at some point you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as a running program with its own isolated set of hardware resources: its own little space of memory, its own little space of networking, and its own little space of hard drive as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The above command starts up the Docker client, or Docker CLI. The CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them over to something called the Docker server, which is in charge of the heavy lifting.
When we ran the command docker run hello-world, that meant we wanted to start up a new container using the image named hello-world. The hello-world image has a tiny program inside it whose sole job is to print out the message you see in the terminal.
When that command was issued over to the Docker server, a series of actions occurred very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy, on your personal machine, of the hello-world image. So the Docker server looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
Because the image cache was empty, the Docker server reached out to a free service called Docker Hub, a repository of free public images that you can download and run on your personal computer. The Docker server downloaded the hello-world image and stored it on your computer in the image cache, where it can be reused very quickly in the future without having to be re-downloaded from Docker Hub.
After that, the Docker server used the image to create an instance of a container. We know that a container is an instance of an image, and its sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and ran a single program inside it, whose purpose was to print out the message you see.
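You can watch the image cache at work yourself (the comments summarize typical output):
docker run hello-world   # first run: "Unable to find image 'hello-world:latest' locally", pulls from Docker Hub
docker images            # hello-world now appears in the local image cache
docker run hello-world   # second run: no pull; the cached image starts immediately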
What a container is:
A container is a process, or a set of processes, that has a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that makes a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it may need, and a portion of each of those resources is made available to that single process.
Let me try to provide answers as simple as possible:
But in fact, I don't know what is the purpose of this "container"?
What is a container?
Simply put: a package containing software. More specifically, an application and all its dependencies bundled together. A regular, non-dockerised application environment is hooked directly to the OS, whereas a Docker container is an OS abstraction layer.
And a container differs from an image in that a container is a runtime instance of an image - similar to how objects are runtime instances of classes in case you're familiar with OOP.
Can it replace a virtual machine dedicated to development?
Both VMs and Docker containers are virtualisation techniques, in that they provide abstraction on top of system infrastructure.
A VM runs a full "guest" operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need. Containers are therefore a lighter-weight technique. The two solve different problems.
What is the purpose, in simple words, of using Docker in companies?
The main advantage?
Containerisation goes hand-in-hand with microservices. The smaller services that make up the larger application are often tested and run in Docker containers. This makes continuous testing easier.
Also, because Docker containers are built from read-only images, they enforce a key DevOps principle: production services should remain unaltered.
Some general benefits of using them:
Great isolation of services
Great manageability as containers contain everything the app needs
Encapsulation of implementation technology (in the containers)
Efficient resource utilisation (due to lightweight OS virtualisation) in comparison to VMs
Fast deployment
If you don't have any prior experience with Docker this answer will cover the basics needed as a developer.
Docker has become a standard tool for DevOps, as it is effective at improving operational efficiency. When you look at why Docker was created and why it is so popular, it is mostly for its ability to reduce the amount of time it takes to set up the environments where applications run and are developed.
Just look at how long it takes to set up an environment with React as the frontend, a Node and Express API as the backend, and MongoDB, and that's just to start. Then, when your team grows and multiple developers work on the same frontend and backend, they need to set up the same resources in their local environments for testing; how can you guarantee every developer will run the same environment resources, let alone the same versions? All of these scenarios play to Docker's strengths, where its value comes from setting up containers with specific settings, environments, and even versions of resources. Simply type a few commands to have Docker set up, install, and run your resources automatically.
Let's briefly go over the main components. A container is basically where your application or specific resource is located. For example, you could have the Mongo database in one container, then the frontend React application, and finally your node express server in the third container.
Then you have an image, which is what the container is built from. An image contains all the information needed to build the container exactly the same way on any system. It's like a recipe.
Then you have volumes, which hold the data of your containers. So if your applications live in containers, which are static and unchanging, the data that changes lives in volumes.
And finally, the piece that allows all these items to speak to each other is networking. That sounds simple, but understand that each container in Docker has no idea of the existence of the other containers; they're fully isolated. So unless we set up networking in Docker, they won't have any way to connect to one another.
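A minimal sketch tying containers, volumes, and networking together (the volume and network names are illustrative):
docker volume create mongo-data        # data lives here, outside the static container
docker network create dev-net          # lets containers find each other by name
# the container is disposable; the database files in the volume persist across rebuilds
docker run -d --name mongo --network dev-net -v mongo-data:/data/db mongo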
There are really good answers above that I found really helpful.
Below I have drafted a simpler answer:
Reasons to dockerize my web application?
a. One OS for multiple applications (resources are shared)
b. Resource management (CPU/RAM) is efficient
c. Serverless implementation made easier: yes, AWS ECS with Fargate, but serverless can also be achieved with Lambda
d. Infra as Code: agreed, but IaC can also be achieved via Terraform
e. The "it works on my machine" issue
Still, the questions below remain open when choosing dockerization:
A simple Spring Boot application:
a. JAR file with size ~50 MB
b. creates a Docker image of ~500 MB
c. Can't I simply choose a small EC2 instance for my microservices?
Financial benefits (reducing the individual instance cost)?
a. No need to pay for individual OS subscriptions
b. Is there any monetary benefit, like the implementation below?
c. Let's say I select a t3.2xlarge (8 cores / 32 GB) and start 4-5 Docker images?

Grails watch files doesn't work inside Docker container running inside a Vagrant virtual machine

I have a fairly nested structure:
MacOSX workstation running a...
Vagrant VirtualBox virtual machine with ubuntu/trusty64 running a...
Docker container running...
my application written in Grails
Every layer is configured in such a way as to share a portion of the file system from the layer above. This way:
Vagrant, with config.vm.synced_folder directive in Vagrantfile
Docker, with the -v command-line switch and the VOLUME directive in the Dockerfile
This way I can do development on my workstation, and the Grails application at the bottom should (ideally) detect changes and recompile/reload on the fly. This is a feature that used to work when I was running the same application straight on macOS, but now Grails seems totally unaware of file changes. Of course, if I open the files with an editor (inside the Docker container) they are indeed changed, and in fact if I stop/restart the Grails app the new code is used.
I don't know how grails implements the watch strategy, but if it depends on some operating system level feature I suspect that file change notifications get lost somewhere in the chain.
Anyone has an idea of what could be the cause and/or how I could go about debugging this?
There are two ways to detect file changes (that I'm aware of):
Polling, which means checking the timestamps of all files in a folder at a certain interval. Getting "near instant" change detection requires very short intervals. This is CPU- and disk-intensive.
OS Events (inotify on Linux, FSEvents on OS X), where changes are detectable because file operations pass through the OS subsystems. This is easy on the CPU and disk.
Network File Systems (NFS) and the like don't generate events. Since file changes do not pass through the guest OS subsystem, the OS is not aware of changes; only the OS making the changes (OS X) knows about them.
Grails and many other File Watcher tools depend on FSEvents or inotify (or similar) events.
So what to do? It's not practical to 'broadcast' NFS changes from host to all guests under normal circumstances, considering the traffic that would potentially generate. However, I am of the opinion that VirtualBox shares should count as a special exception...
A mechanism to bridge this gap could involve a process that watches the host for changes and triggers a synchronization on the guest.
Check these articles for some interesting ideas and solutions, involving some type of rsync operation:
http://drunomics.com/en/blog/syncd-sync-changes-vagrant-box (Linux)
https://github.com/ggreer/fsevents-tools (OS X)
Rsync-ing to a non-NFS folder on your guest (Docker) instance has the additional advantage that I/O performance increases dramatically. VirtualBox shares are just painfully slow.
Update!
Here's what I did. First install lsyncd (OS X example, more info at http://kesar.es/tag/lsyncd/):
brew install lsyncd
Inside my Vagrant folder on my Mac, I created the file lsyncd.lua:
settings {
logfile = "./lsyncd.log",
statusFile = "./lsyncd.status",
nodaemon = true,
pidfile = "./lsyncd.pid",
inotifyMode = "CloseWrite or Modify",
}
sync {
default.rsync,
delay = 2,
source = "./demo",
target = "vagrant@localhost:~/demo",
rsync = {
binary = "/usr/bin/rsync",
protect_args = false,
archive = true,
compress = false,
whole_file = false,
rsh = "/usr/bin/ssh -p 2222 -o StrictHostKeyChecking=no"
},
}
What this does, is sync the folder demo inside my Vagrant folder to the guest OS in /home/vagrant/demo. Note that you need to set up login with SSH keys to make this process frictionless.
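For example, something like this can set that up (assuming Vagrant's default SSH forward on port 2222; your key path may differ):
ssh-keygen -t ed25519                  # skip if you already have a key
ssh-copy-id -p 2222 vagrant@localhost  # authorize the key inside the VM
ssh -p 2222 vagrant@localhost true     # verify passwordless login works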
Then, with the Vagrant VM running, I kicked off the lsyncd process. The -log Exec option is optional; it logs activity to stdout:
sudo lsyncd lsyncd.lua -log Exec
On the vagrant VM I started Grails (2.4.4) in my synced folder:
cd /home/vagrant/demo
grails -reloading run-app
Back on my Mac in IntelliJ, I edited a Controller class. It triggered lsyncd almost immediately (the 2-second delay), and quickly after that I confirmed Grails recompiled the class!
To summarize:
Edit your project files on your Mac, execute on your VM
Use lsyncd to rsync your changes to a folder inside your VM
Grails notices the changes and triggers a reload
Much faster disk performance by not using VirtualBox share
Issues: TextMate triggers a type of FSEvent that lsyncd does not (yet) recognize, so changes are not detected. Vim and IntelliJ were fine, though.
Hope this helps someone! Took me a day to figure this stuff out.
The best way I have found to have filesystem notifications visible within the container was as follows:
Create two folders: one to map the project, and a "mirror"
Map the project into the first folder
Keep a background script running in the container, rsyncing the project folder to the "mirror" (a sketch of such a script is at the end of this answer)
Run the project from the "mirror"
It may not be the most efficient or most elegant way, but it was transparent to users of the container: no additional script needs to be run by them.
I haven't tested it in a larger project, but in my case I did not notice performance issues.
https://github.com/altieres/docker-jekyll-s3
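A minimal sketch of such a background script (the paths and interval are illustrative):
#!/bin/sh
# naive loop: mirror the mounted project folder to a container-local copy
while true; do
  rsync -a --delete /project/ /mirror/
  sleep 2
done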
Vagrant already includes some options for rsync, so it's not necessary to install a special program on the host machine.
In Vagrantfile, I configured rsync:
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: [ "./build", ".git/" ]
Then on command line (in host), I run:
vagrant rsync
Which performs a single synchronization from host to guest.
or
vagrant rsync-auto
Which runs automatically when changes on host are detected.
See more at Vagrant rsync Documentation and rsync-auto
