ng command very slow on docker volume (Windows) - docker

I use Docker Desktop on Windows 10 (WSL) and need to use Angular on a Docker Volume (with the -v option). Everything works correctly, but the "ng" command seems very slow when it's run from the volume.
I first noticed this running ng serve: the command hangs for more than 1 minute with no log (even in verbose mode) before beginning the compilation. But even ng --version hangs for 15 seconds when it's run from any directory in the volume (the version is 8.1.2) - without any error message (and no docker log). If I run ng --version from any other folder in the container (not in the volume), the version is displayed immediately.
Would you know the reason for this delay, or any way to investigate and solve it?

I suspect the main issue is that ng commands are read/write intensive. Indeed, the Visual Studio Code devcontainer documentation indicates:
While using this approach to bind mount the local filesystem into a container is convenient, it does have some performance overhead on Windows and macOS. There are some techniques that you can apply to improve disk performance, or you can open a repository in a container using an isolated container volume instead.
Therefore, instead of mounting the current directory, it would be better in that case to clone the repository in an isolated container volume.
To do so, in VS Code, open the command palette by pressing F1 and select Remote-Containers: Clone Repository in Container Volume. This will create a unique volume for your container with your repository inside.
The techniques mentioned in the quote can be found here.
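To make the difference concrete, here is a minimal sketch of the two approaches; the node:12 image, the /workspace path, and the ng-project volume name are placeholders I chose for illustration, not anything from the question:
# Bind mount: the project stays on the Windows filesystem, so ng pays the cross-OS I/O cost
docker run -it --rm -v "$PWD":/workspace -w /workspace node:12 bash
# Named volume: the project lives in a Docker-managed volume, so file I/O stays inside the Linux VM
docker volume create ng-project
docker run -it --rm -v ng-project:/workspace -w /workspace node:12 bash
With the second form you would clone the repository into /workspace from inside the container, which is essentially what the VS Code command above automates.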

Related

What has to be done to deliver on Docker and avoid accumulating images?

I use Docker to run a website I develop.
When a release has to be delivered, I have to build a new Docker image and start a new container from it.
The problem is that images and containers accumulate and take up a huge amount of space.
Besides the delivery itself, I need to stop the running container, delete it, and delete the source image too.
I don't need Docker command lines, just a checklist or process so I don't forget anything.
For instance:
- Stop running container
- Delete stopped container
- Delete old image
- Build new image
- Start new container
Am I missing something?
I'm not used to Docker; maybe there are best practices for this pretty classic use case?
The local workflow that works for me is:
Do core development locally, without Docker. Things like interactive debuggers and live reloading work just fine in a non-Docker environment without weird hacks or root access, and installing the tools I need usually involves a single brew or apt-get step. Make all of my pytest/junit/rspec/jest/... tests pass.
docker build a new image.
docker stop && docker rm the old container.
docker run a new container.
When the number of old images starts to bother me, docker system prune.
If you're using Docker Compose, you might be able to replace the middle set of steps with docker-compose up --build.
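Put together, that local loop might look like the sketch below; the image name mysite, the container name web, and the port mapping are placeholders, not anything prescribed by Docker:
# rebuild the image from the current source
docker build -t mysite:latest .
# replace the old container with a new one from the fresh image
docker stop web && docker rm web
docker run -d --name web -p 8080:80 mysite:latest
# every so often, reclaim space from old images and stopped containers
docker system prune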
In a production environment, the sequence is slightly different:
When your CI system sees a new commit, after running the repository's local tests, it docker build && docker push a new image. The image has a unique tag, which could be a timestamp or source control commit ID or version tag.
Your deployment system (could be the CI system or a separate CD system) tells whatever cluster manager you're using (Kubernetes, a Compose file with Docker Swarm, Nomad, an Ansible playbook, ...) about the new version tag. The deployment system takes care of stopping, starting, and removing containers.
If your cluster manager doesn't handle this already, run a cron job to docker system prune.
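For example, a CI step along these lines would build and push a uniquely tagged image; the registry host registry.example.com, the image name mysite, and the choice of the short git commit ID as the tag are assumptions for illustration:
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/mysite:"$TAG" .
docker push registry.example.com/mysite:"$TAG"
The deployment step then only needs to reference that uniquely tagged image in whatever manifest or playbook your cluster manager uses.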
You should use:
docker system df
to investigate the space used by docker.
After that you can use
docker system prune -a --volumes
to remove unused components. You should stop containers yourself before doing this, but this way you are sure to cover everything.
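A minimal sketch of that sequence, assuming you really do want to stop every running container first:
# stop all running containers
docker stop $(docker ps -q)
# then remove stopped containers, unused images, networks and volumes
docker system prune -a --volumes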

How to Run a Command in a Container Using Local Input Files without Copying

I am new to docker and containers. I have a container containing MRI analysis software. Within this container are many other programs that the main software draws its commands from. I would like to run a single command from one of these programs, using research data located on an external hard drive that is plugged into the local machine running docker.
I know there is a docker cp command for copying files (such as scripts) into containers, and most other questions along these lines seem to recommend copying the files from your local machine into the container and then running the script (or whatever) from there. In my case I need the container to access data from separate folders in a directory structure, and copying over the entire directory is not feasible since it is quite large.
I honestly just want to know how I can run a single command inside the container using inputs present on my local machine. I have run docker ps to get the CONTAINER_ID, which is d8dbcf705ee7. Having looked into executing commands inside containers, I tried the following command:
docker exec d8dbcf705ee7 /bin/bash -c "mcflirt -in /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii -out sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -reffile /Volumes/DISS/FMRIPREP_TMP/sub-S06V1A_dof6_ver1.2.5/fmriprep_wf/single_subject_S06V1A_wf/func_preproc_task_compound_run_01_wf/bold_reference_wf/gen_ref/ref_image.nii.gz -mats -plots"
mcflirt is the command I want to run inside the container. I believe the exec command should do what I hope, since if I run docker exec d8dbcf705ee7 /bin/bash -c "mcflirt" I get the help output for the mcflirt command, which is the expected outcome in that case. The files under the /Volumes/... paths are the files on my local machine that I would like to access. I understand that the location of the files is the problem, since I cannot tab-complete those paths within this command; when I run it I get the following output:
Image Exception : #22 :: ERROR: Could not open image /Volumes/DISS/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Can anyone point me in the right direction?
So if I got you right, you need to execute some shell script and provide the context (like local files).
The approach is straightforward.
Let's say your script and all the files it needs are located in the /hello folder of your host PC (it doesn't really matter whether they are stored together or not; this just shows the technique).
/hello
- runme.sh
- datafile1
- datafile2
You mount this folder into your container to make the files accessible inside. If you don't need the container to modify them, it is better to mount it in read-only mode.
You launch docker like this:
docker run -it -v /hello:/hello2:ro ubuntu /hello2/runme.sh
And that's it! Your script runme.sh gets executed inside the container, and it has access to the files next to it, thanks to the -v /hello:/hello2:ro directive. It maps the host's folder /hello to the container's folder /hello2 in read-only (ro) mode.
Note that you could use the same name on both sides; I've made them different just to show the distinction.
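Applied to the question above, the same technique would look roughly like this; the image name mri-tools and the output mount /out are placeholders (the question only gives a container ID, not the image name):
docker run --rm -v /Volumes/DISS:/data:ro -v "$PWD":/out mri-tools \
  mcflirt -in /data/FMRIPREP/sub-S06V1A/func/sub-S06V1A_task-compound_run-01_bold.nii \
  -out /out/sub-S06V1A_task-compound_run-01_bold_mcf_COMMAND_TEST.nii.gz -mats -plots
The -reffile path would be rewritten the same way, replacing the /Volumes/DISS prefix with /data.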

tar utime from docker-on-windows does not work

I am trying to do some builds from docker (yocto specifically), and I would like to be able to run it and have the output be in the Windows filesystem. However, no matter what I do, utime does not work from the docker container.
(Note that all is fine if I change the output directories to be inside the docker container, instead of to a docker volume mounted on the local Windows filesystem.)
Specifically, from tar (extract): Cannot utime: Operation not permitted (note that adding -m works around this, but I do not have control over all the places tar is called).
Everything I read about this error points to a file permissions issue, but I have tried running docker from a PowerShell as Administrator, and it does not solve the problem.
Is this just a deficiency of the Windows filesystem (NTFS), or is there a way to configure docker volumes to have the required permissions to set utime?
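For reference, the -m workaround mentioned above simply tells tar not to restore modification times, so utime is never called during extraction; the archive name and destination directory here are placeholders:
# -x extract, -m (--touch) skip restoring mtimes, -f archive file
tar -xmf sources.tar -C /path/on/the/mounted/volume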

Docker Storage - Getting a Layman's answer

I am just discovering Docker - I am finding so much information, but I can't seem to get a straight answer on this option. If someone could give me a clear explanation based on my understanding I have of it so far it would be appreciated.
I am downloading a docker image locally - say the default one from Microsoft, microsoft/dotnet-samples:dotnetapp-nanoserver - and I am lost as to where it is downloaded to. Is it downloaded and installed as a program on the host machine, with an isolated script that controls the container? The download is about 1.3 GB because it includes .NET Core.
In another example, if I download apache2 to run as a web server, does it get installed in the default paths on the host system, with every container I use tapping into that - or does every container contain its own isolated version of apache2?
I ask this because I can't find files on the host that match the size of these programs.
I know they are not complete VMs, but where can I find the files associated with a container?
I am using Windows Server 2016 and a Mac since I want to do some trials with containers.
An image is a filesystem
Docker images are encapsulated filesystems. The software and files inside are not being directly installed onto your system.
You can think of a Docker image sort of the way you think of a .zip file. You can download a .zip file from somewhere, and it is a single file. Contained inside it might be one file, or dozens of files, or a nested tree of directories and files. But on your disk, it exists as one file.
A Docker image is similar (conceptually, at least... the details are more complicated).
Image storage
Where images are stored varies by platform. On a Linux system, they are usually under /var/lib/docker. I don't know where they are stored on Windows, but this is a more or less opaque store. Poking around inside will not reveal very much to you anyway.
To see what you have, you should use the docker images command. It will show you the images you have stored locally.
Each image may consist of multiple layers. By default, that command will only show you the top layer, which is the one you care about for running containers. Technically, there are other layers, and you can see all of them using docker images -a.
Where is the software installed?
When you download an Apache image, nothing is installed on your system at all. The image file(s) are downloaded and stored. Hiding inside is Apache and everything Apache needs in order to run, but Apache is not installed onto your Windows OS anywhere.
When you want to use Apache, you would run a container. Docker takes the Apache image and, using it as a starting template, creates a running process container, inside of which Apache is running. This is isolated from your operating system. Apache is only running inside of the container.
If you run a second container from the Apache image, you now have two completely separate Apache instances running, each in their own isolated filesystem environment.
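As a quick sketch, assuming an image named apache:latest (as in the examples below) whose Apache listens on port 80 inside the container, two independent instances could be started like this:
docker run -d --name web1 -p 8081:80 apache:latest
docker run -d --name web2 -p 8082:80 apache:latest
# two isolated Apache processes, each with its own copy of the image's filesystem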
Where can I find the files?
If you just want to poke around in the container filesystem, you can start the container in interactive mode, and run a shell instead of whatever it normally runs (like Apache). For instance, if you have an image apache:latest, you can do this:
docker run --rm -it apache:latest bash
This will run an instance of apache:latest, but instead of launching Apache, it will run a bash shell and drop you into it.
The --rm flag is convenient for cases like this. It tells Docker to remove the running container when its process exits. That way for a "just looking at something" container like this one, it cleans up after itself.
The -it is actually two flags. -i is interactive mode, and -t allocates a terminal. This is a common flag to pass when you want to directly interact with the container.
Once inside, you can use the usual commands to look at files and directory listings. Note that many containers are stripped-down, though. You don't always have all of the tools you are used to having. Things like ls in Linux are typically there, but a lot of things will not be.
Simply type exit when you're done looking around.
Looking around while the process is running
You can also look at the container while Apache is running. First start it normally.
docker run -d apache:latest
This will return a container ID. You can also get the ID from docker ps. Then you can open a shell inside the container with that ID:
docker exec -it <container_id> bash
Now you're in the container in a shell, but Apache is in there running.
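Put together, a minimal sketch of that flow, still using the hypothetical apache:latest image:
# start the container detached and capture its ID
CID=$(docker run -d apache:latest)
# open an interactive shell alongside the running Apache process
docker exec -it "$CID" bash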

Editing Docker container FS using Atom/Sublime-Text?

I'm running OSX and Docker with the help of boot2docker.
From my understanding, boot2docker is a lightweight Linux distro that runs the docker containers. I have some Ubuntu containers that I use to run and test projects that should specifically run well on Linux.
However, every small code change from my host text editor of choice requires me to re-build the image and re-run the container, then run the app and confirm that the change I made didn't break something.
Is there a way for me to open a Docker container FS folder in a text editor from my host machine? (a.k.a Remote edit?)
Have any of you guys done this? Any ideas would be awesome. I'm thinking about setting up SFTP or SSHD in the Docker container, but I would like your opinion.
What I often do is, in development, mount the source code of the application to its usual place in a volume. Then, I set the command (or entrypoint) of the container to a script that launches it in "development mode" (for example, by using nodemon for a node.js application, setting RAILS_ENV=development in Rails, and so on).
Volumes do work on Mac OS X (and I assume Windows) under boot2docker or docker-machine, with the caveat that you need to be working somewhere beneath your home directory.
For a concrete example, here's a repository that I set this up in. The ingredients:
script/dev is my "dev-mode" entrypoint. It launches the main application under nodemon.
When I launch the container, I mount the source directory into the container as a volume and set script/dev as the command. (I'm using docker-compose here to launch and link in an upstream dependency, so I can do everything in one command.)
With those two things in place, I can run docker-compose up, make a source change in whatever editor I choose on my host, save the file, and the service within the container auto-reloads to bring my changes into effect. Presto!
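Without docker-compose, the same idea can be sketched with a plain docker run; the node:14 image and the /usr/src/app path are assumptions, while script/dev is the dev-mode entrypoint described above:
docker run --rm -it \
  -v "$PWD":/usr/src/app -w /usr/src/app \
  node:14 ./script/dev
Because the source directory is a bind mount, edits made in your host editor are visible inside the container immediately, and the dev-mode process restarts the app without rebuilding the image.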
