Frontend development on Docker

I really love the idea of using Docker so that the host computer doesn't need any development tooling installed: for frontend, Node and yarn/npm; for backend, nginx, PHP, MySQL; and then all the supporting services like MailHog, Redis, etc. Just take any computer, install Docker, and you have a near-perfect zero-config environment to start developing.
However, I haven't seen many good examples of working like that.
So I've started to wonder whether an environment with no dependencies on the host is even possible, or whether it's just a crazy idea of mine. I'd like to hear some thoughts and see some examples.
At the moment I've built a docker-compose file with 3 Vue.js frontend projects, each running my development command: sh -c 'yarn run serve'. If I check docker stats, I see that RAM is about 150 MB for each container and CPU usage is basically nothing. But the issue is that I hear my fans spinning hard when I run docker-compose up -d, and Docker itself eats ~33% of CPU on the host all the time.
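For reference, the compose file looks roughly like this; a minimal sketch with invented project names, paths, and ports (the third project just repeats the pattern):

version: "3"
services:
  frontend-app1:
    image: node:lts-alpine        # official Node image, yarn included
    working_dir: /app
    command: sh -c 'yarn run serve'
    volumes:
      - ./app1:/app               # bind mount of the project source
    ports:
      - "8081:8080"
  frontend-app2:
    image: node:lts-alpine
    working_dir: /app
    command: sh -c 'yarn run serve'
    volumes:
      - ./app2:/app
    ports:
      - "8082:8080"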
Computer specs:
MacBook Pro (15-inch, 2017)
2,8 GHz Quad-Core Intel Core i7
16 GB 2133 MHz LPDDR3
Well, that's about it; maybe you have some good examples or suggestions.
One thing I haven't tried yet is not running the frontend containers together with all the other services, but spinning them up only when necessary during development.

I also use Docker for development on my Mac, and I had the same problems as you with excessive resource consumption. The solution I found was to add the :delegated flag to the volumes.
Read more about volumes here.
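With a bind mount in docker-compose, that looks like this (a sketch):

volumes:
  - ./app1:/app:delegated   # delegated relaxes host-container sync guarantees on Docker for Mac

or with plain docker run:

docker run -v "$(pwd)/app1:/app:delegated" node:lts-alpine sh -c 'yarn run serve'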
Or, you can use NFS:
Set Up Docker For Mac with Native NFS
NFS With Docker on macOS Catalina
Revisiting Docker for Mac's performance with NFS volumes
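Those guides all boil down to the same pattern: declare a local volume backed by NFS and mount it into the service. A sketch, assuming the directory is exported via /etc/exports on the Mac:

version: "3"
services:
  frontend-app1:
    image: node:lts-alpine
    working_dir: /app
    command: sh -c 'yarn run serve'
    volumes:
      - code:/app
volumes:
  code:
    driver: local
    driver_opts:
      type: nfs
      # host.docker.internal resolves to the macOS host from inside the Docker VM
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      device: ":/Users/me/code/app1"   # must match an entry in /etc/exports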

Related

Docker is very slow

I am running some 30+ containers (Redis, MySQL, RabbitMQ, .Net Core, Flask and others) on Ubuntu 20.04 Server.
Lately, at random intervals, docker builds as well as other docker commands seem to run really slowly.
For example, it sometimes takes 30 minutes to build a Flask app that other times builds in 10 seconds. I know it's not a cache issue, because it stays stuck on COPY directives that are supposed to copy a single .py file.
The same goes for docker commands like ps, stats, and logs.
Whenever this happens, I have monitored resource usage, and more than 70% of RAM and CPU is available.
My docker version is Docker version 23.0.1, build a5ee5b1
and containerd version is containerd.io 1.6.16
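When the slowdown strikes, a few generic checks can help separate daemon slowness from host I/O pressure; a rough sketch using standard commands:

time docker ps      # how slow is a trivial daemon round-trip right now?
docker system df    # look for a bloated build cache or piles of dangling layers
docker events &     # watch what the daemon is busy with during a slow build
iostat -x 2         # rule out disk saturation on the host (sysstat package)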

Poor file transfer performance from container, recent change

I run numerous containers on a single Docker instance, a few of which need to move files around between local and remote file systems that are mounted on the host as CIFS devices and then presented to the container as volumes. In the past I haven't had an issue when transferring data between these volumes, but recently performance has dropped, with transfer rates under 10 MByte/s when they used to be around 60 MByte/s.
What are some things I can check in the configuration of either the docker engine on my server, or the container configuration?
I'm using the docker-ce package on CentOS 7 for these containers.
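For context, the setup described amounts to something like this (a sketch; server, share, and paths are invented):

# on the CentOS 7 host: mount the remote share via CIFS
mount -t cifs //fileserver/share /mnt/share -o username=svc,uid=1000,gid=1000

# then present the host mount point to the container as a bind-mounted volume
docker run -d -v /mnt/share:/data myapp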

Are there different images for different OS on docker hub

When I run the following command:
docker run mongo
it downloads the mongo image and runs it in a container.
I am running Linux in a VM.
My OS details are as follows:
NAME="CentOS Linux"
VERSION="7 (Core)"
If I were using a different OS (a Mac machine or Windows), how would Docker determine which image to pull? As I understand it, there is a single mongo image on Docker Hub; or is it that we can specify a specific image to run based on our OS?
After all, when installing a specific version of mongo directly on a local machine (without containers), we have to take care of downloading the right build ourselves.
How is this taken care of by Docker?
Thanks.
The OS that you are running is for the most part irrelevant when it comes to pulling a Docker image. As long as you are running Docker on your host (and the Docker builds are a little different between Windows, Mac, and Linux), you can pull any image you want. You can pull the same mongo image and run it on any operating system.
The image hides the host operating system, making it easy to build an image and deploy it on pretty much any machine.
Having said that, you may be getting confused because image makers often use a different OS to build their applications than to ship them. A quick example is people building an application in an Ubuntu image but switching to an Alpine-based image for deployment because it is so much smaller. However, both images will run pretty much anywhere.
You are probably confusing the terms OS and architecture.
The OS does not really matter because, as @camba1 mentioned, the Docker daemon handles all that.
What matters is the architecture, because Linux can run on ARM, AMD64, etc.
So the Docker daemon must know which image fits the current architecture.
Here is a good article regarding this question.
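You can observe the mechanism directly on a recent Docker: the mongo tag points at a multi-architecture manifest list, and the daemon pulls the entry matching its own platform. For example:

docker manifest inspect mongo              # lists one entry per os/architecture pair
docker pull --platform linux/arm64 mongo   # override the automatic selection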

Docker CPU and memory too low

I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM are allocated. Is that one of the reasons for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointers would be helpful.
Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your system even if it isn't used.
With docker info you can find out how many resources are available.
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The CPU and memory arguments of docker run limit the resources for one container; if they are not set, there is no upper limit.
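A sketch of making those limits explicit (on Docker Desktop the values are still capped by the VM size configured in the settings):

docker info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes of RAM'   # what the daemon can see
docker run --rm -d --cpus=2 --memory=4g -p 32772:8080 helloworld    # explicit per-container caps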

Why is virtualbox running so slowly under docker?

I'm trying to get VirtualBox to run inside Docker. I'm well past "is it possible to run VirtualBox inside a Docker container", because I can start VBoxManage, but it unfortunately spins at 100% CPU (despite working perfectly on the host) for several minutes before finally settling.
This is the Dockerfile I'm running: https://github.com/fommil/freeslick/tree/docker-build which includes a Windows XP virtualbox image built using these instructions https://github.com/fommil/freeslick/tree/docker-base
My host has the DKMS modules loaded (and VirtualBox/that image works there), and I'm starting the container in privileged mode (to keep things simple):
docker run -i -t --privileged=true fommil/freeslick:build
But when I start a headless VM with
VBoxManage startvm --type headless MSSQL
(which works when run directly on the host), VBox just consumes 100% CPU and the services I expect sometimes never start up (I'm trying to get connections to MSSQL via tsql; see the await-mssql.sh script in the same repo).
To keep things nice and simple, my host and container are both running Debian Jessie (although I eventually want to run Jessie on an externally hosted Ubuntu VPS... let's leave that for another day)
Is there something additional that I need to do in order to be able to run virtualbox under docker?
There is nothing untoward in the log files when run inside the container, except perhaps this (almost 3 minutes to execute command 0x30):
00:00:03.176907 Guest Log: BIOS: Booting from Hard Disk...
00:02:48.776941 PIIX3 ATA: execution time for ATA command 0x30 was 164 seconds
There is no such 0x30 command when running on the host.
UPDATE: ATA command 0x30 is WRITE SECTORS. I wonder if Docker is doing a copy-on-write of my 6 GB Windows drive simply as part of the Windows boot process. Hmm.
Answering my own question: it really was the copy-on-write behaviour. My VPS has really slow hard drives that get even slower under Docker. The workaround is to use a faster volume for my images and create fresh copies in that space before starting the image.
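Concretely, that workaround looks something like this; a sketch with invented paths (the container-side path has to match wherever the image keeps its VMs):

# refresh a pristine copy of the VM onto the fast disk before each run
rm -rf /fast-ssd/vms/MSSQL
cp -r /fast-ssd/pristine/MSSQL /fast-ssd/vms/MSSQL

# bind-mount over the VM directory so disk writes hit the host volume
# instead of the container's copy-on-write layer
docker run -i -t --privileged=true \
  -v "/fast-ssd/vms/MSSQL:/root/VirtualBox VMs/MSSQL" \
  fommil/freeslick:build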
