Docker is extremely slow when running Laravel in an Nginx container on WSL 2

I've updated Windows 10 to the latest 2004 version, installed and updated WSL 2, and installed Docker and Ubuntu.
When I create a simple index.php file with "Hello World" it works perfectly (response: 100-400 ms), but when I add my Laravel project it becomes miserable: it loads for ~7 seconds before performing the request, and the response takes 4-7 seconds, even though phpMyAdmin is running very smoothly (response: 1-2 seconds).
my docker-compose.yml file:
version: '3.8'

networks:
  laravel:

services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
      - phpmyadmin
    networks:
      - laravel

  mysql:
    image: mysql:latest
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    depends_on:
      - mysql
    ports:
      - "8081:80"
    environment:
      PMA_HOST: mysql
      PMA_ARBITRARY: 1

  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel

  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - php
    networks:
      - laravel

  npm:
    image: node:latest
    container_name: npm
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    entrypoint: ['npm']

  artisan:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: artisan
    volumes:
      - ./src:/var/www/html
    depends_on:
      - mysql
    working_dir: /var/www/html
    entrypoint: ['php', '/var/www/html/artisan']
    networks:
      - laravel
I've been trying to fix this issue for 2 days but couldn't find the answer.
Thanks

It looks like you are mounting your Laravel project into your container. This can result in very poor file I/O if you are mounting these files from your Windows environment into WSL 2, since WSL 2 currently has a lot of problems accessing files that live on the Windows filesystem. This I/O issue exists as of July 2020; you can follow its ongoing status in the relevant GitHub issue.
There are three possible solutions I can think of that will resolve this issue for now.
Disable WSL 2 based engine for docker until the issue is resolved
Since this issue only occurs when WSL 2 tries to access the Windows filesystem, you could choose to disable WSL 2 docker integration and run your containers in your Windows environment instead. The option to disable it is in the Docker Desktop UI under Settings > General ("Use the WSL 2 based engine").
Store your project in the Linux filesystem of WSL 2
Again, since this issue occurs when WSL 2 tries to access the Windows filesystem mount points under /mnt, you could instead store your project in the Linux filesystem of WSL 2.
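As a quick sanity check, a tiny shell helper (my own sketch, not from the original answer; the function name is made up) can flag whether the current project path lives on the slow Windows mount:

```shell
#!/bin/sh
# Paths under /mnt are Windows drives mounted into WSL 2, where bind-mount
# file I/O is slow; anything else is the native (fast) Linux filesystem.
is_on_windows_mount() {
  case "$1" in
    /mnt/*) return 0 ;;  # Windows filesystem -> slow
    *)      return 1 ;;  # Linux filesystem -> fast
  esac
}

# Example: warn before starting containers from the wrong location.
if is_on_windows_mount "$PWD"; then
  echo "Warning: project is under /mnt - expect slow Docker bind mounts"
fi
```

Run it from the project root inside the WSL distro before docker-compose up.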
Build your own Dockerfiles
You could choose to create your own Dockerfiles and, instead of mounting your project, COPY the desired directories into the docker images. This results in poor build performance, since WSL 2 still has to access your Windows filesystem in order to build these docker images, but the runtime performance will be much better, since it won't have to retrieve these files from the Windows environment every time.
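A minimal sketch of that approach, assuming the layout from the question (./src holding the Laravel code; the base image and paths are illustrative, not from the original answer):

```dockerfile
FROM php:7.4-fpm
WORKDIR /var/www/html
# Bake the code into the image instead of bind-mounting it from Windows,
# so requests never touch the slow /mnt filesystem at runtime.
COPY src/ /var/www/html/
```

With this setup you would drop the ./src:/var/www/html volume entries from docker-compose.yml and rebuild the image whenever the code changes.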

Just move the whole project to a folder inside the WSL filesystem, e.g.:
\\wsl$\Ubuntu-20.04\home\<User Name>\<Project Name>
The speed will then be very close to running on native Linux.

I experienced the same problem, with MySQL database requests and responses taking about 8 to 10 seconds each. The problem definitely relates to the mounting of files between the Windows filesystem and WSL 2. After many days of trying to resolve this issue, I found this post:
https://www.createit.com/blog/slow-docker-on-windows-wsl2-fast-and-easy-fix-to-improve-performance/
After implementing the steps specified in the post, it totally eliminated the problem, reducing database requests/responses to milliseconds. Hopefully this will assist someone experiencing the same issue.

OK, so I found an interesting fact.
This is with Docker running on Windows without WSL 2.
A request has a TTFB of 5.41 s on this index.php file. I used die() to check where the time is spent, and found that if I call die() right after terminate(), the TTFB becomes ~2.5 s.
<?php
/**
 * Laravel - A PHP Framework For Web Artisans
 *
 * @package Laravel
 * @author  Taylor Otwell <taylor@laravel.com>
 */
define('LARAVEL_START', microtime(true));

require __DIR__.'/../../application/vendor/autoload.php';
$app = require_once __DIR__.'/../../application/bootstrap/app.php';
# die(); <-- TTFB 1.72s

$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);
$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);
$response->send();
# die(); <-- TTFB 2.67s

$kernel->terminate($request, $response);
# die(); <-- TTFB 2.74s

# If there is no die() in the file, the TTFB is ~6s

You are running your project from an /mnt/xxx folder, aren't you?
This is because the WSL 2 filesystem is much slower than WSL 1's under /mnt.
If you want a very short solution, here it is. It works on Ubuntu 18.04 and Debian from the Windows Store:
Go to the Docker settings, turn on "Expose daemon on tcp://localhost:2375 without TLS", and turn off "Use the WSL 2 based engine".
Run this command:
clear && sudo apt-get update && \
sudo curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh && sudo usermod -aG docker $USER && \
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose && \
echo "export PATH=\"$PATH:$HOME/.local/bin\"" >> ~/.profile && source ~/.profile && \
echo "export DOCKER_HOST=tcp://localhost:2375" >> ~/.bashrc && source ~/.bashrc && \
printf '[automount]\nroot = /\noptions = metadata' | sudo tee -a /etc/wsl.conf
I wrote instructions on how to integrate Docker Desktop with WSL 1:
https://github.com/CaliforniaMountainSnake/wsl-1-docker-integration

I was facing the same issue with Laravel/Docker/Nginx on Windows 11.
I couldn't disable "Use the WSL 2 based engine" because it was greyed out, even after installing Hyper-V on Windows 11 Home (via a tweak).
Here is the best solution I found:
1. Copy your project into your WSL folder
Open Windows Explorer and type the following address:
\\wsl.localhost\
Select your WSL instance, and then you can copy your project into /home/yourUsername/
The full path will be something like:
\\wsl.localhost\Ubuntu\home\username\yourProject
2. Start the docker containers
Just open a terminal from this folder and start your containers,
e.g.: docker-compose up -d
3. Visual Studio Code
To open the project folder from Visual Studio Code:
CTRL + P, >Remote-WSL: Open Folder in WSL...
To open the project folder from the command line:
code --remote wsl+Ubuntu /home/username/yourProject

You can exclude the vendor folder from the bind mount in your compose file by overlaying it with a named volume, like:
volumes:
  - ./www:/var/www
  - vendor:/var/www/vendor
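For this to work, the named volume also has to be declared under the top-level volumes: key, or compose will reject it. A complete sketch (the service name and paths are assumptions for illustration):

```yaml
version: '3.8'
services:
  php:
    volumes:
      - ./www:/var/www            # slow bind mount from Windows
      - vendor:/var/www/vendor    # named volume overlays the vendor dir
volumes:
  vendor:
```

The vendor directory then lives in a Docker-managed volume on the Linux side, so Composer's thousands of small files are no longer read through the Windows mount on every request.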

This is a really sketchy way of improving the speed, but here it goes:
Problem
The speed of loading composer dependencies from a vendor dir, which is mapped from the project root on Windows into a docker container via WSL 2, is currently really slow.
Solution
Copy the vendor dir into the docker image and use it instead of the mapped one in the project root.
The project structure, using a MySQL database and Apache PHP 7.4 with composer autoloading, looks like this:
db
  - init.sql
dev
  - db
    - Dockerfile
    - data.sql
  - www
    - Dockerfile
    - vendor-override.php
  - docker-compose.yaml
src
  - ...
vendor
  - ...
composer.json
index.php
...
The idea here is to keep dev stuff separated from the main root dir.
dev/docker-compose.yaml
version: '3.8'
services:
  test-db:
    build:
      context: ../
      dockerfile: dev/db/Dockerfile
  test-www:
    build:
      context: ../
      dockerfile: dev/www/Dockerfile
    ports:
      - {insert_random_port_here}:80
    volumes:
      - ../:/var/www/html
Here we have two services: one for the MySQL database and one for Apache with PHP, which maps the web root /var/www/html to our project root. This enables Apache to see the project source files (src and index.php).
dev/db/Dockerfile
FROM mysql:5.7.24
# Add initialize script (prefixing 0 makes sure it is executed first, as the scripts are loaded alphabetically)
ADD db/init.sql /docker-entrypoint-initdb.d/0init.sql
# Add test data (prefixing 99 makes sure it is executed last)
ADD dev/db/data.sql /docker-entrypoint-initdb.d/99data.sql
dev/www/Dockerfile
FROM php:7.4.0-apache-buster
# Install PHP extensions and the dependencies they require
RUN apt-get update -y && \
    apt-get install -y libzip-dev libpng-dev libssl-dev libxml2-dev libcurl4-openssl-dev && \
    docker-php-ext-install gd json pdo pdo_mysql mysqli ftp simplexml curl
# Enable apache mods and .htaccess files
RUN a2enmod rewrite && \
    sed -e '/<Directory \/var\/www\/>/,/<\/Directory>/s/AllowOverride None/AllowOverride All/' -i /etc/apache2/apache2.conf
# Add the vendor dir to the image to improve loading speed, since it is located inside Linux
ADD vendor /var/www/vendor
ADD dev/www/vendor-override.php /var/www/
RUN chmod -R 777 /var/www && \
    mkdir /var/www/html/src && \
    ln -s /var/www/html/src /var/www/src
# Expose html dir for easier deployments
VOLUME /var/www/html
I'm using the official Apache Buster image with PHP 7.4.
Here we copy the vendor dir and vendor-override.php to a dir above the webroot (/var/www) so it doesn't interfere with the project root.
ADD vendor /var/www/vendor
ADD dev/www/vendor-override.php /var/www/
Next we set read/write/execute permissions for everybody so Apache can read it. This is necessary because it is outside the webroot.
chmod -R 777 /var/www
Now the trick is to make sure composer autoloads classes from the src dir. This is solved by creating a link from /var/www/src to /var/www/html/src, which is in our project root.
mkdir /var/www/html/src
ln -s /var/www/html/src /var/www/src
dev/www/vendor-override.php
<?php
# Override default composer dependencies to be loaded from inside docker. This is used because
# loading files over mapped volumes is really slow on WSL 2.
require_once "/var/www/vendor/autoload.php";
Simply use the vendor dir inside docker image.
index.php
<?php
$fixFile = "../vendor-override.php";
if (file_exists($fixFile)) {
    require_once $fixFile;
} else {
    require_once "vendor/autoload.php";
}
...
If the vendor-override.php file is detected, it is used instead of the vendor dir from the project root. This makes sure index.php loads the dir inside the docker image, which is much faster.
composer.json
{
    "autoload": {
        "psr-4": {
            "Namespace\\": ["src"]
        }
    },
    ...
}
This simple autoloading setup maps "Namespace" to the src dir in the project root.
Key things to note
index.php loads vendor-override.php instead of vendor from the project root
PSR-4 autoloading is solved by the symlink
Downside
The downside is that you have to rebuild the docker image every time you update dependencies.

I just enabled Hyper-V from PowerShell:
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
Restarted, and now it works reasonably well. I have not changed anything else whatsoever.
See Microsoft's documentation for more info about how to enable Hyper-V.

By default, opcache is not enabled in the php:8.0-apache docker container.
Add to your Dockerfile:
RUN docker-php-ext-install opcache
COPY ./opcache.ini /usr/local/etc/php/conf.d/opcache.ini
Create file opcache.ini:
[opcache]
opcache.enable=1
; 0 means it will check on every request
; 0 is irrelevant if opcache.validate_timestamps=0 which is desirable in production
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
; note: opcache.fast_shutdown was removed in PHP 7.2, so it is omitted here

Related

docker build issue with Dockerfile: Jobber not working after adding a package

I'm still learning all this stuff around docker. Now I have an issue that I don't understand; maybe one of you can explain to me what I did wrong.
I want to schedule some SQL scripts with Jobber, and therefore need to add the MySQL client package to a jobber image.
Dockerfile:
FROM jobber:latest
USER root
COPY install-packages.sh .
RUN chmod +x ./install-packages.sh
RUN ./install-packages.sh
install-packages.sh
apk update
apk upgrade
apk add mysql-client
rm -rf /var/cache/apk/*
My docker build command:
docker build . -t jobbermysql:20210110
Docker-Compose file to run the container:
version: '3'
services:
  jobbermysql:
    image: jobbermysql:20210110
    container_name: jobbermysqlcompose
    restart: always
    volumes:
      - /home/docker/datapath/jobber/jobberuser:/home/jobberuser
The docker build works fine, but when I run an instance of my jobbermysql:20210110 image, jobber always reports:
jobbermysqlcompose | User root doesn't own jobfile
If I try to get some additional information/jobs via direct access to the running container (e.g. a jobber init command to understand the issue):
/home/jobberuser # jobber init
Jobber doesn't seem to be running for user root.
(No socket at /var/jobber/0/cmd.sock.): stat /var/jobber/0/cmd.sock: no such file or directory
If I restart the "old" default jobber version (without my mysql-client modification), it works fine, and both versions use the same volume mapping. So I think I have broken something in the docker build process.
version: '3'
services:
  jobbermysql:
    image: jobber:latest
    container_name: jobbermysqlcompose
    restart: always
    volumes:
      - /home/docker/datapath/jobber/jobberuser:/home/jobberuser
Can somebody give me a hint?
Many thanks and kind regards,
Holger
Jobber itself seems to be quite specific about the required file permissions of the .jobber file. Jobber's documentation states the following:
Jobfiles must be owned by the user who owns the home directory that
contains them, and their permissions must not allow the owning group
or any other users to write to them (i.e., the permission bits must
exclude 022)
Therefore, we need to set the file permissions of the mounted file accordingly. As the official docker image runs as USER jobberuser, we need to set the ownership and permissions to match:
chown 1000:1000 jobber-jobs.yml
chmod 600 jobber-jobs.yml
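The permission rule can be checked locally. A small sketch (the filename is taken from the answer; the chown step is shown only as a comment since it needs root):

```shell
#!/bin/sh
# Jobber rejects jobfiles whose group/other write bits are set,
# i.e. the mode must have nothing in common with 022.
touch jobber-jobs.yml
chmod 600 jobber-jobs.yml            # owner read/write only
# In the real setup you would also run (as root):
#   chown 1000:1000 jobber-jobs.yml  # UID/GID of jobberuser in the image
stat -c '%a' jobber-jobs.yml         # prints: 600
```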
In your case, you switched to USER root but did not switch back after installing the packages. The following Dockerfile & docker-compose.yml did work for me:
FROM jobber
USER root
RUN apk add --no-cache mysql-client
USER jobberuser
version: '3'
services:
  cron:
    image: mysql-jobber
    build: ./build
    volumes:
      - ./jobber-jobs.yml:/home/jobberuser/.jobber:ro

Starting Firefox inside a container opens a new window in the host Firefox

I am currently trying to make a docker container that connects to a VPN (via openfortivpn) and then opens a Firefox instance that uses that connection. When no Firefox is running on my host, everything works fine: the container starts the VPN connection and then opens Firefox connected to my X server. But if my host Firefox is running, starting the container opens a new window in my host Firefox and then the container exits with the message:
feulo@branca:~/vpen-test$ docker-compose up
Recreating 07_complex_compose_openfortivpn_1 ... done
Attaching to 07_complex_compose_openfortivpn_1
07_complex_compose_openfortivpn_1 exited with code 0
Does anyone know how to fix it? Thanks for the help.
These are the Dockerfile and docker-compose.yml files:
Dockerfile
# Use an official Debian Slim image
FROM debian:buster-slim
# Install needed packages
RUN apt update \
    && DEBIAN_FRONTEND="noninteractive" apt -y install dbus ppp openfortivpn iceweasel
docker-compose.yml
version: '3.2'
services:
  openfortivpn:
    working_dir: /workdir
    build: .
    privileged: true
    devices:
      - /dev/snd
    volumes:
      - .:/workdir
      - /tmp/.X11-unix:/tmp/.X11-unix
    environment:
      - DISPLAY=unix$DISPLAY
    command: sh entrypoint.sh
entrypoint.sh
openfortivpn -c dti.txt &
firefox
I found the solution in "Docker with shared X11 socket: Why can it 'start' Firefox outside of the container?"
You just need to add -new-instance to the firefox command.
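With that flag, the entrypoint from the question becomes (a sketch; dti.txt is the VPN config from the original post, and this still assumes the shared X11 socket setup above):

```shell
#!/bin/sh
openfortivpn -c dti.txt &
# -new-instance forces a separate Firefox process instead of handing the
# request off to the Firefox already running on the host's X display.
firefox -new-instance
```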

docker-compose volume is empty even after initialization

I'm trying to create a docker-compose setup for different websites.
Everything is working fine except for my volumes.
Here is an example docker-compose.yml:
version: '2'
services:
  website:
    build:
      context: ./dockerfiles/
      args:
        MYSQL_ROOT_PASSWORD: mysqlp#ssword
    volumes:
      - ./logs:/var/log
      - ./html:/var/www
      - ./nginx:/etc/nginx
      - ./mysql-data:/var/lib/mysql
    ports:
      - "8082:80"
      - "3307:3306"
And here is my Dockerfile:
FROM php:5.6-fpm
ARG MYSQL_ROOT_PASSWORD
RUN export DEBIAN_FRONTEND=noninteractive; \
    echo mysql-server mysql-server/root_password password $MYSQL_ROOT_PASSWORD | debconf-set-selections; \
    echo mysql-server mysql-server/root_password_again password $MYSQL_ROOT_PASSWORD | debconf-set-selections;
RUN apt-get update && apt-get install -y -q mysql-server php5-mysql nginx wget
EXPOSE 80 3306
VOLUME ["/var/www", "/etc/nginx", "/var/lib/mysql", "/var/log"]
Everything works well, except that all the folders under my host volumes are empty. I want to see the nginx conf and mysql data in my folders.
What am I doing wrong?
EDIT 1:
Actually, the problem is that I want docker-compose to create the volume in my docker directory if it does not exist, and to use that volume if it does, as explained in https://stackoverflow.com/a/39181484. But it doesn't seem to work.
The problem is that you're expecting files from the container to be mounted on your host.
It works the other way around:
Docker mounts your host folder into the container folder you specify.
If you go inside the container, you will see that where the init files were supposed to be there will be nothing (or whatever was in your host folder(s)), and if you write a file into that folder it will show up on your host.
Your best bet to get the init files and modify them for your container is to:
1. Create a container without mounting the folders (the original container data will be there).
2. Run the container (it will have the files in the right place from the installation of nginx etc.): docker run <image>
3. Copy the files out of the container with docker cp <container>:<container_folder>/* <host_folder>
4. Now you have the 'original' files from the container init on your host.
5. Modify the files as needed for the container.
6. Run the container mounting your host folders, with the new files in them.
Notes:
You might want to go inside the container with a shell (docker run -it <image> /bin/sh) and zip up all the folders to make sure you get everything if there are nested folders, then docker cp ... the zip file.
Also, be careful about filesystem case sensitivity: on Linux, files are case sensitive; on Mac OS X they are not. So if you have Init.conf and init.conf in the same folder, they will collide when you copy them to a Mac OS X host.
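The steps above can be sketched as a short session (the image name and host folders are placeholders, and this obviously needs a running docker daemon, so treat it as illustration only):

```shell
#!/bin/sh
# 1) Start the container once without any volume mounts.
docker run -d --name seed my-website-image
# 2) Copy the init files the image created out to the host.
docker cp seed:/etc/nginx ./nginx
docker cp seed:/var/www ./html
# 3) Throw the seed container away; edit the copied files as needed.
docker stop seed && docker rm seed
# 4) From now on, docker-compose up bind-mounts ./nginx and ./html back in.
```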

Building a web app which can perform npm tasks

Before I post any configuration, I'll try to explain what I would like to achieve, and I'd like to mention that I'm new to docker.
To make paths easier to talk about, let's assume the project is called "Docker me up!" and is located in X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content, each project representing a dedicated build. During development (docker-compose up -d) a container should get updated instantly, which works fine.
The tricky part is that I want to outsource npm/Grunt [http://gruntjs.com] from my host directly into the container/image, so I'm able to debug and develop wherever I am just by installing docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host/built in docker, and should not have any knowledge of anything else but itself.
My solution:
I have tried many different versions, with "volumes_from" etc., but I decided to show you this one, because it's minified but still complete.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
    npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the "app" service contains npm and should be able to execute npm commands. If I run docker-compose up -d, everything else works: I can edit the page content, work with it, etc. But the app container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error, which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae
After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a service. For app to run as a service, you would need to keep a process running in the foreground of the container by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.

docker-compose with a war via volume fails on DigitalOcean Debian but not on other Debian installs (my home box and work box as well)

On DigitalOcean:
If I use docker-compose with a tomcat container that already has the war I am trying to use in webapps (built into the tomcat container), compose works...
Dockerfile for the Tomcat container with the built-in war (works):
FROM clegge/java7
MAINTAINER christopher.j.legge@gmail.com
RUN wget http://www.eu.apache.org/dist/tomcat/tomcat-7/v7.0.65/bin/apache-tomcat-7.0.65.tar.gz
RUN tar -xzf apache-tomcat-7.0.65.tar.gz
RUN mv apache-tomcat-7.0.65 /opt/tomcat7
RUN rm apache-tomcat-7.0.65.tar.gz
RUN echo "export CATALINA_HOME=\"/opt/tomcat7\"" >> ~/.bashrc
RUN sed -i '/<\/tomcat-users>/ i\<user username="test" password="test~" roles="admin,manager-gui,manager-status"/>' /opt/tomcat7/conf/tomcat-users.xml
VOLUME /opt/tomcat7/webapps
ADD app.war /opt/tomcat7/webapps/app.war
EXPOSE 8080
CMD /opt/tomcat7/bin/startup.sh && tail -f /opt/tomcat7/logs/catalina.out
If I use that container with docker-compose, everything works great!
If I try to push the war file in via my docker-compose.yml, the container comes up but never inflates the war...
tomcat:
  image: tomcat:8.0
  ports:
    - "8080:8080"
  volumes:
    - base-0.2.war:/usr/local/tomcat/webapps/base.war
  links:
    - postgres
postgres:
  image: clegge/postgres
  ports:
    - "5432:5432"
  environment:
    - DB_USER_NAME=test_crud
    - DB_PASSWORD=test_crud
    - DB_NAME=base_db
There is no error in the logs...
This also fails on my GoDaddy box... I have tried every flavor of Linux on DigitalOcean; nothing works. I think it is a problem because the DigitalOcean instance is a container itself... any thoughts?
