Hot reload not working with webpack-dev-server and Docker

Using Ubuntu Linux with Docker installed. No VM.
I have built a Docker image with a Vue.js application. To enable hot reload I start the Docker container with:
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
It starts up fine and I can access it from my host browser on localhost:8081. But when I make changes to the source files and save them, they are not reflected in my browser until I press F5 (hot reload does not work).
Some details below:
package.json
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
build/webpack.dev.conf.js
devServer: {
  clientLogLevel: 'warning',
  ...
  hot: true,
  ...
  watchOptions: {
    // poll: config.dev.poll,
    // aggregateTimeout: 500, // delay before reloading
    poll: 100 // enable polling since fsevents are not supported in docker
  }
}
I tried modifying watchOptions, but it has no effect.
EDIT:
Based on the answer below I have tried passing CHOKIDAR_USEPOLLING=true as an environment variable to docker run:
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -e "CHOKIDAR_USEPOLLING=true" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
But it has no effect: I am still not able to hot reload my changes. Also, the provided link says:
Update/Clarification: This problem only occurs when running your
Docker engine inside a VM. If you are on Linux for both Docker and for
coding you do not have this problem.
So I don't think the answer applies to my setup: I am running Ubuntu Linux on the machine where Docker is installed, so there is no VM involved.
Another update, based on the comment below about changing the port mapping:
# Hot reload works!
docker run -it -p 8080:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
# Hot reload fails!
#docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
So if I map the ports as 8080:8080 instead of 8081:8080, hot reload works! Note that the application comes up in both cases when I access it in my host browser on localhost on the aforementioned ports. It's just that hot reload only works when I map the application to port 8080 on my host.
But why??
Now if I do:
PORT='8081'
docker run -it -p "${PORT}:${PORT}" -e "HOST=0.0.0.0" -e "PORT=${PORT}" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
Hot reload of course works. But I am still not sure why I cannot map the container's internal port 8080 to port 8081 on the host.
By the way, I don't see the problem at all if I use vue-cli-service serve instead; everything works out of the box.

I am not a Vue.js user and have never worked with it, but I use Docker heavily in my development workflow, and in the past I experienced a similar issue.
In my case, the JavaScript sent to the browser was trying to connect to the Docker container's internal port, 8080, but since the port mapped on the host was 8081, the JS in the browser could not reach 8080 inside the container, and therefore hot reload was not working.
So it seems to me that you have the same scenario I did: you need to configure hot reload in your Vue.js app to listen on the same port you expose on the host, or simply use the same port for both, which you have already concluded works.
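If you do want to keep the 8081:8080 mapping, one option is to tell the browser-side client which host:port to reach back on. A sketch, assuming webpack-dev-server 3.x and its public option (check the option names in your version):
devServer: {
  // the client in the browser opens its reload websocket against this address,
  // so it must be the host-mapped port, not the container-internal one
  public: 'localhost:8081',
  ...
}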

If watchOptions doesn't work, you can try the other option:
environment:
  - CHOKIDAR_USEPOLLING=true
As per docs here:
“If watching does not work for you, try out this option. Watching does not work with NFS and machines in VirtualBox.”
Reference:
https://daten-und-bass.io/blog/enabling-hot-reloading-with-vuejs-and-vue-cli-in-docker/
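For context, the environment: snippet above is docker-compose syntax. A minimal sketch of where it would live, with the service name chosen as a placeholder and the image name taken from the question:
services:
  frontend:
    image: my-frontend-image
    ports:
      - "8080:8080"
    environment:
      - HOST=0.0.0.0
      - CHOKIDAR_USEPOLLING=true
    volumes:
      - .:/app
      - /app/node_modules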

This was asked a long time ago, but I had the same problem and then noticed there's a sockPort option within the devServer config object that lets you set the port used by the websocket connection that communicates with the server for live/hot-reloading purposes.
What I did was set this option via an environment variable, and it worked just fine when accessing the dev server from outside the container.
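A sketch of that approach, assuming webpack-dev-server 3.2+ (which introduced sockPort); the SOCK_PORT variable name is my own choice:
// build/webpack.dev.conf.js
devServer: {
  ...
  sockPort: process.env.SOCK_PORT ? Number(process.env.SOCK_PORT) : 8080
}
# then pass the host-mapped port when starting the container
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -e "SOCK_PORT=8081" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image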

Running plex as a docker container, cannot access web UI

I am trying to set up my Plex server using Docker. I have followed the steps on the LinuxServer.io Docker page. When I run the docker command, it says that it is running fine and I get no errors. However, when I try to access the web UI at localhost:32400/web, all I get is "Problem loading page".
I am using Docker for Windows with Linux containers.
docker command:
docker run -d --name=plex --net=host -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -e VERSION=docker -v D:\plex\config:/config -v D:\media\tvseries:/data/tvshows -v D:\media\movies:/data/movies -v D:\media\transcode:/transcode --restart unless-stopped linuxserver/plex
When I use docker ps, the plex container looks like it is running.
I am new to Docker. I have looked around and cannot find why I cannot access the UI.
Please let me know if you require additional information.
docker inspect:
"NetworkMode": "host",
"PortBindings": {
"32400/tcp": [
{
"HostIp": "",
"HostPort": "32400"
}
]
},
--net=host does not work with Docker for Windows.
Reasons:
A Linux container needs to share a Linux host's kernel.
To achieve this, when Docker for Windows runs a Linux container, it has to set up a Hyper-V machine. If you open the Hyper-V Manager, you will see a MobyLinuxVM running.
So when you use --net=host, the container just uses the network of the MobyLinuxVM, not of Windows, and localhost will not work.
Suggestion:
For your scenario, I suggest you remove --net=host and add a port mapping on the command line:
docker run -d --name=plex -p 32400:32400 -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -e VERSION=docker -v D:\plex\config:/config -v D:\media\tvseries:/data/tvshows -v D:\media\movies:/data/movies -v D:\media\transcode:/transcode --restart unless-stopped linuxserver/plex
Then the magic happens: Docker for Windows maps the Windows port 32400 to your container using its routing mechanism, and you can reach the container's service from Windows.
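If it still doesn't respond, a couple of quick sanity checks (standard Docker commands, nothing Plex-specific):
docker ps --filter "name=plex"      # the PORTS column should show 0.0.0.0:32400->32400/tcp
docker logs plex                    # look for startup errors
curl -I http://localhost:32400/web  # should return an HTTP status line rather than a refused connection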

Can't access webserver of airflow after run the container

I pulled the latest version of the airflow image, apache/airflow, from Docker Hub, and tried to run a container based on this image:
docker run -d -p 127.0.0.1:5000:5000 apache/airflow webserver
The container is running and the port status looks fine, but I still can't access the Airflow webserver from my browser:
This site can’t be reached.
127.0.0.1 refused to connect.
After a few minutes, the container stops automatically.
Could anyone advise?
I don't have experience with Airflow, but this is how you can get this image to run:
First of all, you have to override the entrypoint, because the existing one doesn't help much. From what I understand, this image needs two steps in order to run: initdb and webserver. For this reason the existing entrypoint is not useful.
Run:
docker run -p 5000:8080 --entrypoint /bin/bash -ti apache/airflow
This opens a shell inside a running container. Also note that I mapped host port 5000 to port 8080 inside the container.
Then inside the container run:
airflow db init
airflow webserver -p 8080
Note that in older versions of airflow, the command to initialize the database is airflow initdb, instead of airflow db init.
Open a browser and navigate to http://localhost:5000
When you close the container your work is gone, though ;)
Another thing you can do is put the two airflow commands in a bash script, mount that script inside the container, and use it as the entrypoint. Something like this:
docker run -p 5000:8080 -v $(pwd)/startup.sh:/opt/airflow/startup.sh --entrypoint /opt/airflow/startup.sh -d --name airflow apache/airflow
You should make startup.sh executable before running this.
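For reference, a minimal sketch of what that startup.sh could contain (untested; it simply chains the two commands from above, using the newer airflow db init form):
#!/usr/bin/env bash
# startup.sh - initialize the metadata DB once, then run the webserver in the foreground
set -e
airflow db init
exec airflow webserver -p 8080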
Let me know if you run into issues.

How to debug persistent data volume mount for Docker Odoo container?

I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and saw other people also struggling with getting the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project.
This seems like a significant gap in Docker's standard use case, and users need better information on how to debug it:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then is why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what the container expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
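For reference, the output looks roughly like this (the size and date will differ); the two numeric columns after the link count are the UID and GID:
drwxr-xr-x 4 101 101 4096 Jan 01 12:00 /var/lib/odoo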
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying that the Odoo Docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
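As a shortcut, Tests 1 and 2 can be combined into a one-liner that reads the expected ownership straight from the image without starting the service at all (assuming the image ships GNU ls, which Debian-based images like odoo do):
sudo docker run --rm --entrypoint ls odoo -dn /var/lib/odoo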

How to start nginx in Docker on Windows

I am using Windows 10 and I have installed Docker and pulled nginx:
docker pull nginx
I started nginx with this command:
docker run -dit --rm --name nginx -p 9001:80 nginx
And simple page is available on localhost:9001.
I would like to pass an nginx.conf file to nginx. Also, I would like to give it a root folder, so that on localhost:9001 I see the static page D:/nginx/website_files/index.html. The folder website_files also contains other static pages.
How to pass nginx.conf and folder path to nginx in Docker on Windows?
I started using Kitematic and pulled hello-world-nginx. With it I was able to browse files by clicking Volumes -> /website_files. In the path that opens, other static files can be added. After that, nginx can be restarted, and it increments the port by 1. The port number can be seen with docker ps.
To change the nginx config file, after starting nginx I run docker cp D:/nginx/multi.conf b3375f37a95c:/etc/nginx/nginx.conf, where b3375f37a95c is the container ID obtained from the docker ps command. After that, nginx should be restarted from Kitematic.
If you only want to edit nginx.conf instead of completely replacing it, you can first get the current conf file with docker cp b3375f37a95c:/etc/nginx/nginx.conf D:/nginx/multi.conf, edit multi.conf, and then copy it back as before.
You can use host volume mapping
-v <host-directory>:<container-path>
for example (do not append /bin/sh here, since that would override the image's default command and nginx would never start):
docker run -dit --rm -v d:/data:/data -p 9001:80 nginx
Try this in PowerShell (note that a single file must be mounted onto a file path, not onto a directory like /etc/nginx/conf.d/):
PS C:\> docker run --name myNGinX -p 80:80 -p 443:443 -v C:\Docker\Volumes\myNGinX\www\:/etc/nginx/www/:ro -v C:\Docker\Volumes\myNGinX\nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
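Adapting the same idea to the paths from the question (a sketch; /usr/share/nginx/html is the default web root in the official nginx image):
docker run -dit --rm --name nginx -p 9001:80 -v D:/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -v D:/nginx/website_files:/usr/share/nginx/html:ro nginx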
Late to the answer party, and shameless self-promotion, but I created a repo using Docker Compose with an Nginx proxy server and two other websites, all in containers.
Check it out here

Link docker containers (Drupal and MariaDB)

To start, I built a Docker container from the MariaDB Docker image.
After that I loaded a database dump file into the running container.
Everything went fine.
When I want to run/link the Drupal image:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -d drupal
I can reach the Drupal installation page, but when I try to connect to the database I always get the same errors: the host, password, or dbname is wrong. But I'm pretty sure my credentials are right.
It seems that my Drupal container can't find the MariaDB container.
Docker links are a deprecated feature and should be avoided: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
I assume you have a container named mariadbdocker running.
If you get bash access inside the drupaldocker container, you should be able to ping the mariadb alias like this:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -it drupal /bin/bash
If the ping succeeds, then you probably do have a credentials issue.
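Since links are deprecated, a user-defined bridge network is the usual replacement. A sketch (the network name is arbitrary, and containers on the same network reach each other by container name):
docker network create drupal-net
docker run --name mariadbdocker --network drupal-net -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=drupal -d mariadb
docker run --name drupaldocker --network drupal-net -p 8089:80 -d drupal
# in the Drupal installer, set the database host to "mariadbdocker"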
