I am trying to run create-react-app's development server inside a docker container and have it recompile and send the changed app code to the client for development purposes, but it isn't picking up the changes from inside the docker container.
(Of course, I have the app's working directory mounted as a volume in the container.)
Is there a way to make this work?
Actually, I found an answer here. Apparently create-react-app uses chokidar to watch file changes, and it has a CHOKIDAR_USEPOLLING flag that makes it poll for file changes instead. So CHOKIDAR_USEPOLLING=true npm start should fix the problem. In my case, I set CHOKIDAR_USEPOLLING=true as an environment variable for the docker container and just started the container.
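For example, a minimal sketch of passing the variable when starting the container (the image name, mount, and port are placeholders for your setup):

docker run --rm -it \
  -e CHOKIDAR_USEPOLLING=true \
  -v "$(pwd)":/app \
  -p 3000:3000 \
  my-react-app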
Polling, suggested in the other answer, causes much higher CPU usage and will drain your battery quickly. You should not need CHOKIDAR_USEPOLLING=true, since file system events should be propagated to the container. As of recently, this works even if your host machine runs Windows: https://docs.docker.com/docker-for-windows/release-notes/#docker-desktop-community-2200 (search for "inotify").
However, when using Docker for Mac, this mechanism sometimes seems to fail: https://github.com/docker/for-mac/issues/2417#issuecomment-462432314
Restarting the Docker daemon helps in my case.
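If you hit that, one quick way to restart Docker Desktop from a macOS terminal (assuming the app keeps its default name, "Docker") is:

osascript -e 'quit app "Docker"' && open -a Docker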
If your changes are not being picked up, it is probably a problem with the file watching mechanism. A workaround is to configure polling. You can do that globally as explained by @Javascriptonian, but you can also do it locally via the webpack configuration. This has the benefit of letting you specify ignored folders (e.g. node_modules), which otherwise slow down the watching process (and lead to high CPU usage) when using polling.
Add the following to your webpack configuration:
devServer: {
  watchOptions: {
    poll: true, // or use an integer for a check every x milliseconds, e.g. poll: 1000
    ignored: /node_modules/ // otherwise it takes a lot of time to refresh
  }
}
Source: webpack watchOptions documentation
If you are having the same issue with nodemon in a back-end Node.js project, you can use the --legacy-watch flag (short: -L), which enables polling as well.
npm exec nodemon -- --legacy-watch --watch src src/main.js
or in package.json:
"scripts": {
"serve": "nodemon --legacy-watch --watch src src/main.js"
}
Documentation: nodemon legacy watch
If you use Linux, you don't need CHOKIDAR_USEPOLLING=true.
From react-scripts v5.0.0 onward, the variable is WATCHPACK_POLLING=true instead of CHOKIDAR_USEPOLLING=true.
A clear answer for react-scripts v5.0.0 onward:
1- Create a .env file in the root directory of the project.
2- Add WATCHPACK_POLLING=true to the .env file.
3- Build a new image.
4- Run a new container.
5- Verify that changes are being detected (see the sketch below).
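A minimal sketch of steps 3-5 (image name, port, and mount are placeholders):

docker build -t my-react-app .
docker run --rm -p 3000:3000 -v "$(pwd)/src:/app/src" my-react-app
# now edit a file under src/ on the host and check that the dev server recompiles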
Or you can just add WATCHPACK_POLLING=true to the command you use to run the container, like this:
docker run --name my-app -it --rm -v $(pwd)/src:/app/src -p 3000:3000 -e WATCHPACK_POLLING=true myapp
In my case, I was running the docker run command in a Git Bash command line (on Windows) and hot reloading was not working. Using react-scripts v5.0.0, setting WATCHPACK_POLLING=true in the .env file and running the docker run command in PowerShell worked.
docker run -it --rm -v ${PWD}:/app -v /app/node_modules -p 3000:3000 -e CHOKIDAR_USEPOLLING=true myapp
I'm following a tutorial on docker and docker compose. Although there is an npm install command in the Dockerfile (as follows), there is a situation where the tutor has to run that command manually.
COPY package*.json .
RUN npm install
That's because he maps the project's current directory into the container via volumes, as follows (so the container effectively runs the source code mapped from the host directory):
api:
  build: ./backend
  ports:
    - 3001:3000
  environment:
    DB_URL: mongodb://db/vidly
  volumes:
    - ./backend:/app
So the npm install command in the Dockerfile doesn't make any sense: the volume mount shadows the image's /app directory, including whatever was installed there at build time. So he runs this command directly in the root of the project.
This means another developer has to run npm install as well (and if I add a new package, I have to do it too), which doesn't seem very developer friendly. The point of docker is that you shouldn't have to carry out these steps yourself; docker-compose up should do everything. Any ideas about this problem would be appreciated.
I agree with @zeitounator, who adds very sensible commentary on your situation and use case.
However, if you did want to solve the original problem of running a container that volume-mounts the code and have it run a development server, then you could move the npm install from a RUN directive at build time to the CMD, or even add an entry script to the container that includes the npm call.
That way you could run the container with the volume mount, and the startup steps (npm install, starting the dev server, etc.) would occur at runtime as opposed to build time.
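For instance, a minimal sketch of such an entry script (the file name and npm scripts are illustrative, not from the tutorial):

#!/bin/sh
# entrypoint.sh: install dependencies at container start,
# so the volume-mounted project always gets a fresh node_modules
npm install
# hand over PID 1 to the dev server so it receives signals directly
exec npm start

In the Dockerfile you would then replace the build-time RUN npm install with something like COPY entrypoint.sh / plus ENTRYPOINT ["/entrypoint.sh"].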
The best solution is, as you mention yourself, Vahid, to use a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input). Perhaps you and your tutor can talk about these differences and come to an agreement.
I'm using docker compose for running my application in a dev environment.
version: '3.4'
services:
  web:
    build:
      context: .
      target: base
    ports:
      - "5000:5000"
    stdin_open: true
    tty: true
    volumes:
      - ./src:/src
    command: node src/main/server/index.js
Compose starts the container and I can see the log output from the node application. When I press CTRL-C, the container is stopped and my application is stopped as well.
I would like only my application to be stopped when I press CTRL-C, not the whole container.
That is the same behavior as when running an app in Windows CMD or a Linux shell: to restart the app, you press CTRL-C, repeat the startup command (node src/main/server/index.js, via the up arrow key), and press enter.
I was thinking I could use something like this, but it does not work.
command: bash -c "node src/main/server/index.js"
I know I can use the commands below to achieve the expected behavior:
docker-compose up -d (to start in detached mode)
docker-compose exec web bash (run an interactive shell)
node src/main/server/index.js (start node manually)
But maybe there is a way to start an interactive bash session and run the application in it using the single command docker-compose up?
Docker runs a main process in its containers; as such, stopping the main process will also stop the container.
I will attempt to answer your question, but I don't think that you should work like that in a Dev environment.
Answering your question, you can "trap" the container in a main process, then just bash into the container and perform the app start.
In order to trap the container, just change the docker-compose command to:
command: bash -c "while true; do sleep 1; done"
To get into an interactive bash in the container:
docker exec -it <CONTAINER-ID> bash
And then you can start or stop the node app.
It seems that the real problem you are facing is a container taking a long time to start; you should probably reorder your Dockerfile to prevent it from re-downloading all dependencies (or repeating other long processes) every time a file changes.
You should place your COPY command after all commands whose results should persist across builds, and take advantage of docker's image layer caching.
If you need a "hot reload" feature, you can research Webpack hot reloading.
You would need to bind your host volume to the container's working directory in order to let webpack properly watch the files and reload the app.
I'm trying to launch a container using docker-compose services, but unfortunately the container exits with code 0.
The container is built from a repository imported from a .tar.gz archive; the archive is a CentOS VM.
I want to create 6 containers from the same archive.
Instead of typing the docker command 6 times, I would like to create a docker-compose.yml file where I can collect their commands and tags.
I have started to write a docker-compose.yml file just to create one container.
Here is my docker-compose.yml :
version: '2'
services:
  dvpt:
    image: compose:test.1
    container_name: cubop1
    command: mkdir /root/essai/
    tty: true
Do not pay attention to the command, as I just had to specify one.
So my question is: why is the container exiting? Is there another way to build these containers at the same time?
Thanks for your responses.
The answer is actually the first comment. I'll explain Miguel's comment a bit.
First, we need to understand that a Docker container runs a single command, and the container runs only as long as the process that command started is running. Once the process completes and exits, the container stops.
With that understanding, we can see what is happening in your case. When you start your dvpt service, it runs the command mkdir /root/essai/. That command creates the folder and then exits. At that point the Docker container stops, because the process exited (with status 0, indicating that mkdir completed with no error).
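You can reproduce this behavior in isolation; a quick sketch, using alpine as a stand-in image:

# the container exits as soon as mkdir finishes
docker run --rm alpine mkdir /root/essai
# prints 0: the command succeeded, and with it the container stopped
echo $?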
Run your container in the background with -d:
$ docker-compose up -d
and in docker-compose.yml add:
mydocker:
  tty: true
You can also end with a command like tail -f /dev/null.
It often works in my docker-compose.yml with
command: tail -f /dev/null
and it is easy to verify that the container keeps running with
docker ps
We had a problem where two of the client services (vitejs) exited with code 0. I added tty: true and it started to work.
dashboard:
  tty: true
  container_name: dashboard
  expose:
    - 8001
  image: tilt.dev/dashboard
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.dashboard.tls=true"
    - "traefik.http.routers.dashboard.entrypoints=web"
    - "traefik.http.routers.dashboard-wss.tls=true"
    - "traefik.http.routers.dashboard-wss.entrypoints=wss"
One solution is to create a process that doesn't end, such as an infinite loop or something else that can run continuously in the background. This keeps the container open because the process won't exit.
This is very much a hack, though. I'm still looking for a better solution.
The Zend Server image does something like this. In their .sh script they have a final command:
exec /usr/local/bin/nothing
This executes a file that runs continuously in the background. I've tried to copy the file contents here, but it must be binary.
EDIT:
You can also end your file with /bin/bash, which starts a new shell in the container and keeps it from closing (provided the container runs with a TTY, e.g. tty: true or -it).
It can be the case that the program (from ENTRYPOINT/CMD) ran successfully and exited (without daemonizing itself). So check the ENTRYPOINT/CMD in your Dockerfile.
Create a Dockerfile and add the line below to execute shell scripts or commands without the exit code 0 error. In your case, it should be:
RUN mkdir /root/essai/
To execute a shell script instead, use the line below:
RUN /<absolute_path_of_container>/demo.sh
I know I am late with this answer, but a few days ago I ran into the same problem and nothing mentioned above worked for me. The real problem, as mentioned in the answers above, is that the container stops after the command exits.
So I used a hack for this.
Note: I have used a Dockerfile for creating the image; you can do it your own way, below is just an example.
I used Supervisor for monitoring the process. As long as supervisor is running, the docker container will not exit.
If you ran into the same problem, do the following to solve the issue:
#1 Install supervisor in Dockerfile
RUN apt-get install -y supervisor
#2 Create a config file (named supervisord.conf) for supervisor, like this:
[include]
files = /etc/supervisor/conf.d/*.conf

[program:app]
command=bash
# directory can be any folder where you want supervisor to cd before executing
directory=/project
autostart=true
autorestart=true
startretries=3
# user can be anyone you want, but make sure the user has sufficient privileges
user=root

[supervisord]
nodaemon=true

[supervisorctl]
#3 Copy the supervisor conf file into the image
COPY supervisord.conf /etc/supervisord.conf
#4 Define an entrypoint
ENTRYPOINT ["supervisord","-c","/etc/supervisord.conf"]
That's it. Now just build the image and run the container; it will keep the container running.
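A minimal sketch of that final build-and-run step (the image name is a placeholder):

docker build -t supervised-app .
docker run -d --name supervised-app supervised-app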
Hope it helps you to solve the problem.
And Happy coding :-)
I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives; it is shared with the other containers using volumes_from.
docker-compose-staging.yml:
version: '2'
services:
  nginx:
    build:
      context: ./
      dockerfile: docker/staging/nginx/Dockerfile
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - app
  php:
    build:
      context: ./
      dockerfile: docker/staging/php/Dockerfile
    expose:
      - 9000
    volumes_from:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/staging/app/Dockerfile
    volumes:
      - /var/www/html
    entrypoint: /bin/bash
This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.
docker/staging/app/Dockerfile:
FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code
This works when I first build and deploy the application. The code in my public and code directories is present and correct on the remote server. I deploy using the following command:
docker-compose -f docker-compose-staging.yml up -d
However, I then try adding a file to my local public directory and run the following command to rebuild the updated code:
docker-compose -f docker-compose-staging.yml build app
The output from this rebuild suggests that the COPY commands were successful:
Building app
Step 1 : FROM php:7.1-fpm
---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479
I then deploy using:
docker-compose -f docker-compose-staging.yml up -d
With the following output:
Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1
However, when I log into the remote containers, the file changes are not present.
I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.
This is because of caching.
Run:
docker-compose build --no-cache
This will rebuild images without using any cache.
And then,
docker-compose -f docker-compose-staging.yml up -d
I was struggling with the fact that migrations were neither detected nor applied. I found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested above solved the problem for me, but I find it cumbersome to have to remember when to do it and when not to. E.g. Vue-related files seem to work just fine, but Django-related files don't.
So I figured: why not adjust the Dockerfile to clean up the previous files before the copy:
RUN rm -rf path/to/your/app
COPY . path/to/your/app
Worked like a charm. Now it's part of the build, and all you need is to run docker-compose up -d --build again. The files are up to date and you can run makemigrations and migrate against your containers.
I had a similar issue, if not the same, while working on a .NET Core application.
What I was trying to do was rebuild my application and have it update my docker image, so that I could see my changes reflected in the containerized copy.
I got going by removing the underlying image generated by docker-compose up, using this command:
docker rmi [imageId]
I believe there should be support for this in docker-compose but this was enough for my need at the moment.
Just leaving this here for when I come back to this page in two weeks.
You may not want to use docker system prune -f in this block.
docker-compose down --rmi all -v \
&& docker-compose build --no-cache \
&& docker-compose -f docker-compose-staging.yml up -d --force-recreate
I had the same issue because of shared volumes. For me, the solution was to remove the shared volume using this command:
docker volume rm [VOLUME_ID]
You can find the volume id or name in the "Mounts" section of the output of this command:
docker inspect [CONTAINER_ID]
None of the above solutions worked for me, but the following steps finally did:
1- Copy/move the file outside of the docker app folder.
2- Delete the file you want to update.
3- Rebuild the docker image without the updated file.
4- Move the copied file back into the docker app folder.
5- Rebuild the docker image again.
Now the image will contain the updates to the file.
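Expressed as shell commands, a sketch of the same workaround (the file path is hypothetical):

mv app/updated-file.txt /tmp/updated-file.txt   # 1-2: move the file out of the app folder
docker-compose build                            # 3: rebuild without it
mv /tmp/updated-file.txt app/updated-file.txt   # 4: move it back
docker-compose build                            # 5: rebuild again, now with the updated file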
I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file that did not seem to be copied into a rebuilt container, despite having turned off caching.
My build process differs slightly as I use Docker Hub's GitHub integration for automating image builds when new commits to the master branch are made. The build happens on Docker's servers rather than the locally built and pushed container image workflow.
What ended up working for me was to do a docker-compose pull, to pull the most up-to-date versions of the containers defined in my .env file down into my local environment. Not sure if the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.
I'd also note that this process allowed me to turn auto-caching back on because the edited file was actually being detected by the Docker build process. I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
I am not sure it is caching, because (a) whether the cache was used or not is usually noted in the build output, and (b) build should detect the changed content in your directory and invalidate the cache.
I would try to bring up the container on the same machine used to build it, to see whether it is updated or not. If it is, the changed image is not being propagated. I do not see any version tag used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image so the staging server can pull it from somewhere?
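If you do push to a registry, a sketch of explicit tagging so the staging host pulls the exact build (the registry, name, and tag are placeholders):

docker build -t registry.example.com/myapp:0.2 .
docker push registry.example.com/myapp:0.2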
You are trying to update an existing volume with the contents of a new image, and that does not work.
https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes
States:
Changes to a data volume will not be included when you update an image.
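In practice this means the stale volume has to be removed so that the next up repopulates it from the new image; a minimal sketch:

docker-compose -f docker-compose-staging.yml down -v   # remove containers and their volumes
docker-compose -f docker-compose-staging.yml build app
docker-compose -f docker-compose-staging.yml up -d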
I want to install the linode/lamp container to work on a wordpress project locally, without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply type docker-compose up and be good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml, but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are to be run when building the image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though; this you do via two lines in the Dockerfile. Overall, I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that it stays running and those services actually get started.
You should also make sure that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should that be the case for you (or if you're not sure), try changing the port line to something like 81:80 and try again.
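To check what is already bound beforehand, a quick sketch (Linux/macOS):

sudo lsof -i :80   # list processes using port 80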
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it mentioned below:
https://github.com/sprintcube/docker-compose-lamp