I'm having trouble creating a Docker React app

I'm trying to get create-react-app working on Ubuntu under WSL2 without installing Node locally, using Docker instead. I have the following setup.
So I have this utility.docker-compose.yml file:
version: '3.7'
services:
  npm:
    build:
      context: .
      dockerfile: utility.Dockerfile
    volumes:
      - ./:/app
    stdin_open: true
    tty: true
utility.Dockerfile:
FROM node:18-alpine
WORKDIR /app
ENTRYPOINT [ "npm" ]
I also have a shell script
docker-compose -f utility.docker-compose.yml run --rm npm "init" "react-app" "my-app"
This is under my directory \wsl$\Ubuntu-20.04\home\username\Projects
I am able to install npm packages by modifying the shell script to something like "install" "axios". That works, but I haven't had any luck creating a React app that way.
I keep getting the error "sh: create-react-app: Permission denied". I tried changing the permissions and ownership, but nothing works.
sh: create-react-app: Permission denied
npm ERR! code 127
npm ERR! path /app
npm ERR! command failed
npm ERR! command sh -c -- create-react-app my-app
But when I try it on Windows 10, it works easily. Any ideas on how to fix this issue? I'd rather not keep moving my files from Windows 10 to my WSL2 Ubuntu instance.
I prefer using WSL2 because when I make a change in React, I can see the change reflected right away, whereas on Windows 10 I have to rebuild the containers.

So, I ended up finding another Stack Overflow question that worked for my situation. The command below is what worked for me.
docker run -it -p 8080:80 -v $PWD:/app -w /app node:12-slim bash
The link below is the solution that worked for me.
create-react-app error in nodejs docker environment
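For anyone who wants to stay with the compose-based utility container instead, one possible variant (a sketch only, not tested against the original setup) is to override the npm entrypoint and let npx fetch and run create-react-app itself:
# --entrypoint overrides the image's ENTRYPOINT ["npm"]; npx downloads
# create-react-app into its own cache and runs it directly
docker-compose -f utility.docker-compose.yml run --rm --entrypoint npx npm create-react-app my-app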

Related

Why aren't modules installed by only bind-mounting a volume in docker-compose?

When I ran docker-compose build and docker-compose up -d,
the api-server container didn't start.
I tried
docker logs api-server
yarn run v1.22.5
$ nest start --watch
/bin/sh: nest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
It seems the Nest packages weren't installed, because package.json was not copied from the host into the container.
But in my opinion, since the volume is bind-mounted by docker-compose.yml, the yarn install command should see the files from - ./api:/src.
Why do we need to COPY files into the container?
Why doesn't the volume binding alone work?
If someone has an opinion, please let me know.
Thanks
The following is the Dockerfile:
FROM node:alpine
WORKDIR /src
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
The following is the docker-compose.yml:
version: '3'
services:
  api-server:
    build: ./api
    links:
      - 'db'
    ports:
      - '3000:3000'
    volumes:
      - ./api:/src
      - ./src/node_modules
    tty: true
    container_name: api-server
Volumes are mounted at runtime, not at build time. Therefore, in your case, you should copy package.json before installing dependencies and before running any command that needs those dependencies.
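A minimal sketch of that ordering for the Dockerfile above (assuming a yarn.lock exists next to package.json in ./api; the names follow the question, the ordering is the point):
FROM node:alpine
WORKDIR /src
# copy only the dependency manifests first, so yarn install can run at build time
COPY package.json yarn.lock ./
RUN yarn install
# copy the rest of the source; at runtime the bind mount takes its place
COPY . .
CMD yarn start:dev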
Some references:
Docker build using volumes at build time
Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?

Input/Output Error on Create-React-App in Docker

I'm trying to dockerize my create-react-app development environment and preserving hot reloads. According to most guides (and this guy), the most direct way is docker run -p 3000:3000 -v "$(pwd):/var/www" -w "/var/www" node npm start in the project folder.
However, I'm getting this error instead:
$ docker run -p 3000:3000 -v "$(pwd):/var/www" -w "/var/www" node npm start
> my-app@0.1.0 start /var/www
> react-scripts start
sh: 1: react-scripts: Input/output error
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! my-app@0.1.0 start: `react-scripts start`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the my-app@0.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2020-04-02T06_55_22_257Z-debug.log
I'm running on Windows. I believe mounting the volume might have some permission issues leading to the input/output error, but testing various settings didn't work out. I'm honestly stumped. All I want is to run my app in Docker with hot reload for development.
As it turns out, setting up create-react-app in Docker takes a little more work.
The primary issue is that mounted volumes are not available at build time, so when node npm start runs, the mounted project files technically don't exist yet.
As such, you need to copy the project in and install it first, so it can run once before the volume mounts. Hot reloading works normally afterwards.
Here's my final working setup:
docker-compose.yml:
create-react-app:
  build:
    context: create-react-app
  ports:
    - 3000:3000
  environment:
    - NODE_PATH=/node_modules
    - CHOKIDAR_USEPOLLING=true
  volumes:
    - ./create-react-app:/create-react-app
Dockerfile:
FROM node:alpine
# Extend PATH
ENV PATH=$PATH:/node_modules/.bin
# Set working directory
WORKDIR /client
# Copy project files for build
ADD . .
# Install dependencies
RUN npm install
# Run create-react-app server
CMD ["npm", "run", "start"]

docker build IMAGE results in error but docker-compose up -d works fine

I am new to Docker. I am trying to create a Docker image for a NodeJS project which I will upload/host on a Docker repository. When I execute docker-compose up -d, everything works fine and I can access the NodeJS server that is hosted inside the Docker containers. After that, I stopped all containers and tried to create a Docker image from the Dockerfile using the following commands:
docker build -t adonis-app .
docker run adonis-app
The first command executes without any error but the second command throws this error:
> adonis-fullstack-app@4.1.0 start /app
> node server.js
internal/modules/cjs/loader.js:983
throw err;
^
Error: Cannot find module '/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:980:15)
at Function.Module._load (internal/modules/cjs/loader.js:862:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! adonis-fullstack-app@4.1.0 start: `node server.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the adonis-fullstack-app@4.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /app/.npm/_logs/2020-02-09T17_33_22_489Z-debug.log
Can someone help me with this error and tell me what is wrong with it?
Dockerfile I am using:
FROM node
ENV HOME=/app
RUN mkdir /app
ADD package.json $HOME
WORKDIR $HOME
RUN npm i -g @adonisjs/cli && npm install
CMD ["npm", "start"]
docker-compose.yaml:
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    ports:
      - '3306:3306'
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_USER: ${DB_USER}
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: 1
    networks:
      - api-network
  adonis-api:
    container_name: "${APP_NAME}-api"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
    networks:
      - api-network
networks:
  api-network:
Your Dockerfile is missing a COPY step to actually copy your application code into the image. When you docker run the image, there's no actual source code to run, and you get the error you're seeing.
Your Dockerfile should look more like:
FROM node
# WORKDIR creates the directory; no need to set $HOME
WORKDIR /app
COPY package.json package-lock.json ./
# all of your dependencies are in package.json
RUN npm install
# actually copy the application in
COPY . .
CMD ["npm", "start"]
Now that your Docker image is self-contained, you don't need the volumes: that try to inject host content into it. You can also safely rely on several of Docker Compose's defaults (the default network and the generated container_name: are both fine to use). A simpler docker-compose.yml looks like
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    # as you have it, except delete the networks: block
  adonis-api:
    build: .  # this directory is the context:; the default dockerfile: is used
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
There are several key problems in the set of artifacts and commands you show:
docker run and docker-compose ... are separate commands. The docker run command you show runs the image as-is, with its default command, with no volumes mounted, and with no ports published. docker run doesn't know about the docker-compose.yml file, so whatever options you have specified there won't have an effect. You probably mean docker-compose up, which will also start the database. (In your application, remember to retry the database connection several times while it comes up; it can often take 30-60 seconds.)
If you're planning to push the image, you need to include the source code. You're essentially creating two separate artifacts in this setup: a Docker image with Node and some libraries, and also a Javascript application on your host. If you docker push the image, it won't include the application (because you're not COPYing it in), so you'll also have to separately distribute the source code. At that point there's not much benefit to using Docker; an end user may as well install Node, clone your repository, and run npm install themselves.
You're preventing Docker from seeing library updates. Putting node_modules in an anonymous volume seems to be a popular setup, and Docker will copy content from the image into that directory the first time you run the application. The second time you run the application, Docker sees the directory already exists, assumes it to contain valuable user data, and refuses to update it. This leads to SO questions along the lines of "I updated my package.json file but my container isn't updating".
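If you do keep the anonymous node_modules volume pattern, the stale volume must be explicitly recreated before a dependency change becomes visible; docker-compose's -V flag exists for exactly this:
# --renew-anon-volumes (-V) recreates anonymous volumes such as /app/node_modules
# from the newly built image instead of reusing their old contents
docker-compose up -d --build -V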
Your docker-compose.yaml file has two services:
adonis-mysql
adonis-api
Only the second service uses the current Dockerfile, as can be seen from the following section:
build:
  context: .
  dockerfile: Dockerfile
The command docker build . only builds the image from the current Dockerfile, i.e. adonis-api, which is then run.
So most probably it is the missing mysql service that is giving you the error. You can verify by running
docker ps -aq
to check if the sql container is also running. Hope it helps.
Conclusion: Use docker-compose.
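That is, rather than building and running one image by hand, let Compose build and start both services together:
# builds adonis-api, pulls mysql:5.7, and starts both with the
# volumes, ports, and network from docker-compose.yaml applied
docker-compose up -d --build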

Error while creating mount source path when using docker-compose in Windows

I am trying to dockerize my React/Flask app by dockerizing each of them and using docker-compose to put them together.
Here is what the Dockerfile for each app looks like:
React - Frontend
FROM node:latest
WORKDIR /frontend/
ENV PATH /frontend/node_modules/.bin:$PATH
COPY package.json /frontend/package.json
COPY . /frontend/
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
CMD ["npm", "run", "start"]
Flask - Backend
#Using ubuntu as our base
FROM ubuntu:latest
#Install commands in ubuntu, including pymongo for DB handling
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
RUN python -m pip install pymongo[srv]
#Unsure of COPY command's purpose, but WORKDIR points to /backend
COPY . /backend
WORKDIR /backend/
RUN pip install -r requirements.txt
#Run order for starting up the backend
ENTRYPOINT ["python"]
CMD ["app.py"]
Each of them works fine when I just use docker build and docker run; I've checked that they work when they are built and run independently. However, when I run docker-compose up with the docker-compose.yml below,
# Docker Compose
version: '3.7'
services:
  frontend:
    container_name: frontend
    build:
      context: frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - '.:/frontend'
      - '/frontend/node_modules'
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    volumes:
      - .:/code
it gives me the error below:
Starting frontend ... error
Starting dashboard_backend_1 ... error
ERROR: for frontend Cannot start service frontend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: for backend Cannot start service backend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: Encountered errors while bringing up the project.
Did this happen because I am using Windows? What can be the issue? Thanks in advance.
For me, the only thing that worked was restarting the Docker daemon.
Check if this is related to docker/for-win issue 1560
I had the same issue. I was able to resolve it by running:
docker volume rm -f [name of docker volume with error]
Then restarting docker, and running:
docker-compose up -d --build
I tried these same steps without restarting Docker, only restarting my computer, and that didn't resolve the issue.
What resolved it for me was removing the volume with the error, restarting Docker, then doing a build again.
Other cause:
On Windows this may be due to a user password change. Uncheck the box to stop sharing the drive and then allow Docker to detect that you are trying to mount the drive and share it.
Also mentioned:
I just ran docker-compose down and then docker-compose up. Worked for me.
I tried docker container prune, then pressed y to remove all stopped containers. The issue has gone.
I saw this after I deleted a folder I'd shared with Docker and recreated one with the same name. I think this invalidated the permissions. To resolve it I:
Unshared the folder in docker settings
Restarted docker
Ran docker container prune
Ran docker-compose build
Ran docker-compose up.
Restarting the Docker daemon will work.

Docker-compose volume mount before run

I have a Dockerfile I'm pointing at from a docker-compose.yml.
I'd like the volume mount in the docker-compose.yml to happen before the RUN in the Dockerfile.
Dockerfile:
FROM node
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
    && npm install
ENTRYPOINT gulp watch
docker-compose.yml:
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
It makes complete sense for it to process the Dockerfile first and then mount from docker-compose; however, is there a way to get around that?
I want to keep the Dockerfile generic while passing the more specific bits in from Compose. Perhaps that's not the best practice?
Erik Dannenberg's answer is correct: the volume layering means that what I was trying to do makes no sense. (There is another really good explanation on the Docker website if you want to read more.) If I wanted Docker to do the npm install, I could do it like this:
FROM node
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
    && npm install
CMD ["gulp", "watch"]
However, this isn't appropriate as a solution for my situation. The goal is to use NPM to install project dependencies, then run gulp to build my project. This means I need read and write access to the project folder and it needs to persist after the container is gone.
I need to do two things after the volume is mounted, so I came up with the following solution...
docker/gulp/Dockerfile:
FROM node
RUN npm install --global gulp-cli
ADD start-gulp.sh .
CMD ./start-gulp.sh
docker/gulp/start-gulp.sh:
#!/usr/bin/env bash
until cd /usr/src/app && npm install
do
    echo "Retrying npm install"
done
gulp watch
docker-compose.yml:
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
So now the container starts a bash script that will continuously loop until it can get into the directory and run npm install. This is still quite brittle, but it works. :)
You can't mount host folders or volumes during a Docker build; allowing that would compromise build repeatability. The only way to access local data during a Docker build is through the build context, which is everything in the path or URL you passed to the build command. Note that the Dockerfile needs to exist somewhere in the context. See https://docs.docker.com/engine/reference/commandline/build/ for more details.
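In other words, anything the build needs must come in through the context (two illustrative invocations; the image tag and repository URL are placeholders):
# the context is the final argument; COPY/ADD can only reference files inside it
docker build -t my-app .
# a Git URL also works: the repository is cloned and used as the context
docker build -t my-app https://github.com/user/repo.git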
