I am trying to use environment variables in React, so that whenever I send my app to someone, they can view it on their screen without having to run any type of server. But I have this error below:
(screenshot of the error)
Also I will show you my env files and my package.json
.env.development
REACT_APP_API_PATH=http://localhost:3200
.env.production
REACT_APP_API_PATH=https://crud-application-x.herokuapp.com
"scripts": {
"start": "env-cmd .env.development react-scripts start",
"build": "env-cmd .env.production react-scripts build",
"test": " env-cmd .env.development react-scripts test",
"eject": "react-scripts eject"
},
Above are the scripts from my package.json, and this is the link to my UI:
https://crud-application-x.herokuapp.com/getstudents
This application only works on my development side. It is supposed to be a CRUD application, but if you open it you will not see anything. All I want is to put this into full production so that anyone can access it without having to run any server.
You need to add the -f flag before the .env filename
"scripts": {
"start": "env-cmd -f .env.development react-scripts start",
"build": "env-cmd -f .env.production react-scripts build",
"test": " env-cmd -f .env.development react-scripts test",
"eject": "react-scripts eject"
},
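To double-check which value a given file injects, a quick sanity check from the shell could look like this (a sketch, assuming env-cmd v10+ where the -f flag is required):
# prints the API path that env-cmd would inject for a production build
npx env-cmd -f .env.production node -e "console.log(process.env.REACT_APP_API_PATH)"
Keep in mind that Create React App only inlines REACT_APP_* variables at build time, so the production value gets baked into the static files when the build script runs, not when they are served.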
I have a dockerized Angular app and the Dockerfile looks like this:
FROM node:16.13.0-alpine as builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
FROM nginx:1.17.10-alpine
EXPOSE 80
COPY --from=builder /app/dist /usr/share/nginx/html
and every time I run docker run it shows me the default nginx page, but I want it to load my project.
My folder structure looks like this:
(screenshot of the folder structure)
These are my package.json scripts:
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build",
"build:prod": "ng build opentelemetry-interceptor --prod",
"test": "jest --coverage",
"lint": "ng lint",
"release": "standard-version",
"e2e": "ng e2e",
"cypress": "concurrently -k -p \"[{name}]\" -n \"backend,interceptor-example,cypress\" -c \"green.bold,cyan.bold,yellow.bold\" \"npm run start:backend-interceptor-example\" \"npm start interceptor-example\" \"cypress open\"",
"cypress:run": "concurrently -k -s first -p \"[{name}]\" -n \"backend,interceptor-example,cypress\" -c \"green.bold,cyan.bold,yellow.bold\" \"npm run start:backend-interceptor-example\" \"npm start interceptor-example\" \"cypress run\"",
"start:backend-interceptor-example": "node ./projects/interceptor-example/src/backend-api.js",
"start:complete-interceptor-example": "concurrently -k -p \"[{name}]\" -n \"backend,interceptor-example\" -c \"green.bold,cyan.bold\" \"npm run start:backend-interceptor-example\" \"npm start interceptor-example\"",
"compodoc": "npx compodoc -t -p projects/opentelemetry-interceptor/tsconfig.lib.json --theme material -d ./docs -n \"OpenTelemetry Angular Interceptor\""
},
My question is: do I have to do something in "build" to trigger "start:complete-interceptor-example", or do I have to modify the Dockerfile? The "start:complete-interceptor-example" script is the one that runs my app, and that is what I want to happen. It's a little bit confusing to me. In the Dockerfile I tried to run npm run start:complete-interceptor-example; at some point it says compiled successfully, but then it just froze there. Thank you so much for your time!
If you want your app to run using that command, then you have to have Node in your image and you can't use Nginx to serve your app.
Put the command in a CMD directive in the Dockerfile, so Docker will run it when the container starts. Something like this:
FROM node:16.13.0-alpine as builder
WORKDIR /app
COPY . .
RUN npm install
CMD npm run start:complete-interceptor-example
Your app probably listens on a port other than 80, so remember to map the right container port to a host port when you run the image.
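For example, if the dev server listens on Angular's default port 4200 (an assumption; use whatever port ng serve actually reports), running the image would look roughly like this, with my-angular-dev used as a placeholder image name:
# map the container's dev-server port 4200 to port 4200 on the host
docker run -p 4200:4200 my-angular-dev
You will probably also need to start the dev server with ng serve --host 0.0.0.0, since by default it only binds to localhost inside the container.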
This question is specific to my Docker configuration.
I am super new to Docker, hence this problem.
I have tried all the possible solutions on the internet, but none of them worked.
Closest Question: React.js Docker - Module not found
Dockerization of React App
Below are my docker files
Dockerfile
FROM node:10
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install dependencies
COPY package*.json ./
RUN npm install --silent
RUN npm install react-scripts -g --silent
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
My system has node version 10.15.3
docker-compose.yml
version: '3'
services:
  django:
    build: ./api
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    volumes:
      - ./api:/app
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/app
      - /app/node_modules
    ports:
      - "3000:3000"
volumes:
  node-modules:
Folder structure:
api and frontend both have a Dockerfile in them, but it's just the frontend Dockerfile that is causing the issue.
(screenshot of the cache messages)
package.json
{
"name": "frontend",
"version": "0.1.0",
"private": true,
"dependencies": {
"#testing-library/jest-dom": "^5.11.5",
"#testing-library/react": "^11.1.0",
"#testing-library/user-event": "^12.1.10",
"moment": "^2.29.1",
"react": "^17.0.1",
"react-big-calendar": "^0.28.1",
"react-dom": "^17.0.1",
"react-router-dom": "^5.2.0",
"react-scripts": "4.0.0",
"web-vitals": "^0.2.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
I faced the same issue, but the steps below helped me solve it.
When adding a new package to your React project and running it with docker-compose, follow these steps (combined into a single shell sketch after the list):
Stop docker-compose if it is running, with docker-compose down -v
Add the npm module to your React application, e.g. npm install react-plotly.js
docker-compose up -d --build
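Put together as one shell sequence, that looks roughly like this (a sketch; react-plotly.js is just the example package from the step above):
docker-compose down -v          # stop the containers and remove the anonymous node_modules volume
npm install react-plotly.js     # add the new dependency locally so package.json is updated
docker-compose up -d --build    # rebuild the image so the dependency ends up inside the container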
After looking at your Dockerfile, it looks fine to me, so I think the way you're installing the package is what is causing the issue.
I'm building a Docker image for debugging my React application, with a separate Dockerfile
FROM node:11-alpine
COPY package.json .
COPY yarn.lock .
RUN yarn install
COPY public/ ./public/
COPY src/ ./src/
EXPOSE 3000
CMD yarn run start
with package.json
{
"name": "yarn-start-in-kubernetes",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.8.1",
"react-dom": "^16.8.1",
"react-scripts": "2.1.3",
"babel-loader": "8.0.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"browserslist": [
">0.2%",
"not dead",
"not ie <= 11",
"not op_mini all"
]
}
which starts the development server as expected when the image is used with docker run. An upgrade to
"react-scripts": "2.1.3",
"babel-loader": "8.0.4"
makes the setup unusable, because the development server terminates:
> docker run dev
yarn run v1.15.2
$ react-scripts start
ℹ 「wds」: Project is running at http://172.17.0.2/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
Done in 2.72s.
The docker run command returns right after Done in ....
I'd like to use the more up-to-date versions, which work fine in production. How can I make them work in the debugging image?
The versions don't seem to affect the development server outside Docker, i.e. yarn start works with both version combinations.
I was having the same issue and just added:
stdin_open: true
to my docker-compose.yml
Adding stdin_open: true or tty: true to the docker-compose file works:
services:
  app:
    tty: true
    stdin_open: true # without this node doesn't start
stdin_open stands for standard input (interactive) and tty means terminal.
Commit 7e6d6cd indicates a check for interactivity.
https://docs.docker.com/compose/reference/run/
If you are not using docker-compose, you must use the -i and -t flags together in order to keep stdin open and allocate a tty for the container process.
https://docs.docker.com/engine/reference/run/#foreground
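If you are running the image directly rather than through docker-compose, the equivalent would be something like the following, using the image name dev from the docker run output above:
# -i keeps stdin open and -t allocates a pseudo-tty, which keeps react-scripts from exiting
docker run -it -p 3000:3000 dev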
I am trying to install my local npm packages (using yarn workspaces) in my Docker container, but it does not seem to install the local packages, though it does install the global packages successfully.
Also, when I bash into the container and run "lerna bootstrap", it installs my local packages successfully. I have been searching all over the internet for why this is happening.
My Dockerfile looks like this:
FROM node:12.4.0-alpine
RUN apk add --no-cache bash
RUN apk add --no-cache yarn
WORKDIR "/app"
RUN yarn global add lerna
COPY . .
RUN yarn install
RUN lerna bootstrap
CMD ["npm", "run", "dev"]
My root package.json file looks like
{
"name": "my-test",
"version": "1.0.0",
"description": "Test",
"main": "app.js",
"private": true,
"workspaces": [
"packages/**"
],
"scripts": {
"start": "node app.js",
"dev": "nodemon -L app.js"
},
"author": "Phantom",
"license": "ISC",
"dependencies": {
"config": "3.0.1",
"dotenv": "7.0.0",
"express": "4.16.4",
"node-locale": "2.0.0",
},
"devDependencies": {
"lerna": "^3.15.0",
"#lerna/bootstrap": "3.8.5",
"nodemon": "1.18.10",
}
}
My lerna.json file looks like
{
"version": "1.0.0",
"npmClient": "yarn",
"useWorkspaces": true,
"packages": ["packages/*"]
}
As a workaround, I am running the following docker command once the container is up:
docker exec -w /app <my-container-name> lerna bootstrap
I know this is not a proper solution, so can someone please help me out?
How do you remove file dependencies from a Docker image? Every time I build and upload a new Docker image, it takes layers and information from past images. How do I remove this connection so that all Docker builds are independent from all other images?
E.g.: I had a working file, and when I re-uploaded it with no changes it stopped working, and since then I can't re-upload a working file.
Dockerfile:
FROM 10.119.222.132:5000/node:8.5.0-wheezy
ENV http_proxy=http://www-proxy.abc.ca:3128/
ENV https_proxy=http://www-proxy.abc.ca:3128/
ENV PORT 5000
RUN apt-get update
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY . .
CMD ["node", "server.js"]
package.json:
{
"name": "api-proxy",
"version": "1.0.0",
"description": "API Gateway",
"main": "server.js",
"dependencies": {
"body-parser": "^1.17.2",
"crypto": "^1.0.1",
"cors": "^2.8.4",
"express": "^4.15.4",
"jsonwebtoken": "^7.4.2",
"mongodb": "^2.2.31",
"request": "^2.81.0"
},
"devDependencies": {},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node server.js"
},
"author": "",
"license": "ISC"
}
You could build it using:
docker build --no-cache -t my_new_image:v1.0.0 .
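Since the image also gets uploaded, the full rebuild-and-push cycle could look roughly like this; the registry address is taken from the FROM line above and api-proxy from the package.json name, both as assumptions about where you actually push:
# rebuild every layer from scratch, reusing nothing from previous images
docker build --no-cache -t 10.119.222.132:5000/api-proxy:v1.0.0 .
# upload the freshly built image to that registry
docker push 10.119.222.132:5000/api-proxy:v1.0.0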