I want to run my Node application with a single command, npm start, and based on NODE_ENV it should either run nodemon app.js (NODE_ENV=dev) or node app.js (NODE_ENV=production), but in detached mode.
What I have so far:
package.json
"start": "docker-compose up --build --force-recreate api",
"api:dev": "nodemon app.js",
"api:prod": "node app.js"
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json .
RUN npm install
ENTRYPOINT ["/bin/sh"]
CMD ["-c", "if [ \"$NODE_ENV\" = \"dev\" ]; then npm run api:dev; else npm run api:prod; fi"]
And finally the docker-compose file
api:
  restart: always
  container_name: my_api
  build:
    context: .
    dockerfile: ./docker/api/Dockerfile
  depends_on:
    - postgres
  ports:
    - "${API_PORT}:${API_PORT}"
  env_file: .env
  networks:
    - back
  volumes:
    - .:/app
    - node_modules:/app/node_modules
  logging:
    options:
      max-file: "10"
      max-size: "10m"
Where/how can I set up the conditional detach mode? I know I should pass -d to docker-compose up to detach, but I want that to be conditional on NODE_ENV.
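One possible approach (a minimal sketch, untested, and start.sh is just a name I'm making up) is to point "start" at a small wrapper script ("start": "sh ./start.sh" in package.json) and add -d there based on NODE_ENV, since docker-compose up itself has no conditional detach flag:
start.sh
#!/bin/sh
# Hypothetical wrapper invoked by "npm start": detach only in production.
if [ "$NODE_ENV" = "production" ]; then
  exec docker-compose up --build --force-recreate -d api
else
  exec docker-compose up --build --force-recreate api
fi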
When I build the following Dockerfile, in the middle of the process, after installing Prisma, the RUN db-migrate up step stops. But when I ran it inside the container with docker exec and /bin/bash, it worked without any problem. I don't think I can run the app before serving it, but there is a workaround, such as running the migration commands as a service in docker-compose.yml (or otherwise at container startup). How can I achieve that, or is there any way to make these migration RUN commands work in this Dockerfile? (A sketch of the startup approach follows the docker-compose.yml below.)
Dockerfile
FROM node:16.15.0-alpine
WORKDIR /app
COPY package*.json ./
# generated prisma files
COPY prisma ./prisma/
# COPY ENV variable
COPY .env ./
# COPY
COPY . .
RUN npm install
RUN npm install -g db-migrate
RUN npm install -g prisma
RUN db-migrate up
RUN prisma db pull
RUN prisma generate
EXPOSE 3000
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '3.8'
services:
  mysqldb:
    image: mysql:5.7
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  auth:
    depends_on:
      - mysqldb
    build: ./auth
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
volumes:
  db:
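Here is a sketch of that workaround (untested; docker-entrypoint.sh is a name I'm choosing): drop the db-migrate/prisma RUN steps from the Dockerfile and replace the CMD with something like ENTRYPOINT ["sh", "./docker-entrypoint.sh"], so the migrations run when the container starts and mysqldb is actually reachable.
docker-entrypoint.sh
#!/bin/sh
# Hypothetical entrypoint: run the migrations at container start, when the
# database service is up, then hand over to the dev server.
set -e
db-migrate up
prisma db pull
prisma generate
exec npm run dev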
I Dockerized a MENN (Next.js) stack app, and everything works fine, but I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    networks:
      - mern-network
    volumes:
      - ./client/src:/usr/app/src
      - ./client/public:/usr/app/public
    depends_on:
      - server
    environment:
      - REACT_APP_SERVER=http://localhost:5000
      - CHOKIDAR_USEPOLLING=true
    command: npm run dev
    stdin_open: true
    tty: true
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    networks:
      - mern-network
    volumes:
      - ./server/src:/usr/app/src
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db:27017
      - CLIENT=http://localhost:3000
    command: /usr/app/node_modules/.bin/nodemon -L src/index.js
  db:
    image: mongo:latest
    ports:
      - 27017:27017
    networks:
      - mern-network
    volumes:
      - mongo-data:/data/db
networks:
  mern-network:
    driver: bridge
volumes:
  mongo-data:
    driver: local
Now if I install any package on the host machine, it is updated in package.json as expected, and if I run
docker-compose build
the package.json is also updated inside the container, which is fine. But I feel like this kind of breaks the whole point of having your app Dockerized: if multiple developers need to work on this app and they all need to install Node/npm on their machines, what's the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh   # log into the server container
and then run a command, e.g.
npm i express
It installs the package and updates package.json inside the container, but the host package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host's source code into the container. Is there a way to synchronize the container and the host, so that if I install a package inside the container it also updates the host files? That way I wouldn't need Node/npm installed locally, which fulfills the purpose of having the app Dockerized.
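One workaround I've seen (a rough sketch, and it assumes the server service bind-mounts the whole project directory, e.g. ./server:/usr/app, with a separate named volume for /usr/app/node_modules) is to run npm through the container instead of on the host:
# Install inside the server container; with the bind mount in place the
# host package.json and package-lock.json get updated as well.
docker-compose run --rm server npm install express
# Rebuild so the image itself picks up the new dependency.
docker-compose build server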
I used docker-compose to create Docker containers in my React, Node.js, and Postgres project.
After I created the Dockerfiles and docker-compose.yml, I ran docker-compose up --build.
Then I wasn't able to create the containers and got errors.
The errors I get are:
Error 1
Error 2
Error 3
How can I fix it and successfully build containers?
Here is a docker-compose.yml file in './'
version: '3'
services:
  server:
    container_name: mylivingcity_server
    build: ./server
    expose:
      - 3001
    ports:
      - 3001:3001
    volumes:
      - ./server/config:/usr/src/app/server/config
      - ./server/controllers:/usr/src/app/server/controllers
      - ./server/db:/usr/src/app/server/db
    command: npm run start
  postgres:
    image: postgres:12
    container_name: mylivingcity_postgres
    ports:
      - 5432:5432
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=mylivingcity
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  frontend:
    container_name: mylivingcity_frontend
    build: ./frontend
    expose:
      - 3000
    ports:
      - 3000:3000
    volumes:
      - ./frontend/src:/usr/src/app/frontend/src
      - ./frontend/public:/usr/src/app/frontend/public
    command: npm start
    stdin_open: true
Here is a Dockerfile in './frontend'
FROM node:12
# Create frontend directory
RUN mkdir -p /usr/src/app/frontend/
WORKDIR /usr/src/app/frontend/
# Install dependencies
COPY package*.json /usr/src/app/frontend/
RUN npm install
COPY . /usr/src/app/frontend/
CMD [ "npm" , "start" ]
Here is a Dockerfile in './server'
FROM node:12
# Create server directory
RUN mkdir -p /usr/src/app/server/
WORKDIR /usr/src/app/server/
# Install dependencies
COPY package*.json /usr/src/app/server/
RUN npm install
COPY . /usr/src/app/server/
CMD [ "npm" , "run" , "start" ]
I'm using AWS ECS repository for docker images.
My docker-compose.yml file looks like:
version: "3"
services:
  my-front-end:
    image: myFrontEndImage:myTag
    links:
      - my-back-end
    ports:
      - "8080:8080"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
  my-back-end:
    image: myBackEndImage:myTag
    ports:
      - "3000:3000"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
What I need is to be able to pass an environment variable from the docker-compose file into my Docker image.
What I tried was adding the lines for environment (following the example).
version: "3"
services:
  my-front-end:
    image: myFrontEndImage:myTag
    links:
      - my-back-end
    environment:
      - BACKEND_SERVER_PORT=3001
    ports:
      - "8080:8080"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
  my-back-end:
    image: myBackEndImage:myTag
    ports:
      - "3000:3000"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
Then, in my project (which is a VueJS project), I'm trying to access it via process.env.BACKEND_SERVER_PORT. But I do not see my value, and when I tried console.log(process.env); I see that it only contains {NODE_ENV: "development"}.
So my question here is, how to pass the env variable from the docker-compose to my docker image, so that I will be able to use it inside my project?
Everything in the project works fine; I've been working on this project for a long time and the docker-compose file works. It's just that now, when I need to add this environment variable, I can't make it work.
EDIT: adding a few more files, as requested in a comment.
The Dockerfile for my-front-end looks like:
FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm i npm@latest -g && \
    npm install
COPY . .
CMD ["npm", "start"]
As mentioned, this is a VueJS application, and here is the part of package.json you may be interested in:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"build": "node build/build.js"
},
While the Dockerfile for my-back-end looks like:
FROM node:8.11.1
WORKDIR /app/server
COPY package*.json ./
RUN npm i npm@latest -g && \
    npm install
COPY . .
CMD ["npm", "start"]
My back-end is actually an Express.js app listening on a separate port, and the app is placed in a server folder under the root of the project.
Here is the part of package.json which you may be interested in:
"scripts": {
"start": "nodemon src/app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
I think you are doing everything right in terms of configuring docker-compose, but it seems there are some nuances to passing an environment variable to a VueJS application.
According to the answers to this question, you need to name your variables VUE_APP_* to be able to read them from the client side.
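A rough sketch of what that could look like (the variable name here is just an example, and this assumes Vue CLI conventions): rename the compose entry to VUE_APP_BACKEND_SERVER_PORT=3001, read it in the Vue code as process.env.VUE_APP_BACKEND_SERVER_PORT, and remember the value is baked into the bundle at build time, so the front-end image has to be rebuilt:
# Rebuild and restart the front-end so the renamed variable is picked up.
docker-compose build my-front-end
docker-compose up -d my-front-end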
Docker-compose.yaml:
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_HOST: localhost
      MYSQL_DATABASE: mydb
      MYSQL_USER: mysql
      MYSQL_PASSWORD: 1234
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    expose:
      - 3307
    volumes:
      - /var/lib/mysql
      - ./mysql/migrations:/docker-entrypoint-initdb.d
    restart: unless-stopped
  web:
    build:
      context: .
      dockerfile: web/Dockerfile
    volumes:
      - ./:/web
    ports:
      - "32768:3000"
    environment:
      NODE_ENV: development
      PORT: 3000
    links:
      - mysql:mysql
    depends_on:
      - mysql
    expose:
      - 3000
    command: ["./wait-for-it.sh", "mysql:3306", "--", "npm start"]
Web Dockerfile:
FROM node:6.11.2-slim
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
CMD [ "npm", "start" ] # So this is overridden by the wait script and doesn't execute
I'm using this wait script:
https://github.com/vishnubob/wait-for-it
The wait script works fine, however it overrides the existing start command for the web container:
CMD [ "npm", "start" ]
As you can see in the docker-compose file I'm using this approach to kick off npm start:
command: ["./wait-for-it.sh", "mysql:3306", "--", "npm start"]
I have tried a few alternatives, e.g.:
command: ["./wait-for-it.sh", "mysql:3306", "--", "CMD ['npm', 'start'"]
command: ["./wait-for-it.sh", "mysql:3306", "--", "docker-entrypoint.sh"]
Only it is not working. I get this error from the web container:
web_1 | ./wait-for-it.sh: line 174: exec: npm start: not found
What's going on here?
So, first of all, if you use command in docker-compose it will override the CMD, and that is expected behavior. How would Docker know you want to execute both of them?
Next, your approach with the command is a bit wrong:
command: ["./wait-for-it.sh", "mysql:3306", "--", "npm start"]
translates to you executing
./wait-for-it.sh mysql:3306 -- "npm start"
This is bound to fail, since there is no command called "npm start"; it is npm that takes start as the argument. So change the command to
command: ["./wait-for-it.sh", "mysql:3306", "--", "npm", "start"]
or
command: ./wait-for-it.sh mysql:3306 -- npm start
Use whichever format you like.