Bitnami/Express 4.16.4 - npm install - docker

I need to install additional Node.js modules in the Bitnami Docker container.
I would like to install the body-parser module in the container. I've started the container with sudo docker-compose up and it runs fine. I tried modifying the Dockerfile and docker-compose.yml to install body-parser, but I get an EACCES: permission denied, access '/app/node_modules' error. Can you help?
TIA,
Thomas
*** UPDATE 4/23/2019 ***
This is the Dockerfile.
I added the body-parser line.
## Dockerfile for building production image
FROM bitnami/express:4.16.4-debian-9-r166
LABEL maintainer "John Smith <john.smith@acme.com>"
ENV DISABLE_WELCOME_MESSAGE=1
ENV NODE_ENV=production \
    PORT=3000
# Skip fetching dependencies and database migrations for production image
ENV SKIP_DB_WAIT=0 \
    SKIP_DB_MIGRATION=1 \
    SKIP_NPM_INSTALL=1 \
    SKIP_BOWER_INSTALL=1
COPY . /app
RUN sudo chown -R bitnami: /app
RUN npm install
RUN npm install --save body-parser
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '2'
services:
  mongodb:
    image: 'bitnami/mongodb:latest'
  express:
    tty: true # Enables debugging capabilities when attached to this container.
    image: 'bitnami/express:4'
    command: npm start
    environment:
      - PORT=3000
      - NODE_ENV=development
      - DATABASE_URL=mongodb://mongodb:27017/myapp
      - SKIP_DB_WAIT=0
      - SKIP_DB_MIGRATION=0
      - SKIP_NPM_INSTALL=0
      - SKIP_BOWER_INSTALL=0
    depends_on:
      - mongodb
    ports:
      - 3000:3000
    volumes:
      - .:/app
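A common workaround for this kind of EACCES error (a sketch, not the accepted fix from this thread): add an anonymous volume for node_modules so the host bind mount cannot shadow the directory the container installs into, the same pattern the related answers below use:
    volumes:
      - .:/app
      - /app/node_modules   # anonymous volume: the container's node_modules survives the bind mount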

Related

Run commands on docker container and sync automatically with host

I Dockerized a MENN (Next.js) stack app, and everything works fine. I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
client:
build:
context: ./client
dockerfile: Dockerfile
ports:
- 3000:3000
networks:
- mern-network
volumes:
- ./client/src:/usr/app/src
- ./client/public:/usr/app/public
depends_on:
- server
environment:
- REACT_APP_SERVER=http://localhost:5000
- CHOKIDAR_USEPOLLING=true
command: npm run dev
stdin_open: true
tty: true
server:
build:
context: ./server
dockerfile: Dockerfile
ports:
- 5000:5000
networks:
- mern-network
volumes:
- ./server/src:/usr/app/src
depends_on:
- db
environment:
- MONGO_URL=mongodb://db:27017
- CLIENT=http://localhost:3000
command: /usr/app/node_modules/.bin/nodemon -L src/index.js
db:
image: mongo:latest
ports:
- 27017:27017
networks:
- mern-network
volumes:
- mongo-data:/data/db
networks:
mern-network:
driver: bridge
volumes:
mongo-data:
driver: local
Now if I install any packages on the host machine, package.json is updated as expected, and if I run
docker-compose build
package.json is also updated inside the container, which is fine. But I feel like this kind of breaks the whole point of having your app Dockerized! If multiple developers need to work on this app and they all need node/npm installed on their machines, what's the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh   # log in to the server container
and run a command, e.g.
npm i express
It installs the package and updates package.json, but the host package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host's source code into the container. Is there a way to synchronize the container and the host, so that installing a package inside the container also updates the host's files? That way I wouldn't need node/npm installed locally, which fulfills the purpose of having the app Dockerized!
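One way to get that synchronization (a sketch, not from the original thread): widen the server's bind mount so package.json itself lives on the host, and park node_modules in an anonymous volume so the container's installed dependencies are not shadowed by the mount:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      - ./server:/usr/app          # whole app dir, package.json included
      - /usr/app/node_modules      # installs stay inside the container
With this layout, npm i express run via docker exec writes to the bind-mounted package.json, so the host copy stays in sync.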

Nx monorepo with NestJS & Angular not reloading in container

I have created an Nx monorepo with Angular and NestJS apps and have tried very hard to make live reload work inside containers, but to no avail. The directories are mounted correctly, and I verified that changes on the host are being written inside the container, but somehow the process is not picking them up.
I have created a standalone NestJS application and successfully made it work with the container.
Github repo: https://github.com/navdbaloch/dockerized-development-with-nx-monorepo-angular-nestjs
Environment: Windows 10 with WSL2, Docker Desktop 4.2.0
Following is the docker-compose.yml file:
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    hostname: poirot_frontend
    image: poirot_frontend
    build:
      context: .
      dockerfile: ./apps/fwa/Dockerfile.angular
      target: development
    ports:
      - 4200:4200
    networks:
      - poirot-network
    depends_on:
      - api
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    command: npm run start:app
  api:
    container_name: test-api
    hostname: poirot_api
    image: poirot_api
    build:
      context: .
      dockerfile: ./apps/fwa-api/Dockerfile.api
      target: development
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    ports:
      - 3333:3333
      - 9229:9229
    command: npm run start:api
    env_file:
      - .env
    networks:
      - poirot-network
networks:
  poirot-network:
    driver: bridge
Dockerfile.angular
FROM node:14-alpine As development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist && \
npm install --only=development
COPY . .
RUN npm run build:app
#! this is the production image
FROM nginx:latest as production
COPY ./docker/angular.conf /etc/nginx/nginx.conf
COPY --from=development /usr/src/dist/apps/fwa /usr/share/nginx/html
Dockerfile.api
FROM node:14-alpine As development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist &&\
npm install --only=development
COPY . .
RUN npm run build:api
#! this is the production image
FROM node:14-alpine as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY package*.json ./
RUN npm install minimist typescript ts-node lodash reflect-metadata tslib rxjs @nestjs/platform-express @types/bcrypt && \
npm install --only=production
COPY . .
COPY --from=development /usr/src/dist/apps/fwa-api ./dist
EXPOSE 3333
#! Migration runner command: node_modules/ts-node/dist/bin.js migration-runner.ts
CMD ["node", "dist/main"]
Finally, I was able to make it work after a lot of trial and error.
For the Angular application, change the serve command from npx nx serve to npx nx serve --host 0.0.0.0 --poll 2000.
For the API, add the "poll": 2000 option in angular.json at projects.api.architect.build.options.
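For reference, a sketch of that angular.json entry (only the poll option comes from the answer; the surrounding structure is the standard workspace layout):
{
  "projects": {
    "api": {
      "architect": {
        "build": {
          "options": {
            "poll": 2000
          }
        }
      }
    }
  }
}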
I have also updated the GitHub repo for reference, for anyone looking for the same solution.

Docker: ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder818844007/entrypoint.sh: no such file or directory

I'm having problems when I build the container with MongoDB; when running docker-compose up I get the following error:
ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder367230859/entrypoint.sh: no such file or directory
I tried changing Mongo to PostgreSQL, but the error persists.
My files are below; thanks in advance.
This is the docker-compose.yml:
version: '3'
services:
  web:
    image: nginx
    restart: always
    # volumes:
    #   - ${APPLICATION}:/var/www/html
    #   - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
    #   - ${NGINX_SITES_PATH}:/etc/nginx/conf.d
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
    ports:
      - "27017:27017"
    # volumes:
    #   - data:/data/db
    networks:
      - mongo
  app:
    build: .
    volumes:
      - .:/mm_api
    ports:
      - 3000:3000
    depends_on:
      - mongo
networks:
  web:
    driver: bridge
  mongo:
    driver: bridge
This is the Dockerfile:
FROM ruby:2.7.0
RUN apt-get update -qq && apt-get install -y nodejs
RUN mkdir /mm_api
WORKDIR /mm_api
COPY Gemfile /mm_api/Gemfile
COPY Gemfile.lock /mm_api/Gemfile.lock
RUN bundle install
COPY . /mm_api
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma,rb"]
#CMD ["rails", "server", "-b", "0.0.0.0"]
This is the entrypoint:
#!/bin/bash
set -e
rm -f /mm_api/tmp/pids/server.pid
exec "$#"
I had a similar issue when working on a Rails 6 application using Docker.
When I run docker-compose build, I get the error:
Step 10/16 : COPY Gemfile Gemfile.lock ./
ERROR: Service 'app' failed to build : COPY failed: stat /var/lib/docker/tmp/docker-builder408411426/Gemfile.lock: no such file or directory
Here's how I fixed it:
The issue was that the Gemfile.lock was missing from my project directory. I had deleted it when I was having some issues with my gem dependencies.
All I had to do was run the command below to install the necessary gems and re-create the Gemfile.lock:
bundle install
And then, when I ran docker-compose build again, everything worked fine.
So whenever you encounter this issue, check that the file is present in your directory and, most importantly, that the path you specified to the file is correct.
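For example (a quick sanity check; the .dockerignore step is an assumption, not from the answer), run these from the build context directory:
ls entrypoint.sh Gemfile.lock   # confirm the files COPY expects actually exist
cat .dockerignore               # make sure they aren't excluded from the build context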
That's all.
I hope this helps

Unable to npm install in a node docker image

FROM node:latest
WORKDIR /frontend/
ENV PATH /frontend/node_modules/.bin:$PATH
COPY package.json /frontend/package.json
COPY . /frontend/
RUN npm install --silent
RUN npm install react-scripts#3.0.1 -g --silent
CMD ["npm", "run", "start"]
This is my Dockerfile for the frontend of my project.
I put this as one of the services in my docker-compose.yml file, and when I run docker-compose up -d --build, it gives me
Step 6/8 : RUN npm install --silent
---> Running in 09a4f59a96fa
ERROR: Service 'frontend' failed to build: The command '/bin/sh -c npm install --silent' returned a non-zero code: 1
My docker-compose file is below for your reference:
# Docker Compose
version: '3.7'
services:
  frontend:
    container_name: frontend
    build:
      context: frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - '.:/frontend'
      - '/frontend/node_modules'
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    volumes:
      - .:/code
Thanks in advance
EDIT: Error in the frontend after build
For docker-compose, I think it should be
- ./frontend:/frontend
as the build context is frontend.
Second, if you are using a volume, why are you installing and copying code in the Dockerfile? If you are using a bind volume, remove these lines from your Dockerfile, as they will be overridden by the host code:
COPY package.json /frontend/package.json
COPY . /frontend/
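Putting that together, the frontend service would look something like this (a sketch based on the advice above, not a verified fix):
frontend:
  container_name: frontend
  build:
    context: frontend
    dockerfile: Dockerfile
  ports:
    - "3000:3000"
  volumes:
    - ./frontend:/frontend       # bind mount matching the build context
    - /frontend/node_modules     # keeps container-installed deps out of the host mount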

Dockerfile and docker-compose.yaml for different environments

docker-compose for prod:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build: .
    ports:
      - "443:443"
    links:
      - db
    volumes:
      - /www/node_modules
Dockerfile for prod:
FROM alpine:3.4
LABEL authors="John Doe"
RUN apk add --update nodejs bash git
COPY package.json /www/package.json
RUN cd /www; apk --no-cache add --virtual builds-deps build-base python && npm install && npm rebuild bcrypt --build-from-source && apk del builds-deps
COPY . /www
WORKDIR /www
ENV PORT 8080
EXPOSE 8080
CMD ["npm", "start"]
docker-compose for dev:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
    volumes:
      - .:/www
      - /www/node_modules
Dockerfile for dev
FROM alpine:3.4
LABEL authors="John Doe"
RUN apk add --update nodejs bash git
COPY package.json /www/package.json
RUN cd /www; apk --no-cache add --virtual builds-deps build-base python && npm install && npm rebuild bcrypt --build-from-source && apk del builds-deps
WORKDIR /www
ENV PORT 8080
EXPOSE 8080
CMD ["npm", "run", "dev"]
I'm running it with docker-compose up.
Right now I have to manually edit the files in order to switch environments, which is, of course, the wrong way to do this.
I assume there should be a way to avoid these manual changes. How do I do that?
You can specify environment in the services part of the docker-compose.yml file.
Example:
services:
  api-server:
    environment:
      NODE_ENV: "development"
      APP_PORT: 5000
      DB_URI: "<DB URI>"
And in your code you can read these values via process.env.NODE_ENV.
The Dockerfile should contain the commands for creating the image; that image, when used by your docker-compose api-server service, will run the server as required.
For example, in your case, your Dockerfile should look something like this:
FROM alpine:3.4
LABEL authors="John Doe"
RUN apk add --update nodejs bash git
RUN mkdir /www
WORKDIR /www
ADD package.json /www/package.json
RUN apk --no-cache add --virtual builds-deps build-base python && npm install && npm rebuild bcrypt --build-from-source && apk del builds-deps
This will create your image.
Regarding your docker-compose.yml file, use two separate docker-compose files for production and development, and use env files to separate out the development and production variables. You can check the development docker-compose.yml and development env file into your repository; the production docker-compose and environment files will be specific to your production server.
Your sample docker-compose.yml file should look something like this:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    links:
      - db
    env_file:
      development.env
    volumes:
      - ./:/www
      - /www/node_modules # I really don't understand this statement
    command: >
      /bin/ash -c "npm run dev"
This will run your development server.
Similarly, the same docker-compose.yml file with different ports exposed for production (443:443 in your case), env_file set to production.env, and command set to /bin/ash -c "npm start" will run your production server:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "443:443"
    links:
      - db
    env_file:
      production.env
    volumes:
      - ./:/www
      - /www/node_modules # I really don't understand this statement
    command: >
      /bin/ash -c "npm start"
If you are running the development server and the production server on the same machine (never advisable), you can create two files, docker-compose-development.yml and docker-compose-production.yml, for the development and production systems respectively, and then start the appropriate one with:
sudo docker-compose -f docker-compose-development.yml up
sudo docker-compose -f docker-compose-production.yml up
You can also use environment variables. For example, set export env="prod" in your local machine's terminal, and in the docker-compose file write
image: container_image_${env} or image: container_image:${env}
which will resolve to container_image_prod or container_image:prod.
You can likewise name the db service db_${env} to get an environment-specific service name (db_prod in this case), and similarly for other services if required.
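A minimal sketch of that substitution (container_image is the answer's placeholder name; assumes export env=prod has been run in the shell first):
services:
  api-server:
    image: container_image:${env}   # compose expands this to container_image:prod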
