Run commands in a Docker container and sync automatically with the host

I Dockerized a MENN (Next.js) stack app, and everything works fine. But I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
client:
build:
context: ./client
dockerfile: Dockerfile
ports:
- 3000:3000
networks:
- mern-network
volumes:
- ./client/src:/usr/app/src
- ./client/public:/usr/app/public
depends_on:
- server
environment:
- REACT_APP_SERVER=http://localhost:5000
- CHOKIDAR_USEPOLLING=true
command: npm run dev
stdin_open: true
tty: true
server:
build:
context: ./server
dockerfile: Dockerfile
ports:
- 5000:5000
networks:
- mern-network
volumes:
- ./server/src:/usr/app/src
depends_on:
- db
environment:
- MONGO_URL=mongodb://db:27017
- CLIENT=http://localhost:3000
command: /usr/app/node_modules/.bin/nodemon -L src/index.js
db:
image: mongo:latest
ports:
- 27017:27017
networks:
- mern-network
volumes:
- mongo-data:/data/db
networks:
mern-network:
driver: bridge
volumes:
mongo-data:
driver: local
Now if I install a package from the host machine, package.json is updated as expected, and if I run
docker-compose build
package.json is also updated inside the container, which is fine. But I feel like this kind of breaks the whole point of having the app Dockerized: if multiple developers working on the app all need node/npm installed on their machines, what is the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh   # log into the server container
and run a command, e.g.
npm i express
It installs the package and updates the container's package.json, but the host package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host's source code into the container. Is there a way to synchronize the container and the host, so that installing a package inside the container also updates the host files? That way I wouldn't need node/npm installed locally, which would fulfill the purpose of having the app Dockerized!
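For reference, a common way to get this two-way synchronization (a sketch based on the compose file above, not part of the original question) is to bind-mount the whole service directory, package.json included, and keep node_modules in an anonymous volume so the dependencies installed in the image are not shadowed by the host:

services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      # bind-mount the whole project so package.json edits flow both ways
      - ./server:/usr/app
      # anonymous volume so the container keeps its own node_modules
      - /usr/app/node_modules

With this layout, an npm i express run via docker exec writes to the bind-mounted package.json, so the host copy is updated too; the anonymous node_modules volume still belongs to the container, so teammates without local node/npm just rebuild to pick up the new dependency.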

Related

How to run db-migrate up in a Dockerfile?

When I build the following Dockerfile, the build stops in the middle of the process at the RUN db-migrate up step, right after Prisma is installed. But when I ran the same command inside the running container with docker exec and /bin/bash, it worked without any problem. I don't think I can run the app before serving it. There is a workaround, though: putting the migration commands as a service in docker-compose.yml. How can I achieve that? Or is there another way to run those migration RUN commands in this Dockerfile?
Dockerfile
FROM node:16.15.0-alpine
WORKDIR /app
COPY package*.json ./
# generated prisma files
COPY prisma ./prisma/
# COPY ENV variable
COPY .env ./
# COPY
COPY . .
RUN npm install
RUN npm install -g db-migrate
RUN npm install -g prisma
RUN db-migrate up
RUN prisma db pull
RUN prisma generate
EXPOSE 3000
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '3.8'
services:
  mysqldb:
    image: mysql:5.7
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  auth:
    depends_on:
      - mysqldb
    build: ./auth
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
volumes:
  db:
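The underlying issue is that RUN steps execute while the image is being built, when the mysqldb container is not running, so db-migrate cannot reach the database; docker exec works because by then the database is up. One way to express the workaround the question mentions (a sketch against the compose file above; the service name migrate and the command line are assumptions) is to remove the three RUN lines that execute migrations from the Dockerfile, keep the npm install -g lines, and run the migrations as a short-lived service:

services:
  migrate:
    build: ./auth
    env_file: ./.env
    depends_on:
      - mysqldb
    # runs once at startup and exits; db-migrate and prisma are already
    # installed globally in the image
    command: sh -c "db-migrate up && prisma db pull && prisma generate"

Note that depends_on only orders container startup; in practice the command may still need a small retry loop until MySQL actually accepts connections.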

Docker doesn't create custom node-red image

I'm new to Docker and I've been trying to create a custom Node-RED image with a custom flow for InfluxDB. Docker doesn't seem to use my Dockerfile for the image creation.
This is my docker-compose:
node-red:
  container_name: node-red
  build: node-red
  environment:
    - TZ=Europe/Amsterdam
  ports:
    - "1880:1880"
  volumes:
    - node-red-data:/tmp/node-red_data
  networks:
    - node-red-net
Then, inside a folder called node-red I have this Dockerfile:
FROM nodered/node-red AS base
COPY package.json .
RUN npm install --only=production
COPY nodered_flow.json /data/flows.json
CMD ["npm", "start"]
Both the package.json and the nodered_flow.json are in the same folder as the dockerfile. What am I doing wrong here?
In the service's build key you need the path to the build context; if it is the same directory as the docker-compose.yml, you can use a single dot. So this should work:
node-red:
  container_name: node-red
  build: .
  environment:
    - TZ=Europe/Amsterdam
  ports:
    - "1880:1880"
  volumes:
    - node-red-data:/tmp/node-red_data
  networks:
    - node-red-net
For more information check: https://docs.docker.com/compose/compose-file/compose-file-v3/#build
Your Dockerfile is wrong: it puts your package.json in /usr/src/node-red, and when you run npm install there it will remove Node-RED. It should look like this:
FROM nodered/node-red AS base
WORKDIR /data
COPY package.json /data
COPY nodered_flow.json /data/flows.json
RUN npm install --only=production
WORKDIR /usr/src/node-red
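Whichever fix you apply, keep in mind that compose happily reuses a previously built image, which can look like Docker ignoring your Dockerfile. Forcing a rebuild (standard docker-compose flags, not part of the original answers) makes sure the Dockerfile is actually used:

docker-compose build --no-cache node-red
docker-compose up -d node-red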

Nx monorepo with NestJS & Angular not reloading in container

I have created an Nx monorepo with Angular and NestJS apps and tried very hard to make live reload work inside containers, but to no avail. The directories are mounted correctly, and I verified that changes on the host are being written inside the container, but somehow the process is not picking them up.
I have created a standalone NestJS application and successfully made it work in a container.
Github repo: https://github.com/navdbaloch/dockerized-development-with-nx-monorepo-angular-nestjs
ENV: windows 10 with WSL2, Docker Desktop 4.2.0
Following is the docker-compose.yml file:
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    hostname: poirot_frontend
    image: poirot_frontend
    build:
      context: .
      dockerfile: ./apps/fwa/Dockerfile.angular
      target: development
    ports:
      - 4200:4200
    networks:
      - poirot-network
    depends_on:
      - api
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    command: npm run start:app
  api:
    container_name: test-api
    hostname: poirot_api
    image: poirot_api
    build:
      context: .
      dockerfile: ./apps/fwa-api/Dockerfile.api
      target: development
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    ports:
      - 3333:3333
      - 9229:9229
    command: npm run start:api
    env_file:
      - .env
    networks:
      - poirot-network
networks:
  poirot-network:
    driver: bridge
Dockerfile.angular
FROM node:14-alpine AS development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist && \
    npm install --only=development
COPY . .
RUN npm run build:app

#! this is the production image
FROM nginx:latest AS production
COPY ./docker/angular.conf /etc/nginx/nginx.conf
COPY --from=development /usr/src/dist/apps/fwa /usr/share/nginx/html
Dockerfile.api
FROM node:14-alpine AS development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist && \
    npm install --only=development
COPY . .
RUN npm run build:api

#! this is the production image
FROM node:14-alpine AS production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY package*.json ./
RUN npm install minimist typescript ts-node lodash reflect-metadata tslib rxjs @nestjs/platform-express @types/bcrypt && \
    npm install --only=production
COPY . .
COPY --from=development /usr/src/dist/apps/fwa-api ./dist
EXPOSE 3333
#! Migration runner command: node_modules/ts-node/dist/bin.js migration-runner.ts
CMD ["node", "dist/main"]
Finally, I was able to make it work after a lot of trial and error.
For the Angular application, I changed the serve command from npx nx serve to npx nx serve --host 0.0.0.0 --poll 2000.
For the API, I added a "poll": 2000 option in angular.json at projects.api.architect.build.options.
I have also updated the GitHub repo for reference, for anyone looking for the same solution.
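For illustration, the angular.json change would sit roughly like this (an abridged sketch showing only the path named above; all sibling keys are omitted):

{
  "projects": {
    "api": {
      "architect": {
        "build": {
          "options": {
            "poll": 2000
          }
        }
      }
    }
  }
}

Polling is needed here because file-change events often do not propagate into containers through Docker Desktop's WSL2 bind mounts, so the watcher has to check for changes itself.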

How to build a Dockerfile for a Vue.js application

I am new to Docker. I've built an application with Vue.js 2 that interacts with an external API. I would like to run the application in Docker.
Here is my docker-compose.yml file
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
Here is my Dockerfile:
FROM node:14.17.0-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node"]
Here is the command I run to build my image and container:
docker-compose up -d
The image and container build without error, but when I run the container it stops immediately, so the container is not running.
Are the Dockerfile and compose files set up correctly?
First of all, you run both npm install and yarn install, which do the same thing, just with different package managers. Secondly, you are using CMD ["node"], which does not start your Vue application, so there is no job running and Docker shuts the container down.
For a Vue application you normally want to build the app into static assets and then run a simple HTTP server to serve the static content.
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy 'package.json' to install dependencies
COPY package*.json ./
# install dependencies
RUN npm install
# copy files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Your docker-compose file could be as simple as
version: "3.7"
services:
vue-app:
build:
context: .
dockerfile: Dockerfile
container_name: vue-app
restart: always
ports:
- "8080:8080"
networks:
- vue-network
networks:
vue-network:
driver: bridge
To run the service from docker-compose, use the command property in your docker-compose.yml:
services:
  vue-app:
    command: >
      sh -c "yarn serve"
I'm not sure about the problem, but by using command: tail -f /dev/null in your docker-compose file you can keep the container up, so you can get inside it and track down the error. Do that by running docker exec -it <CONTAINER-NAME> sh (the alpine image ships sh rather than bash) and checking the error logs in your container.
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    command: tail -f /dev/null
    ports:
      - '8080:8080'
In your Dockerfile you have to start your application, e.g. with npm run start or whatever script in your package.json you use to run the application.
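For example (a sketch; the serve script name is an assumption about what the package.json defines), the last line of the development Dockerfile could be:

# start the Vue dev server instead of a bare node REPL
CMD ["npm", "run", "serve"]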

How to run gulp after "docker-compose up"?

Can you help with running gulp tasks inside a Docker container with docker-compose, so that it compiles SCSS files?
My file structure at host machine:
/application
--/config
--/models
--/public
----/scss
----/css
----index.html
----gulpfile.js
--/routes
.dockerignore
.gitignore
Dockerfile
docker-compose.yml
package.json
server.js
/application/public/gulpfile.js:
var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass', function() {
  // return the stream so gulp knows when the task finishes
  return gulp.src('./scss/styles.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulp.dest('./css'));
});

gulp.task('sass:watch', function() {
  gulp.watch('./scss/styles.scss', ['sass']);
});
Dockerfile:
FROM node:6
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install nodemon -g && npm install bower -g && npm install gulp -g
RUN cd /app
RUN npm install
RUN cd /app/public && bower install --allow-root
RUN cd /app
COPY . /app
docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: nodemon /app/server.js
    volumes:
      - .:/app/
      - /app/node_modules
      - /app/public
      - /app/config
      - /app/models
      - /app/routes
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    restart: always
Ideally, it would be good to start watching the SCSS files inside /application/public via a command in the docker-compose.yml file, but I have wasted a couple of days on this problem.
I also tried running gulp inside the container. That actually works fine, but the changes were not reflected on the host machine.
Please do not suggest ready-made Docker Hub images; I have used them and they did not solve my issue.
I'll be thankful for any help, links, info, or a ready solution.
You can add a separate service to your compose file to run a separate command in a separate container (see the &web anchor on the web service and the gulp service below):
version: '2'
services:
  web: &web
    build: .
    command: nodemon /app/server.js
    volumes:
      - .:/app/
      - /app/node_modules
      - /app/public
      - /app/config
      - /app/models
      - /app/routes
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  gulp:
    # merge in everything from the web service, then override the command
    <<: *web
    command: <gulp command to watch files>
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    restart: always
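For what it's worth, since the question's gulpfile lives in /app/public and defines a sass:watch task, the placeholder could plausibly be filled in like this (an assumption based on the question's layout, not part of the original answer). The merged ports mapping would clash with the web service on the host, so it is overridden away as well:

gulp:
  <<: *web
  command: sh -c "cd /app/public && gulp sass:watch"
  # drop the inherited "3000:3000" mapping so the two containers
  # don't fight over the same host port
  ports: []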
