How to run gulp after "docker-compose up"? - docker

Can you help with running gulp tasks inside a Docker container with docker-compose, so that it compiles SCSS files?
My file structure at host machine:
/application
--/config
--/models
--/public
----/scss
----/css
----index.html
----gulpfile.js
--/routes
.dockerignore
.gitignore
Dockerfile
docker-compose.yml
package.json
server.js
/application/public/gulpfile.js:
var gulp = require('gulp');
var sass = require('gulp-sass');

// compile scss/styles.scss into the css/ directory
gulp.task('sass', function() {
  return gulp.src('./scss/styles.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulp.dest('./css'));
});

// recompile whenever the source file changes
gulp.task('sass:watch', function() {
  gulp.watch('./scss/styles.scss', ['sass']);
});
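For reference, outside Docker this watcher would be started from the directory that contains the gulpfile (assuming the gulp CLI is installed globally):
cd /application/public
gulp sass:watch
The compose setup below needs to reproduce exactly this invocation inside the container.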
Dockerfile:
FROM node:6
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install nodemon -g && npm install bower -g && npm install gulp -g
RUN cd /app
RUN npm install
RUN cd /app/public && bower install --allow-root
RUN cd /app
COPY . /app
docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: nodemon /app/server.js
    volumes:
      - .:/app/
      - /app/node_modules
      - /app/public
      - /app/config
      - /app/models
      - /app/routes
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    restart: always
Ideally, I would like to run the command from the docker-compose.yml file to start watching the SCSS files inside the /application/public folder, but I have wasted a couple of days on this problem.
I also tried running gulp inside the container. It actually works fine, but the changes were not reflected on the host machine.
Please do not suggest ready-made Docker Hub images. I have tried them and they did not solve my issue.
I'll be thankful for any help, links, info, or a ready-made solution.

You can add a separate service to your compose file that runs a separate command in its own container (see the &web anchor on the web service and the gulp service below):
version: '2'
services:
  web: &web
    build: .
    command: nodemon /app/server.js
    volumes:
      - .:/app/
      - /app/node_modules
      - /app/public
      - /app/config
      - /app/models
      - /app/routes
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  gulp:
    <<: *web
    command: <gulp command to watch files>
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    restart: always
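Since <<: *web merges the whole web definition, the gulp service only needs to override what differs. A minimal sketch of a concrete override, assuming the sass:watch task from the gulpfile above (working_dir and the emptied ports list are assumptions, added so the watcher runs where gulpfile.js lives and does not compete with web for host port 3000):
  gulp:
    <<: *web
    working_dir: /app/public   # gulpfile.js lives here inside the container
    command: gulp sass:watch   # the watch task defined in the question's gulpfile
    ports: []                  # drop the inherited "3000:3000" to avoid a host-port clash
Note that compiled CSS only shows up on the host where the .:/app/ bind mount is in effect; the anonymous /app/public volume in the question's volume list shadows that subtree inside the container, which would explain why changes were not reflected on the host.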

Related

How to run db-migrate up on Dockerfile?

When I build the following Dockerfile, the process stops in the middle, after installing Prisma, at the RUN command db-migrate up. But when I ran the same command inside the running container with docker exec, it worked without any problem. I don't think I can run the app before serving it, but there is a workaround: putting the migration commands in a service in docker-compose.yml. How can I achieve that? Or is there any way to run those migration RUN commands in this Dockerfile?
Dockerfile
FROM node:16.15.0-alpine
WORKDIR /app
COPY package*.json ./
# generated prisma files
COPY prisma ./prisma/
# COPY ENV variable
COPY .env ./
# COPY
COPY . .
RUN npm install
RUN npm install -g db-migrate
RUN npm install -g prisma
RUN db-migrate up
RUN prisma db pull
RUN prisma generate
EXPOSE 3000
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '3.8'
services:
  mysqldb:
    image: mysql:5.7
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  auth:
    depends_on:
      - mysqldb
    build: ./auth
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
volumes:
  db:
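The underlying issue is that RUN db-migrate up executes at image-build time, when no database container is reachable, whereas docker exec runs inside a live container attached to the compose network. A minimal sketch of the workaround mentioned in the question, as a one-shot service (the migrate service name and the combined command are assumptions, and the migration RUN lines would be removed from the Dockerfile):
  migrate:
    build: ./auth
    env_file: ./.env
    depends_on:
      - mysqldb
    # run the migrations against the live database, then exit
    command: sh -c "db-migrate up && prisma db pull && prisma generate"
    restart: "no"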

Run commands on docker container and sync automatically with host

I Dockerized a MENN (Next.js) stack app, and everything works fine. I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
client:
build:
context: ./client
dockerfile: Dockerfile
ports:
- 3000:3000
networks:
- mern-network
volumes:
- ./client/src:/usr/app/src
- ./client/public:/usr/app/public
depends_on:
- server
environment:
- REACT_APP_SERVER=http://localhost:5000
- CHOKIDAR_USEPOLLING=true
command: npm run dev
stdin_open: true
tty: true
server:
build:
context: ./server
dockerfile: Dockerfile
ports:
- 5000:5000
networks:
- mern-network
volumes:
- ./server/src:/usr/app/src
depends_on:
- db
environment:
- MONGO_URL=mongodb://db:27017
- CLIENT=http://localhost:3000
command: /usr/app/node_modules/.bin/nodemon -L src/index.js
db:
image: mongo:latest
ports:
- 27017:27017
networks:
- mern-network
volumes:
- mongo-data:/data/db
networks:
mern-network:
driver: bridge
volumes:
mongo-data:
driver: local
Now, if I install any package on the host machine, package.json is updated as expected, and if I run
docker-compose build
package.json is also updated inside the container, which is fine. But I feel like this breaks the whole point of having the app Dockerized: if multiple developers need to work on this app and they all need Node/npm installed on their machines, what is the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh   # log into the server container
then run a command, e.g.
npm i express
It installs the package and updates package.json, but the host's package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host's source code into the container. Is there a way to synchronize the container and the host, so that if I install a package inside the container it also updates the host files? That way I wouldn't need Node/npm installed locally, which fulfills the purpose of having the app Dockerized.
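One common pattern for this two-way sync, sketched here under the assumption that the whole service directory rather than just src/ is bind-mounted: mount ./server over /usr/app, and keep an anonymous volume for node_modules so the host's empty or missing node_modules does not shadow the container's.
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      - ./server:/usr/app        # whole directory, so package.json syncs both ways
      - /usr/app/node_modules    # container-only dependencies, not shadowed by the host
With this layout, docker-compose exec server npm i express updates ./server/package.json on the host, and other developers rebuild the image to bake in the new dependency.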

Nx mono repo with NestJs & angularJs not reloading in container

I have created an Nx monorepo with Angular and NestJS apps and tried very hard to make live reload work inside containers, but to no avail. The directories are mounted correctly, and I verified that changes on the host are being written inside the container, but somehow the process is not picking them up.
I have created a standalone nestJS application and successfully made it work with the container.
Github repo: https://github.com/navdbaloch/dockerized-development-with-nx-monorepo-angular-nestjs
ENV: windows 10 with WSL2, Docker Desktop 4.2.0
Following is the docker-compose.yml file:
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    hostname: poirot_frontend
    image: poirot_frontend
    build:
      context: .
      dockerfile: ./apps/fwa/Dockerfile.angular
      target: development
    ports:
      - 4200:4200
    networks:
      - poirot-network
    depends_on:
      - api
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    command: npm run start:app
  api:
    container_name: test-api
    hostname: poirot_api
    image: poirot_api
    build:
      context: .
      dockerfile: ./apps/fwa-api/Dockerfile.api
      target: development
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    ports:
      - 3333:3333
      - 9229:9229
    command: npm run start:api
    env_file:
      - .env
    networks:
      - poirot-network
networks:
  poirot-network:
    driver: bridge
Dockerfile.angular
FROM node:14-alpine As development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist && \
npm install --only=development
COPY . .
RUN npm run build:app
#! this is the production image
FROM nginx:latest as production
COPY ./docker/angular.conf /etc/nginx/nginx.conf
COPY --from=development /usr/src/dist/apps/fwa /usr/share/nginx/html
Dockerfile.api
FROM node:14-alpine As development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist &&\
npm install --only=development
COPY . .
RUN npm run build:api
#! this is the production image
FROM node:14-alpine as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY package*.json ./
RUN npm install minimist typescript ts-node lodash reflect-metadata tslib rxjs @nestjs/platform-express @types/bcrypt && \
npm install --only=production
COPY . .
COPY --from=development /usr/src/dist/apps/fwa-api ./dist
EXPOSE 3333
#! Migration runner command: node_modules/ts-node/dist/bin.js migration-runner.ts
CMD ["node", "dist/main"]
Finally, I was able to make it work after a lot of trial and error.
For the Angular application, I changed the serve command from npx nx serve to npx nx serve --host 0.0.0.0 --poll 2000.
For the API, I added the "poll": 2000 option in angular.json at projects.api.architect.build.options.
I have also updated the GitHub repo for reference, for anyone looking for the same solution.
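For reference, a sketch of where that option lands in angular.json (the surrounding structure is assumed from a typical Nx workspace; only the "poll" entry is the actual change):
{
  "projects": {
    "api": {
      "architect": {
        "build": {
          "options": {
            "poll": 2000
          }
        }
      }
    }
  }
}
Polling sidesteps the fact that file-change notifications often fail to propagate across the Windows/WSL2 bind mount, which matches the symptom above: the mounted files change, but the watcher never fires.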

docker-compose complains about missing ']'

I'm trying to get nodemon and docker-compose to work with each other and I'm having a problem: every time I run docker-compose up I get the following error: sh: missing ]. Here are the relevant files:
Dockerfile
FROM node:11.1.0-alpine
MAINTAINER TileHalo
WORKDIR /usr/src/app
RUN npm install -g nodemon
COPY package*.json ./
RUN npm install --save
COPY . .
EXPOSE 3000
CMD [ "nodemon", "./bin/www"]
and docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "3000:3000"
depends_on:
- postgres
volumes:
- .:/usr/src/app
postgres:
image: "postgres:alpine"
environment:
POSTGRES_PASSWORD: supersalainen
POSTGRES_USER: kipa
EDIT: Works with plain docker after fixing the missing comma in CMD, but still doesn't work with docker-compose.
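One likely explanation (an assumption, not something confirmed in the thread): docker-compose up reuses the image it built earlier, so the corrected CMD never reaches the container until the image is rebuilt. A malformed exec-form CMD falls back to shell form, where the stray [ is interpreted as the shell's test built-in, which is exactly what prints sh: missing ]. A minimal check:
# rebuild the web image so compose picks up the corrected CMD, then recreate
docker-compose build web
docker-compose up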

Docker runs of "pip install" and "npm install" on same container overwriting each other

In my Docker container, I'm trying to install several packages with pip along with installing Bower via npm. It seems, however, that whichever of pip or npm runs first, the other's contents in /usr/local/bin are overwritten (specifically, gunicorn is missing with the below Dockerfile, or Bower is missing if I swap the order of my FROM..RUN blocks).
Is this the expected behavior of Docker, and if so, how can I go about installing both my pip packages and Bower into the same directory, /usr/local/bin?
Here's my Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD ./requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD ./ /code/
FROM node:0.12.7
RUN npm install bower
Here's my docker-compose.yml file:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
    # - redis:redis
  volumes:
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
webstatic:
  restart: always
  build: .
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: bash -c "/code/manage.py bower install && /code/manage.py collectstatic --noinput"
nginx:
  restart: always
  # build: ./config/nginx
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
    - config/nginx/conf.d:/etc/nginx/conf.d
  volumes_from:
    - webstatic
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
Update: I went ahead and cross-posted this as a docker-compose issue since it's unclear if there is an actual bug or if my configuration is a problem. I'll keep both posts updated, but do post in either if you have an idea of what is going on. Thanks!
You cannot create an image from two different base images. When a Dockerfile has multiple FROM instructions, only the final stage ends up in the resulting image, which is why whichever tool ran in the earlier block appears to be overwritten. So if you need Node and Python in the same image, you should either add Node to the Python image or add Python to the Node image.
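A minimal sketch of the first option, starting from the question's python:3.4.3 base and installing Node from the distro repositories (the apt package names are assumptions; an older Debian base may need the NodeSource setup script instead):
FROM python:3.4.3
# Node and npm alongside Python, so pip- and npm-installed tools coexist in one image
RUN apt-get update \
 && apt-get install -y --no-install-recommends nodejs npm \
 && rm -rf /var/lib/apt/lists/*
RUN npm install -g bower
RUN mkdir /code
WORKDIR /code
ADD ./requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD ./ /code/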
