I am new to OpenShift Container Platform. I am trying to deploy my application, which uses Node, Redis and Mongo. I have written a Dockerfile and a docker-compose.yml, and I am able to run everything successfully on my local system. The challenge I am facing is deploying to OpenShift. Below are my Dockerfile and docker-compose.yml:
Dockerfile:
# Install node v10
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
# Copy application source
COPY . .
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD node app.js
docker-compose.yml:
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: .
    ports:
      - '52000:52000'
      # - '8080:8080'
      # - '4300:4300'
      # - '4301:4301'
    links:
      - redis
      - mongo
  mongo:
    container_name: myapp-mongo
    image: 'mongo:4'
    ports:
      - '28107:28107'
      # - '27017:27017'
  redis:
    container_name: myapp-redis
    image: 'redis:4.0.11'
    ports:
      - '6379:6379'
You could use kompose (https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/) to translate your docker-compose resources into Kubernetes manifests. I don't think you can deploy compose files directly without using other tools first.
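As a sketch of that workflow (assuming kompose and the oc CLI are installed; check kompose --help for the exact flags on your version):

```
# translate the compose file into Kubernetes manifests
kompose convert -f docker-compose.yml

# or target OpenShift objects (DeploymentConfig, ImageStream) directly
kompose convert --provider openshift -f docker-compose.yml

# then create the generated resources in your project
oc apply -f .
```

kompose writes one manifest file per service, which you can review and edit before applying.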
If you are more familiar with docker-compose, you can use Docker Swarm instead. Have a look at this, it might help (https://docs.docker.com/engine/swarm/stack-deploy/).
Related
I Dockerized a MENN (Next.js) stack app, and everything works fine. I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    networks:
      - mern-network
    volumes:
      - ./client/src:/usr/app/src
      - ./client/public:/usr/app/public
    depends_on:
      - server
    environment:
      - REACT_APP_SERVER=http://localhost:5000
      - CHOKIDAR_USEPOLLING=true
    command: npm run dev
    stdin_open: true
    tty: true
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    networks:
      - mern-network
    volumes:
      - ./server/src:/usr/app/src
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db:27017
      - CLIENT=http://localhost:3000
    command: /usr/app/node_modules/.bin/nodemon -L src/index.js
  db:
    image: mongo:latest
    ports:
      - 27017:27017
    networks:
      - mern-network
    volumes:
      - mongo-data:/data/db
networks:
  mern-network:
    driver: bridge
volumes:
  mongo-data:
    driver: local
Now if I install any packages using the host machine, package.json is updated as expected, and if I run
docker-compose build
package.json is also updated inside the container, which is fine. But I feel like this kind of breaks the whole point of having your app Dockerized! If multiple developers need to work on this app and they all need to install node/npm on their machines, what's the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh  # log into the server container
then run a command, e.g.
npm i express
It installs the package and updates package.json inside the container, but the host package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host source code into the container. Is there a way to synchronize the container and the host, so that if I install a package inside the container it also updates the host files? That way I wouldn't need node/npm installed locally, which fulfills the purpose of having the app Dockerized!
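One common approach (a sketch, not the only way) is to bind-mount the whole project directory into the container and keep node_modules in a named volume. Because package.json then lives on the bind mount, an npm install run inside the container updates the host copy too. Adapted to the server service above (paths assume the src/server layout shown earlier):

```yaml
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      # bind-mount the whole project, including package.json,
      # so edits made inside the container appear on the host
      - ./server:/usr/app
      # keep node_modules container-only via a named volume,
      # so host artifacts never shadow the container's modules
      - server_node_modules:/usr/app/node_modules
volumes:
  server_node_modules:
```

With this layout, `docker-compose exec server npm i express` updates the host package.json; the installed files themselves stay in the named volume, so other developers only need Docker, plus a one-time `npm install` run inside the container.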
I am new to Docker. I've built an application with Vue.js 2 that interacts with an external API. I would like to run the application in Docker.
Here is my docker-compose.yml file
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
Here is my Dockerfile:
FROM node:14.17.0-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node"]
Here is the command I run to build my image and container:
docker-compose up -d
The image and container build without error, but when I run the container it stops immediately, so it is not running. Are the Dockerfile and compose files set up correctly?
First of all, you run both npm install and yarn install, which do the same thing using different package managers. Secondly, you are using CMD ["node"], which does not start your Vue application, so there is no foreground process running and Docker shuts the container down.
For a Vue application you normally want to build the app into static assets and then run a simple HTTP server to serve the static content.
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy 'package.json' to install dependencies
COPY package*.json ./
# install dependencies
RUN npm install
# copy files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Your docker-compose file could be as simple as
version: "3.7"
services:
  vue-app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: vue-app
    restart: always
    ports:
      - "8080:8080"
    networks:
      - vue-network
networks:
  vue-network:
    driver: bridge
To run the service from docker-compose, use the command property in your docker-compose.yml:
services:
  vue-app:
    command: >
      sh -c "yarn serve"
I'm not sure about the problem, but using command: tail -f /dev/null in your docker-compose file will keep your container up so you can investigate the error inside it. You can do that by running docker exec -it <CONTAINER-NAME> bash and checking the error logs in your container.
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    command: tail -f /dev/null
    ports:
      - '8080:8080'
In your Dockerfile you have to start your application, e.g. with npm run start or whatever script your package.json uses to run the application.
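For example, the last line of the Dockerfile could start the dev server instead of a bare node (a sketch; the exact script name depends on your package.json):

```
# start the Vue dev server in the foreground so the container stays up
CMD ["npm", "run", "serve"]
```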
I have a docker-compose configuration for a MEAN app that's working fine.
I would like my Angular (ng serve) and Express (nodemon) servers to restart automatically when I hit Ctrl+S, as if I were running my app locally.
For that, my containers need to be aware that the files changed.
How can I do that?
Angular's Dockerfile:
FROM node:10
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 4200
CMD ["npm", "start"]
Express's Dockerfile:
FROM node:6
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml:
version: '3'
services:
  angular: # name of the first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "4200:4200"
  express: # name of the second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "3000:3000"
    links:
      - database
  database: # name of the third service
    image: mongo
    ports:
      - "27017:27017"
Both Angular and Express have a .dockerignore for node_modules.
If you are in a dev environment, you can add a volumes section to your docker-compose.yml as below:
services:
  angular: # name of the first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "4200:4200"
    volumes:
      - /path/in/host/machine:/path/in/container
  express: # name of the second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "3000:3000"
    links:
      - database
    volumes:
      - /path/in/host/machine:/path/in/container
  database: # name of the third service
    image: mongo
    ports:
      - "27017:27017"
Reference:
volumes in docker-compose
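As a concrete sketch for this setup (the container path matches the WORKDIR /usr/src/app in the Dockerfiles above; the host paths assume client/ and server/ directories next to the compose file):

```yaml
services:
  angular:
    build: client
    ports:
      - "4200:4200"
    volumes:
      # mount the source so ng serve sees file changes on save
      - ./client:/usr/src/app
      # anonymous volume so the container keeps its own node_modules
      - /usr/src/app/node_modules
```

Note that file watchers sometimes miss events through Docker mounts; if that happens, ng serve can be run with --poll and nodemon with -L (legacy polling).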
I keep getting errors that my modules don't exist when I'm running nodemon inside Docker and I save the node files. It takes a couple of saves before it throws the error. I have the volumes mounted as the answers here suggested, but I'm still getting the error and I'm not too sure what's causing it.
Here is my docker-compose.yml file.
version: "3.7"
services:
  api:
    container_name: api
    build:
      context: ./api
      target: development
    restart: on-failure
    ports:
      - "3000:3000"
      - "9229:9229"
    volumes:
      - "./api:/home/node/app"
      - "node_modules:/home/node/app/node_modules"
    depends_on:
      - db
    networks:
      - backend
  db:
    container_name: db
    command: mongod --noauth --smallfiles
    image: mongo
    restart: on-failure
    volumes:
      - "mongo-data:/data/db"
      - "./scripts:/scripts"
      - "./data:/data/"
    ports:
      - "27017:27017"
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  mongo-data:
  node_modules:
Here is my Dockerfile:
# Get current Node Alpine Linux image.
FROM node:alpine AS base
# Expose port 3000 for node.
EXPOSE 3000
# Set working directory.
WORKDIR /home/node/app
# Copy project content.
COPY package*.json ./
# Development environment.
FROM base AS development
# Set environment of node to development to trigger flag.
ENV NODE_ENV=development
# Express flag.
ENV DEBUG=app
# Run NPM install.
RUN npm install
# Copy source code.
COPY . /home/node/app
# Run the app.
CMD [ "npm", "start" ]
# Production environment.
FROM base AS production
# Set environment of node to production to trigger flag.
ENV NODE_ENV=production
# Run NPM install.
RUN npm install --only=production --no-optional && npm cache clean --force
# Copy source code.
COPY . /home/node/app
# Set user to node for better security.
USER node
# Run the app.
CMD [ "npm", "run", "start:prod" ]
Turns out I didn't put my .dockerignore in the proper folder. You're supposed to put it in the context folder.
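For reference, the file belongs next to the build context (./api in the compose file above), and a minimal version might look like:

```
node_modules
npm-debug.log
.git
```

This keeps the host's node_modules out of the build context, so COPY never drags host-built modules into the image.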
I am using Docker Compose and I have created a volume. I have multiple containers, and I am facing an issue running commands in a Docker container.
I have a Node.js container which has separate frontend and backend folders; I need to run npm install in both folders.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend
This is my Dockerfile for node:
FROM node:6.10
MAINTAINER JC Gil <sensukho@gmail.com>
ENV TERM=xterm
ADD script.sh /tmp/
RUN chmod 777 /tmp/script.sh
RUN apt-get update && apt-get install -y netcat-openbsd
WORKDIR /var/www/html/Backend
RUN npm install
EXPOSE 4000
CMD ["/bin/bash", "/tmp/script.sh"]
My workdir is empty, as the location /var/www/html/Backend is not available while building, only once the container is up. So my npm install command does not work.
What you probably want to do is to ADD or COPY the package.json file to the correct location, RUN npm install, then ADD or COPY the rest of the source into the image. That way, docker build will re-run npm install only when needed.
It would probably be better to run frontend and backend in separate containers, but if that's not an option, it's completely feasible to run the ADD package.json / RUN npm install / ADD . sequence once for each application.
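A sketch of that layering (base image and WORKDIR taken from the Dockerfile above; it assumes package.json sits in the build context rather than only in the mounted volume):

```
FROM node:6.10
WORKDIR /var/www/html/Backend
# copy only package.json first, so this layer stays cached
# until the dependency list actually changes
COPY package.json .
RUN npm install
# now copy the rest of the source; code edits no longer
# invalidate the npm install layer
COPY . .
```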
RUN is an image build step; at build time the volume isn't attached yet.
I think you have to execute npm install inside CMD.
You can try adding npm install inside /tmp/script.sh.
Let me know.
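For example, /tmp/script.sh could run the install at container start, once the volume is mounted (a sketch; the final node command is an assumption about how the app is started):

```
#!/bin/bash
# by the time CMD runs, the volume from the 'applications'
# container is attached, so this path exists
cd /var/www/html/Backend
npm install
# start the app in the foreground so the container stays up
node server.js
```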
As Tomas Lycken mentioned, copy the files and then run npm install. I have separate containers for frontend and backend. Most important are the node_modules for the frontend and backend: they need to be created as volumes in the services so that they are available when we bring the containers up.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
      - ${BACKEND}:/var/www/html/Backend
      - ${FRONTEND}:/var/www/html/Frontend
  apache:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
      - /var/www/html/Frontend/node_modules
      - /var/www/html/Frontend/bower_components
      - /var/www/html/Frontend/dist
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  node:
    build:
      context: ./node
    ports:
      - "4000:4000"
    volumes_from:
      - applications
    volumes:
      - /var/www/html/Backend/node_modules
    networks:
      - frontend
      - backend