I am using Docker Compose and I have created a volume. I have multiple containers, and I am having trouble running commands inside them.
I have a Node.js container with separate frontend and backend folders; I need to run npm install in both folders.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
  node:
    build:
      context: ./node
    volumes_from:
      - applications
    ports:
      - "4000:30001"
    networks:
      - frontend
      - backend
This is my Dockerfile for node:
FROM node:6.10
MAINTAINER JC Gil <sensukho@gmail.com>
ENV TERM=xterm
ADD script.sh /tmp/
RUN chmod 777 /tmp/script.sh
RUN apt-get update && apt-get install -y netcat-openbsd
WORKDIR /var/www/html/Backend
RUN npm install
EXPOSE 4000
CMD ["/bin/bash", "/tmp/script.sh"]
My WORKDIR is empty because the location /var/www/html/Backend does not exist while the image is building, but it is available once the container is up. So my npm install command does not work.
What you probably want to do is to ADD or COPY the package.json file to the correct location, RUN npm install, then ADD or COPY the rest of the source into the image. That way, docker build will re-run npm install only when needed.
It would probably be better to run the frontend and backend in separate containers, but if that's not an option, it's completely feasible to run the ADD package.json / RUN npm install / ADD . sequence once for each application.
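As a rough sketch of what that could look like (the Backend and Frontend paths are taken from your compose file; the exact COPY sources depend on your build context):
FROM node:6.10
# Copy only the manifest first, so the npm install layer stays cached
# until package.json actually changes
WORKDIR /var/www/html/Backend
COPY Backend/package.json ./
RUN npm install
# Same pattern for the frontend
WORKDIR /var/www/html/Frontend
COPY Frontend/package.json ./
RUN npm install
# Now bring in the rest of the source; edits here no longer
# invalidate the dependency layers above
COPY Backend /var/www/html/Backend
COPY Frontend /var/www/html/Frontend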
The RUN is an image build step; at build time the volume isn't attached yet.
I think you have to execute npm install inside CMD.
You can try adding npm install to /tmp/script.sh.
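For example, a minimal /tmp/script.sh along those lines (the npm start line is an assumption; use whatever actually starts your app):
#!/bin/bash
# Runs at container start, after the volume is mounted,
# so /var/www/html/Backend exists by the time we get here
cd /var/www/html/Backend
npm install
npm start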
Let me know
As Tomas Lycken mentioned, the fix is to copy the files and then run npm install. I have separate containers for the frontend and backend. Most important are the node_modules for the frontend and backend: declare them as volumes in the services so that they are available when the container is up.
version: '2'
services:
  ### Applications Code Container #############################
  applications:
    image: tianon/true
    volumes:
      - ${APPLICATION}:/var/www/html
      - ${BACKEND}:/var/www/html/Backend
      - ${FRONTEND}:/var/www/html/Frontend
  apache:
    build:
      context: ./apache2
    volumes_from:
      - applications
    volumes:
      - ${APACHE_HOST_LOG_PATH}:/var/log/apache2
      - ./apache2/sites:/etc/apache2/sites-available
      - /var/www/html/Frontend/node_modules
      - /var/www/html/Frontend/bower_components
      - /var/www/html/Frontend/dist
    ports:
      - "${APACHE_HOST_HTTP_PORT}:80"
      - "${APACHE_HOST_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  node:
    build:
      context: ./node
    ports:
      - "4000:4000"
    volumes_from:
      - applications
    volumes:
      - /var/www/html/Backend/node_modules
    networks:
      - frontend
      - backend
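A note on the bare container paths such as /var/www/html/Backend/node_modules: a volumes entry with only a container path creates an anonymous volume, which keeps the code mounted from the host from shadowing the node_modules already present at that path in the image. If you want to see where Docker put them, something like this works:
docker volume ls
docker inspect --format '{{ json .Mounts }}' <container>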
I Dockerized a MENN (Next.js) stack app, and now everything works fine, but I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
client:
build:
context: ./client
dockerfile: Dockerfile
ports:
- 3000:3000
networks:
- mern-network
volumes:
- ./client/src:/usr/app/src
- ./client/public:/usr/app/public
depends_on:
- server
environment:
- REACT_APP_SERVER=http://localhost:5000
- CHOKIDAR_USEPOLLING=true
command: npm run dev
stdin_open: true
tty: true
server:
build:
context: ./server
dockerfile: Dockerfile
ports:
- 5000:5000
networks:
- mern-network
volumes:
- ./server/src:/usr/app/src
depends_on:
- db
environment:
- MONGO_URL=mongodb://db:27017
- CLIENT=http://localhost:3000
command: /usr/app/node_modules/.bin/nodemon -L src/index.js
db:
image: mongo:latest
ports:
- 27017:27017
networks:
- mern-network
volumes:
- mongo-data:/data/db
networks:
mern-network:
driver: bridge
volumes:
mongo-data:
driver: local
Now if I install any packages from the host machine, package.json is updated as expected, and if I run
docker-compose build
the package.json inside the container is also updated, which is fine. But I feel like this kind of breaks the whole point of having your app Dockerized! If multiple developers need to work on this app and they all have to install Node/npm on their machines, what's the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh  # log into the server container
run a command, e.g.
npm i express
It installs the package and updates the container's package.json, but the host package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host's source code into the container. Is there a way to synchronize the container and the host, so that if I install a package inside my container it also updates the host files? That way I wouldn't need Node/npm installed locally, which fulfills the purpose of having the app Dockerized!
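One pattern worth sketching here (untested against your exact setup, and the volume name server_node_modules is made up for illustration): bind-mount the whole app directory instead of only src, so the package.json inside the container is the host's file, and keep node_modules in a named volume so they never land in the host tree:
services:
  server:
    build:
      context: ./server
    volumes:
      - ./server:/usr/app                          # whole project: package.json edits sync both ways
      - server_node_modules:/usr/app/node_modules  # modules live in a volume, not on the host
volumes:
  server_node_modules:
With this, an npm i express run inside the container updates the host's package.json directly, and only the installed files stay container-side.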
I am new to Docker. I've built an application with Vue.js 2 that interacts with an external API. I would like to run the application in Docker.
Here is my docker-compose.yml file
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
Here is my Dockerfile:
FROM node:14.17.0-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node"]
Here is the command I run to build my image and container:
docker-compose up -d
The image and container build without error, but when I run the container it stops immediately, so the container is not running.
Are the Dockerfile and compose files set up correctly?
First of all, you run both npm install and yarn install, which do the same thing, just using different package managers. Secondly, you are using CMD ["node"], which does not start your Vue application, so there is no job running and Docker shuts the container down.
For a Vue application you normally want to build the app into static assets and then run a simple HTTP server to serve the static content.
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy 'package.json' to install dependencies
COPY package*.json ./
# install dependencies
RUN npm install
# copy files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Your docker-compose file could be as simple as
version: "3.7"
services:
vue-app:
build:
context: .
dockerfile: Dockerfile
container_name: vue-app
restart: always
ports:
- "8080:8080"
networks:
- vue-network
networks:
vue-network:
driver: bridge
To run the service from docker-compose, use the command property in your docker-compose.yml:
services:
  vue-app:
    command: >
      sh -c "yarn serve"
I'm not sure about the problem, but by using command: tail -f /dev/null in your docker-compose file you can keep your container up, so you can get inside it and track down the error. You can do that by running docker exec -it <CONTAINER-NAME> bash and checking the error logs in your container.
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    command: tail -f /dev/null
    ports:
      - '8080:8080'
In your Dockerfile you have to start your application, e.g. with npm run start or whatever script in your package.json you use to run the application.
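For a quick look at why a container exits, you can also read its logs from outside, using the container name from the compose file above:
docker logs ew_cp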
I'm new to Rasa and Docker, and I want to deploy my Rasa project with Docker. I can't find the right flow for deployment. Here is what I understood from blogs and Docker videos, and what I tried.
First step: I have to create a Docker image containing the project source and requirements.
Dockerfile
FROM rasa/rasa
COPY . /chatbot
WORKDIR /chatbot
RUN pip install -r requirements.txt
USER root
COPY ./actions /app/actions
USER 1001
requirements.txt
pyaml
flask
requests
spacy
rasa-nlu
rasa-core
rasa-core-sdk
Second step: create a docker-compose.yml
version: "3.0"
services:
rasa:
image: rasa/rasa:2.6.3-full
ports:
- 5005:5005
volumes:
- ./:/app
command:
- run
- -m
- models
- --enable-api
- --cors
- "*"
- --debug
action_server:
image: rasa/rasa_core_sdk:latest
ports:
- 5055:5055
volumes:
- ./actions:/app/actions
command:
- rasa
- run
- actions
Can anyone tell me the right flow for deployment?
Have docker and docker-compose installed on your system.
Create the first Dockerfile (just a new file named "Dockerfile") in the root directory of the project; it should have this content -
FROM rasa/rasa:2.8.0
WORKDIR '/app'
COPY . /app
USER root
# WORKDIR /app
# COPY . /app
COPY ./data /app/data
RUN rasa train
VOLUME /app
VOLUME /app/data
VOLUME /app/models
CMD ["run","-m","/app/models","--enable-api","--cors","*","--debug" ,"--endpoints", "endpoints.yml", "--log-file", "out.log", "--debug"]
Note that you can change the rasa version here.
Create a second Dockerfile in the actions folder and place this content -
FROM rasa/rasa-sdk:2.8.0
WORKDIR /app
COPY requirements.txt requirements.txt
USER root
RUN pip install --verbose -r requirements.txt
EXPOSE 5055
USER 1001
Note that you have to put a requirements.txt file, containing the Python libraries you installed and used, in the actions folder.
So, those were all the Dockerfiles we needed. Now, in the root directory, you can see a file called endpoints.yml. Change the name localhost to action_server (we'll register this name in the docker-compose file as the action server's container) in the action_endpoint, so it looks like -
action_endpoint:
  url: "http://action_server:5055/webhook"
Finally, create a docker-compose.yml file in the root directory and place this content -
version: '3'
services:
  rasa:
    container_name: "rasa_server"
    user: root
    build:
      context: .
    volumes:
      - "./:/app"
    ports:
      - "5005:5005"
  action_server:
    container_name: "action_server"
    build:
      context: actions
    volumes:
      - ./actions:/app/actions
      - ./data:/app/data
    ports:
      - 5055:5055
After all this you can just run the command
docker-compose up --build
credits: https://forum.rasa.com/t/dockerizing-my-rasa-chatbot-application-that-has-botfront/46096/27
It looks correct to me. I followed the same structure, except for the command in action_server (inside docker-compose.yml). I skipped the command part and it was working fine.
action-server:
  image: rasa/rasa-sdk:1.10.2
  volumes:
    - ./actions:/app/actions
  ports:
    - 5055:5055
And in the endpoints URL, it is just the name of the service, i.e. 'action_server', that goes in.
That looks like a correct way to deploy your bot using docker. What is the issue you're facing?
I am new to the OpenShift Container Platform. I am trying to deploy my application, which uses Node, Redis and Mongo. I have written a Dockerfile and docker-compose.yml, and I am able to run it successfully on my local system. The challenge I am facing is deploying to OpenShift. Below are my Dockerfile and docker-compose.yml:
Dockerfile:
# Install node v10
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
# Copy application source
COPY . .
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD node app.js
docker-compose.yml:
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: .
    ports:
      - '52000:52000'
      # - '8080:8080'
      # - '4300:4300'
      # - '4301:4301'
    links:
      - redis
      - mongo
  mongo:
    container_name: myapp-mongo
    image: 'mongo:4'
    ports:
      - '28107:28107'
      # - '27017:27017'
  redis:
    container_name: myapp-redis
    image: 'redis:4.0.11'
    ports:
      - '6379:6379'
You could use kompose (https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/) to translate your docker-compose resources to Kubernetes manifests. I don't think you can deploy compose files directly without using other tools first.
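As a sketch, the usual kompose flow looks something like this (the oc apply step assumes you are already logged into your OpenShift cluster):
kompose convert -f docker-compose.yml
# kompose writes one manifest per service; apply them to the cluster
oc apply -f .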
You can use Docker Swarm if you are more familiar with docker-compose.
Have a look at this, it might help (https://docs.docker.com/engine/swarm/stack-deploy/).
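A minimal sketch of that flow (the stack name myapp is arbitrary):
# turn the current engine into a single-node swarm
docker swarm init
# deploy the compose file as a swarm stack
docker stack deploy -c docker-compose.yml myapp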
I am trying to get webpack set up in my Docker container. It is working and running, but when I save on my local computer it does not update the files in my container. I have the following docker-compose file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    container_name: arc-bis-www-web
    restart: on-failure:3
    environment:
      FPM_HOST: 'php'
    ports:
      - 8080:8080
    volumes:
      - ./app:/usr/local/src/app
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      CRM_HOST: '192.168.1.79'
      CRM_NAME: 'ARC_test_8_8_17'
      CRM_PORT: '1433'
      CRM_USER: 'sa'
      CRM_PASSWORD: 'Multi*Gr4in'
    volumes:
      - ./app:/usr/local/src/app
  node:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
    container_name: arc-bis-www-node
    volumes:
      - ./app:/usr/local/src/app
and my node container is built from the following Dockerfile:
FROM node:8
RUN useradd --create-home user
RUN mkdir /usr/local/src/app
RUN mkdir /usr/local/src/app/src
RUN mkdir /usr/local/src/app/test
WORKDIR /usr/local/src/app
# Copy application source files
COPY ./app/package.json /usr/local/src/app/package.json
COPY ./app/.babelrc /usr/local/src/app/.babelrc
COPY ./app/webpack.config.js /usr/local/src/app/webpack.config.js
COPY ./app/test /usr/local/src/app/test
RUN chown -R user:user /usr/local/src/app
USER user
RUN npm install
ENTRYPOINT ["npm"]
Now, I have taken the COPY calls out of the above and it still runs fine, but neither option lets me save files locally and have them show up on localhost from my container. Ideally, I thought having a volume would allow me to update my local files and have the changes read by the volume in the container. Does that make sense? I am still feeling my way around Docker. Thanks in advance for any help.
If you start your container with the -v flag, you can map your local storage into the container. You can find more information here.
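For example, something along these lines mirrors the volume already declared in the compose file (the image name my-node-image and the dev script are illustrative; the Dockerfile's ENTRYPOINT is npm, so the arguments become npm arguments):
# map the local ./app folder over /usr/local/src/app in the container
docker run -v "$(pwd)/app:/usr/local/src/app" my-node-image run dev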