I'm new to Rasa and Docker, and I want to deploy my Rasa project with Docker, but I can't find the right flow for deployment. Here is what I understood from blogs and Docker videos, and what I tried.
First step: I have to create a Docker image that contains the project source and requirements.
Dockerfile
FROM rasa/rasa
COPY . /chatbot
WORKDIR /chatbot
RUN pip install -r requirements.txt
USER root
COPY ./actions /app/actions
USER 1001
requirements.txt
pyaml
flask
requests
spacy
rasa-nlu
rasa-core
rasa-core-sdk
Second step: create docker-compose.yml
version: "3.0"
services:
  rasa:
    image: rasa/rasa:2.6.3-full
    ports:
      - 5005:5005
    volumes:
      - ./:/app
    command:
      - run
      - -m
      - models
      - --enable-api
      - --cors
      - "*"
      - --debug
  action_server:
    image: rasa/rasa_core_sdk:latest
    ports:
      - 5055:5055
    volumes:
      - ./actions:/app/actions
    command:
      - rasa
      - run
      - actions
Can anyone tell me the right flow for deployment?
Install docker and docker-compose on your system.
Create the first Dockerfile (just a new file named “Dockerfile”) in the root directory of the project; it should have this content -
FROM rasa/rasa:2.8.0
WORKDIR /app
COPY . /app
USER root
RUN rasa train
VOLUME /app
VOLUME /app/data
VOLUME /app/models
CMD ["run", "-m", "/app/models", "--enable-api", "--cors", "*", "--endpoints", "endpoints.yml", "--log-file", "out.log", "--debug"]
Note that you can change the rasa version here.
Create the second Dockerfile in the actions folder and place this content -
FROM rasa/rasa-sdk:2.8.0
WORKDIR /app
COPY requirements.txt requirements.txt
USER root
RUN pip install --verbose -r requirements.txt
EXPOSE 5055
USER 1001
Note that you have to put a requirements.txt file inside the actions folder, listing the Python libraries your custom actions use.
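For illustration, if your custom actions call an external HTTP API, a minimal actions/requirements.txt might contain just (the package choice here is an example, not from the original post):

requests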
So, that was all the Dockerfiles we needed. Now, in the root directory, you can see a file called endpoints.yml. In the action_endpoint, change the name localhost to action_server (we'll register this name in the docker-compose file for the action server's container), so it looks like -
action_endpoint:
  url: "http://action_server:5055/webhook"
Finally, create a docker-compose.yml file in the root directory and place this content -
version: '3'
services:
  rasa:
    container_name: "rasa_server"
    user: root
    build:
      context: .
    volumes:
      - "./:/app"
    ports:
      - "5005:5005"
  action_server:
    container_name: "action_server"
    build:
      context: actions
    volumes:
      - ./actions:/app/actions
      - ./data:/app/data
    ports:
      - 5055:5055
After all this, you can just run the command
docker-compose up --build
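Once both containers are up, you can sanity-check the bot over the REST channel with curl (this assumes the rest channel is enabled in your credentials.yml):

curl -X POST http://localhost:5005/webhooks/rest/webhook \
  -H "Content-Type: application/json" \
  -d '{"sender": "test_user", "message": "hi"}'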
credits: https://forum.rasa.com/t/dockerizing-my-rasa-chatbot-application-that-has-botfront/46096/27
It looks correct to me. I followed the same structure, except for the command section of action_server (inside docker-compose.yml). I skipped the command part and it was working fine:
action-server:
  image: rasa/rasa-sdk:1.10.2
  volumes:
    - ./actions:/app/actions
  ports:
    - 5055:5055
And in the endpoints URL, only the name of the service (i.e. action-server) should appear.
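For the service definition above, the matching endpoints.yml entry would be -

action_endpoint:
  url: "http://action-server:5055/webhook"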
That looks like a correct way to deploy your bot using docker. What is the issue you're facing?
Related
I am using Docker Toolbox on Windows 10 Home. I am not able to see changes to my code inside the Docker container.
My docker-compose.yml file looks like this
version: "3.7"
services:
  flask:
    build: ./flask
    container_name: flask
    restart: always
    environment:
      - APP_NAME=MyFlaskApp
    expose:
      - 8080
    volumes:
      - ./flask:/app
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
And my Dockerfile looks like this
FROM python:3.7.2-stretch
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install the dependencies
RUN pip install -r requirements.txt
# run the command to start uWSGI
CMD ["uwsgi", "app.ini"]
My folder structure is like this
- project
  - flask
    - app.ini
    - Dockerfile
    - requirements.txt
    - run.py
  - nginx
    - Dockerfile
    - nginx.conf
I'm pretty sure everything is in order here, but I still can't make live changes to the server.
Hello, I tried to build my project with docker-compose, with this file structure:
app/
  front-end/src/Components
  back-end/images
but when I run the build I get this error about a relative image URL:
frontend_1 | Module not found: Can't resolve '../../../../../back-end/images'
And this is my docker-compose file:
version: '2'
services:
  backend:
    network_mode: host
    build: ./back-end/
    ports:
      - "6200:6200"
    volumes:
      - ./back-end:/usr/src/app
  frontend:
    build: ./front-end/
    ports:
      - "3000:3000"
    volumes:
      - ./front-end:/usr/src/app
    depends_on:
      - backend
My frontend Dockerfile:
FROM node:10.15.3
RUN mkdir -p /usr/src/app
WORKDIR /TuKanasta
EXPOSE 3000
CMD ["npm", "start"]
The backend Dockerfile:
FROM node:10.15.3
RUN mkdir -p /usr/src/app
WORKDIR /TuKanasta
RUN npm install -g nodemon
EXPOSE 4000
CMD [ "npm", "start" ]
Note: my project runs 100% fine without Docker.
volumes:
  - ./back-end:/usr/src/app
...
volumes:
  - ./front-end:/usr/src/app
If set in the same image, the second bind mount would overwrite the first /usr/src/app content, as illustrated in gladiusio/gladius-archive-node issue 4.
If set in two different images, /usr/src/app in frontend_1 would not be able to see back-end, which is copied into the separate /usr/src/app volume of the backend service.
Declaring the volume as external might help, as illustrated in this thread.
Or copy into an existing volume (shown here).
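A minimal sketch of the shared named-volume approach (the volume name shared-images and the exact mount paths are assumptions, not taken from the original project):

version: '2'
volumes:
  shared-images:
services:
  backend:
    build: ./back-end/
    volumes:
      # on first use, docker copies the image's content at this path into the volume
      - shared-images:/usr/src/app/images
  frontend:
    build: ./front-end/
    volumes:
      # the frontend then sees the same files under its own source tree
      - shared-images:/usr/src/app/src/images
    depends_on:
      - backend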
I'm trying to containerize two services: a socket service and a Django application.
My file structure is
\main file {docker-compose file}
  \django application {Dockerfile}
  \socket app {Dockerfile}
When I run docker build . it builds the image.
Then when I run docker-compose build, I notice that both the socket app and the Django app are copied into the container, instead of only the Django application as specified by the Dockerfile.
I get the impression that the Dockerfile is executed in the main directory instead of the django directory?
Here is the Dockerfile that is inside the Django application:
# Pull base image
FROM python:3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
RUN ls
And here is the docker-compose file. Using the ls command I tried to figure out what happened, and the output shows that the applications from the main folder are copied instead of only the Django application.
version: '3'
services:
  db:
    image: postgres:10.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: ./django_app
    command: ls /code/
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
volumes:
  postgres_data:
Is this the intended behaviour, or am I doing something wrong?
The volumes: directive in your docker-compose.yml file is hiding literally everything your Dockerfile does. You'll solve your immediate problem by changing the two directories to match: in the volumes: directive, bind-mount ./django_app:/code.
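A sketch of that one-line change in the web service (everything else stays as in your file):

web:
  build: ./django_app
  command: ls /code/
  volumes:
    # mount the same directory the image was built from
    - ./django_app:/code
  ports:
    - 8000:8000
  depends_on:
    - db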
In a more production-oriented workflow, I'd recommend making your Docker image totally self-contained: make sure it has a CMD that runs your application, and do not use volumes: to inject your code. Delete command: and volumes: from the docker-compose.yml and let the image provide its own code and default command. (To do development, use a Python virtual environment for local code isolation, and make sure all of your tests and a basic hand-run workflow pass before using Docker for anything.)
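Under that workflow, the service definition shrinks to something like this (assuming your Dockerfile ends with a CMD that starts the Django server):

web:
  build: ./django_app
  ports:
    - 8000:8000
  depends_on:
    - db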
I am trying to get webpack set up in my Docker container. It is working and running, but when I save files on my local computer, they are not updated in my container. I have the following docker-compose file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    container_name: arc-bis-www-web
    restart: on-failure:3
    environment:
      FPM_HOST: 'php'
    ports:
      - 8080:8080
    volumes:
      - ./app:/usr/local/src/app
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    environment:
      CRM_HOST: '192.168.1.79'
      CRM_NAME: 'ARC_test_8_8_17'
      CRM_PORT: '1433'
      CRM_USER: 'sa'
      CRM_PASSWORD: 'Multi*Gr4in'
    volumes:
      - ./app:/usr/local/src/app
  node:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
    container_name: arc-bis-www-node
    volumes:
      - ./app:/usr/local/src/app
and my node container is built from the following Dockerfile:
FROM node:8
RUN useradd --create-home user
RUN mkdir /usr/local/src/app
RUN mkdir /usr/local/src/app/src
RUN mkdir /usr/local/src/app/test
WORKDIR /usr/local/src/app
# Copy application source files
COPY ./app/package.json /usr/local/src/app/package.json
COPY ./app/.babelrc /usr/local/src/app/.babelrc
COPY ./app/webpack.config.js /usr/local/src/app/webpack.config.js
COPY ./app/test /usr/local/src/app/test
RUN chown -R user:user /usr/local/src/app
USER user
RUN npm install
ENTRYPOINT ["npm"]
Now I have taken the COPY calls out of the Dockerfile above and it still runs fine, but neither option allows me to save files locally and have them show up on localhost from my container. Ideally, I thought having a volume would let me update my local files and have the container read the changes through the volume. Does that make sense? I am still feeling my way around Docker. Thanks in advance for any help.
If you start your container with the -v flag, you can map a directory on your local storage into the container. You can find more information here.
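For example (a sketch: the image name my-node-image and the dev script are placeholders, since the image your compose file builds will have a project-specific name):

# bind-mount the local ./app folder over /usr/local/src/app in the container,
# so edits on the host are immediately visible inside; since the image's
# ENTRYPOINT is npm, the trailing arguments become "npm run dev"
docker run --rm -v "$(pwd)/app:/usr/local/src/app" my-node-image run dev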
I'm trying to have one service build my client side and then share it with the server using a named volume. Every time I do docker-compose up --build, I want the client side to be rebuilt and to update the named volume clientapp. How do I do that?
docker-compose.yml
version: '2'
volumes:
  clientapp:
services:
  database:
    image: mongo:3.4
    volumes:
      - /data/db
      - /var/lib/mongodb
      - /var/log/mongodb
  client:
    build: ./client
    volumes:
      - clientapp:/usr/src/app/client
  server:
    build: ./server
    ports:
      - "3000:3000"
    environment:
      - DB_1_PORT_27017_TCP_ADDR=database
    volumes:
      - clientapp:/usr/src/app/client
    depends_on:
      - client
      - database
client Dockerfile
FROM node:6
ENV NPM_CONFIG_LOGLEVEL warn
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
# builds my application into /client
CMD ["npm", "build"]
By definition, volumes are persistent directories that docker won't touch, other than performing an initial copy into them when they are empty. If this is your code, it probably shouldn't be in a volume.
With that said, you can:
- Delete the volume between runs with docker-compose down -v; it will be recreated and initialized on the next docker-compose up -d.
- Change your container startup scripts to copy the files from some other directory in the image to the volume location on startup (see the sketch below).
- Get rid of the volume and include the code directly in the image.
I'd recommend the last option.
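If you do need the second option instead, the idea is roughly this entrypoint script (a sketch; the /usr/src/app/dist build directory and the docker-entrypoint.sh name are assumptions, not from your project):

#!/bin/sh
# docker-entrypoint.sh: copy the build output baked into the image
# into the shared volume mounted at /usr/src/app/client, then hand off
# to whatever command the container was started with
cp -r /usr/src/app/dist/. /usr/src/app/client/
exec "$@"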
Imagine you shared your src folder like this:
...
volumes:
  - ./my_src:/path/to/docker/src
...
What worked for me was to chown the my_src folder:
chown $USER:$USER -R my_src
It turned out some files were created by root and couldn't be modified by docker.
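As a quick check afterwards:

ls -l my_src   # files should now be owned by your user, not root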
Hope it helps!