I have a docker-compose.yml file with the following content:
version: '2'
services:
  MongoDB:
    image: mongo
  Parrot-API:
    build: ./Parrot-API
    image: sails-js:dev
    volumes:
      - "/user/Code/node/Parrot-API:/host"
    command: bash -c "cd /host && sails lift"
    links:
      - MongoDB:MongoDB
    ports:
      - "3050:1337"
The file basically runs two containers: MongoDB and a web app (in directory ./Parrot-API) built with Sails.js. However, when I run docker-compose up in the terminal, I get this error:
Parrot-API_1 | bash: sails: command not found
node_Parrot-API_1 exited with code 127
Note that Sails.js is a Node.js web framework, and sails lift starts the app on port 1337.
I have done some Google searching and found some similar questions, but they were not helpful in my case.
btw, I have the following Dockerfile in the Parrot-API folder:
FROM sails-js:dev
VOLUME /host
WORKDIR /host
RUN rm -rf node_modules && \
    echo "hello world!" && \
    pwd && \
    ls -lrah
EXPOSE 1337
CMD npm install -g sails && npm install && sails lift
The file structure is following:
|- docker-compose.yml
|- Parrot-API/Dockerfile
|- Parrot-API/app.js, etc..
It is clear to me that the Parrot-API container exits immediately because the sails lift command cannot be executed, but how can I make the container work? Thanks!
You showed a docker-compose.yml that builds a sails-js:dev image, and a Dockerfile that is based on the sails-js:dev image. That is circular: the image is built from itself.
Your Dockerfile also ends with a CMD, rather than an ENTRYPOINT or a RUN, that does the npm install of sails. Because it is a CMD and not a RUN, sails is never installed into the image; the install only happens when the container starts, and only if you don't run the container with a command of your own, which is exactly what the custom command: in your docker-compose.yml does.
The fix is to give the Dockerfile a proper base image and change the CMD to a RUN. I also see a few other mistakes, like declaring a volume and then modifying its contents; changes made to a path after it is declared as a VOLUME are discarded. The FROM node is just a guess based on your npm commands, so feel free to adjust:
FROM node
RUN mkdir -p /host && cd /host && npm install -g sails && npm install
EXPOSE 1337
WORKDIR /host
VOLUME /host
CMD sails lift
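With sails installed at build time, the docker-compose.yml should no longer need its custom command:; the Dockerfile's CMD sails lift runs by default. A sketch, with service names and paths taken from the question:

```yaml
version: '2'
services:
  MongoDB:
    image: mongo
  Parrot-API:
    build: ./Parrot-API
    image: sails-js:dev
    volumes:
      - "/user/Code/node/Parrot-API:/host"
    links:
      - MongoDB:MongoDB
    ports:
      - "3050:1337"
```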
Related
I'm deploying an application with a Dockerfile and docker-compose. It loads a model from an AWS bucket to run the application. When the containers get restarted (not intentionally, but by the cloud provider), the model is loaded from AWS again. What I would like to achieve is to store the model on a persistent volume and, on a restart, check whether the volume exists and is not empty; if so, run a different docker-compose file with a different bash command that does not load the model from AWS again.
This is part of my docker-compose.yml:
rasa-server:
  image: rasa-bot:latest
  working_dir: /app
  build:
    context: ./
    dockerfile: Dockerfile
  volumes:
    - ./models:/app/models
  command: bash -c "rasa run --model model.tar.gz --remote-storage aws --endpoints endpoints.yml --credentials credentials.yml --enable-api --cors \"*\" --debug --port 5006"
In case of a restart the command would look like this:
command: bash -c "rasa run --model model.tar.gz --endpoints endpoints.yml --credentials credentials.yml --enable-api --cors \"*\" --debug --port 5006"
Note that this
--remote-storage aws
was removed.
This is my Dockerfile:
FROM python:3.7.7-stretch AS BASE
RUN apt-get update \
    && apt-get --assume-yes --no-install-recommends install \
        build-essential \
        curl \
        git \
        jq \
        libgomp1 \
        vim
WORKDIR /app
RUN pip install --no-cache-dir --upgrade pip
RUN pip install rasa==3.1
RUN pip3 install boto3
ADD . .
I know that I can use this:
docker volume ls
to list volumes, but I do not know how to wrap this in an if condition to check whether
- ./models:/app/models
exists and is not empty, and if it is not empty, run a second docker-compose.yml which contains the second, modified bash command.
I would accomplish this by making the main container command actually be a script that looks to see if the file exists and optionally fills in the command line argument.
#!/bin/sh
REMOTE_STORAGE=$(test -f /app/models/model.tar.gz || printf '%s' '--remote-storage aws')
exec rasa run \
    --model model.tar.gz \
    $REMOTE_STORAGE \
    --endpoints endpoints.yml \
    ...
The first line uses the test(1) shell command to see if the model file already exists, and sets the variable REMOTE_STORAGE to --remote-storage aws when it does not, and to an empty string when it does. In the command, the unquoted $REMOTE_STORAGE expansion is then word-split into the two words of that option, or expands to nothing at all. (This approach is inspired by BashFAQ/050.)
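The pattern can be demonstrated standalone, apart from rasa and Docker, by toggling the flag on a temporary file; the file and directory names below are illustrative:

```shell
#!/bin/sh
# Toggle an optional CLI flag based on whether a file exists.
tmpdir=$(mktemp -d)

# File absent: the test fails, so the fallback flag text is emitted.
flag=$(test -f "$tmpdir/model.tar.gz" || printf '%s' '--remote-storage aws')
echo "missing: [$flag]"    # missing: [--remote-storage aws]

# File present: the test succeeds and the flag stays empty.
touch "$tmpdir/model.tar.gz"
flag=$(test -f "$tmpdir/model.tar.gz" || printf '%s' '--remote-storage aws')
echo "present: [$flag]"    # present: []

rm -r "$tmpdir"
```

Because the expansion is left unquoted on purpose, the flag must not contain paths with spaces; for the fixed two-word option here that is safe.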
In your Dockerfile, COPY this script into your image and make it the default CMD. It needs to be executable like any other script (run chmod +x on your host and commit that change to source control); since it is executable and begins with a "shebang" line, you do not need to explicitly name the shell when you run it.
...
COPY rasa-run ./
CMD ["./rasa-run"]
In your Compose file, you do not need to override the command:, change the working_dir: from what the Dockerfile sets, or change a couple of other Compose-provided defaults. You should be able to reduce it to:
version: '3.8'
services:
  rasa-server:
    build: .
    volumes:
      - ./models:/app/models
More generally, for this class of question, I might suggest:
Prefer setting a default CMD in the Dockerfile to overriding command: in Compose; and
Write out non-trivial logic in a script and run that script as the main container command; don't write complicated conditionals in an inline CMD or command:.
You could have an if statement in your bash command that chooses whether to use AWS depending on the result you get from docker volume ls.
Using -f name= you can filter on the volume name, then check whether the result is empty and run a different command accordingly.
Note that this command is just an example; I have no idea if it works or not, as I don't use bash every day.
command: >
  bash -c '
  VOLUME=$(docker volume ls -q -f name=FOO);
  if [ -z "$VOLUME" ]; then
    rasa run --model model.tar.gz --remote-storage aws --endpoints endpoints.yml --credentials credentials.yml --enable-api --cors "*" --debug --port 5006;
  else
    rasa run --model model.tar.gz --endpoints endpoints.yml --credentials credentials.yml --enable-api --cors "*" --debug --port 5006;
  fi'
I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install
RUN apt-get install -y \
    curl \
    gcc \
    make \
    python3-psycopg2 \
    postgresql-client \
    libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
  postgresdata:
services:
  myapp:
    image: ralston3/myapp_api:prod-latest
    tty: true
    command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
    ports:
      - 8000:8000
    volumes:
      - .:/var/www/myapp
    environment:
      - SOME_ENV_VARS=SOME_VARIABLE
      # ... more here
    depends_on:
      - redis
      - postgresql
  # ... other docker services defined below
When I run docker-compose up via:
docker-compose -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127 with another error mentioning myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash I can clearly see that all of my files are there. I can literally run the /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, while docker-compose exits with 127 saying "No such file or directory".
You are bind-mounting the current directory onto /var/www/myapp, so your local directory may be hiding/overwriting the container's directory. Try removing the volumes declaration for your myapp service; if that works, you know the bind mount is causing the issue.
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, above and beyond the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies, like psycopg2.
See https://pythonspeed.com/articles/official-python-docker-image/ for explanation why you don't need to do this.
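A minimal sketch of that suggestion, keeping the python base image and letting pip pull in the database driver; psycopg2-binary and the requirements.txt name are assumptions:

```dockerfile
FROM python:3.8-slim-buster
WORKDIR /var/www/myapp
# psycopg2-binary ships as a prebuilt wheel, so no compilers or apt
# packages are needed on the slim image.
COPY requirements.txt .
RUN pip install --no-cache-dir psycopg2-binary -r requirements.txt
COPY . .
RUN chmod 700 ./scripts/*.sh
```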
In my case there were two stages: builder and runner.
I built an executable in the builder stage and ran it with an alpine image in the runner stage.
My mistake was not using an alpine variant for the builder: I used golang:1.20, but when I switched to golang:1.20-alpine the problem went away (a binary built on a glibc-based image generally won't run on musl-based alpine unless it is statically linked).
Make sure you use the correct version and tag!
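For reference, a matched-base multi-stage Dockerfile along the lines described above; the paths, Go version, and alpine tag are illustrative:

```dockerfile
# Build stage: the alpine variant, so the produced binary links against
# the same (musl) C library as the runtime image.
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Runtime stage: a plain alpine image that only carries the executable.
FROM alpine:3.18
COPY --from=builder /bin/app /usr/local/bin/app
CMD ["app"]
```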
I want to start a Python container dependent on a database container. But I would like the Python container to start only after the sql server container has fully executed. I built this docker-compose.yml file ...
version: "3.2"
services:
  sql-server-db:
    restart: always
    build: ./
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    env_file: /Users/davea/my_project/api/tests/.test_env
    ports:
      - 3900:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=password
      - DB_HOST=0.0.0.0
      - DB_NAME=my_db
      - DB_USER=SA
      - DB_PASS=password
    volumes:
      - ../../CloudDB/CloudDB:/sqlscripts
  python:
    restart: always
    build: ../
    environment:
      DEBUG: 'true'
    volumes:
      - /Users/davea/my_project/api:/my-app
    depends_on:
      - sql-server-db
Below is my Dockerfile for the sql server container ...
FROM microsoft/mssql-server-linux:latest
RUN apt-get update
RUN apt-get install unzip -y
RUN apt-get install tzdata
ENV TZ=America/New_York
RUN ln -fs /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure -f noninteractive tzdata
RUN date
RUN echo "========="
# Install sqlpackage, needed for deploying the dacpac file
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=873926 \
    && unzip -qq sqlpackage.zip -d /opt/sqlpackage \
    && chmod +x /opt/sqlpackage/sqlpackage
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all SQL scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
RUN pwd
CMD /bin/bash ./entrypoint.sh
but I'm noticing something strange: the SQL Server container does not seem to fully execute the commands in the entrypoint.sh file. I see this output ...
...
Removing intermediate container 72550d896ede
---> ae6b93ca884e
Step 14/15 : RUN pwd
---> Running in f229ef6fec4c
/usr/work
Removing intermediate container f229ef6fec4c
---> 7758242bbd95
Step 15/15 : CMD /bin/bash ./entrypoint.sh
---> Running in 76fa5c8308e3
Removing intermediate container 76fa5c8308e3
---> 567633ad757f
Successfully built 567633ad757f
Successfully tagged microsoft/mssql-server-linux:2017-latest
WARNING: Image for service sql-server-db was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building python
Step 1/17 : FROM python:3.8-slim
Below are the contents of the entrypoint.sh file. Is there another way I can structure things so that the commands are executed? I'm also noticing that the Python container doesn't seem to recognize the SQL Server container.
#!/bin/bash -l
/usr/work/import-data.sh & /opt/mssql/bin/sqlservr
Is there somethign else I need to do to get the shell script from my sql server container to fully execute?
Your usage of the depends_on option is incorrect, or perhaps not working the way you intend it to.
See the documentation of depends_on: it clearly states that it does not wait for a database to be ready in the case of SQL servers; depends_on only waits until the service has been started.
depends_on does not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
You would benefit from creating some sort of manual "wait-for-it" script (as seen in this docker-compose example) that runs before starting the python container.
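Alternatively, a healthcheck plus a depends_on condition can express this ordering declaratively, without a hand-rolled wait script. A sketch using the service names from the question; the sqlcmd probe path and credentials are assumptions:

```yaml
services:
  sql-server-db:
    # ... build, image, environment as in the question ...
    healthcheck:
      test: ["CMD", "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "SA", "-P", "password", "-Q", "SELECT 1"]
      interval: 10s
      retries: 10
  python:
    depends_on:
      sql-server-db:
        condition: service_healthy
```

Note that condition: service_healthy is supported in the version 2.1+ file formats and by the current Compose specification, but not by the classic version 3 format used with swarm.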
I'm using an Apache / MySQL docker-compose setup, which is all good. However, the issue comes when, as this is for local development, the web container points to a local folder for which I need Apache to have permissions.
Using
RUN mkdir /www \
    && chown -R apache:apache /www
VOLUME ["/www"]
is fine if I run the Apache Dockerfile by itself, or if I run it in docker-compose without specifying a volume. But this means I can't point that volume at a local directory; in this scenario /www exists inside the container but doesn't map to the host machine. If I specify a volume inside the docker-compose file, then it maps as expected but doesn't allow me to chown the folder/files (even if I exec into the container).
Below is a proof of concept, I'm running on Windows 10 / Docker Desktop Community Version 2.0.0.0-win81 (29211)
EDIT (commented exposing the port, built the dockerfile from docker-compose and changed the port to 80 from 81)
EDIT (I've updated the following files, see bottom, I'm leaving these for posterity)
docker-compose.yml
version: '3.2'
services:
  web:
    restart: always
    build:
      context: .
    ports:
      - 80:80
    volumes:
      - ./:/www
Dockerfile
FROM centos:centos6 as stage1
RUN yum -y update && yum clean all \
    && yum --setopt=tsflags=nodocs install -y yum-utils \
        httpd \
        php
FROM stage1 as stage2
RUN mkdir /www \
&& chown -R apache:apache /www
#VOLUME ["/www"]
#EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
UPDATED Proof of concept files
Docker-compose.yml
version: '3.2'
services:
  web:
    build:
      context: .
    ports:
      - 80:80
    volumes:
      - ./:/www
Dockerfile
FROM centos:centos6
RUN yum -y update && yum clean all \
    && yum --setopt=tsflags=nodocs install -y yum-utils \
        httpd \
        php
COPY ./entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
set -e  # exit straight away if there's an issue
chown -R apache:apache /www
# Apache
/usr/sbin/httpd -D FOREGROUND
Docker for Windows uses a CIFS/Samba network file share to bind-mount host files into the Linux VM running Docker. That is always done as root:root, so all bind-mounted files/dirs will always show that ownership when seen from inside the container. This is a known limitation of the way Docker shares these files between the two operating systems.
Workarounds:
In many cases, this isn't an issue. The host files are shared into the container world-readable, so local app development while running in the container is fine. For cache files, user uploads, etc., just be sure they are written to a container path that isn't on the host bind mount, so they stay in Linux where you can control the permissions.
If needed, for development only, run the app in the container as root if it needs write permissions to host OS files. You can override this at runtime, e.g. docker run -u root, or user: root in docker-compose.yml.
For working with database files, don't bind-mount them, but use named volumes to keep the files in the Linux VM. You can always use docker cp to copy files in and out of volumes for a quick backup.
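The development-only root override mentioned above might look like this in the Compose file; the service name and mount come from the question's proof of concept:

```yaml
services:
  web:
    build:
      context: .
    user: root   # dev only: the CIFS bind mount is owned by root:root anyway
    ports:
      - 80:80
    volumes:
      - ./:/www
```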
You're using
RUN mkdir /www \
    && chown -R apache:apache /www
prior to docker-compose mapping the local . directory onto /www.
You need to create a file, entrypoint.sh or similar, give it a shebang, and inside it run chown -R apache:apache /www. You do not need the mkdir, as that directory is created by the Compose volume config ./:/www.
After that command, your entrypoint.sh should run what you currently have as your entrypoint: /usr/sbin/httpd -D FOREGROUND.
Then, finally, you of course need to set your new entrypoint to use the script: ENTRYPOINT ["/entrypoint.sh"].
I tried to make a simple application with Yesod and PostgreSQL using Docker Compose but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build output the following:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
but when I check like this:
$ docker-compose run web /bin/bash
root@0fe5fb1a3b20:/code# ls
root@0fe5fb1a3b20:/code#
it showed nothing, while these commands seem to work as expected:
docker run -ti 37c3c94338d7
root@31e94428de37:/code# ls
docker-compose.yml  Dockerfile  myApp  settings.yml
root@31e94428de37:/code# ls myApp/
app                   config         Handler    Model.hs     Settings.hs  test
Application.hs        dist           Import     myApp.cabal  static
cabal.sandbox.config  Foundation.hs  Import.hs  Settings     templates
How can I fix it?
I really appreciate any feedback, thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you ADD the content of the folder that contains the Dockerfile into the /code folder of the image. I suspect this step is useless.
ADD . /code
Then, if you run a container without a --volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder that contains the docker-compose.yml file on the host machine. Therefore, you no longer have the content generated during the build of your image.
There are two possibilities:
Don't use the volume instruction in the docker-compose.yml file
Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host.
It depends on why you want to use the volume option in docker-compose.yml.
I don't really know what your goal is, but if what you are trying to do is access the files built inside the container from the host machine, this should do what you are looking for:
Remove the build steps from your Dockerfile
Run a shell inside a "web" container: docker-compose run web bash
Launch the build commands
That way you will have built your application while the volume was mounted, and you will see the files on the host machine.
Exit the shell
Launch Docker Compose normally
If you just want to be able to back up the content of the /code/myApp/ folder, maybe you should omit the path on the host machine from the volume section of docker-compose.yml:
volumes:
  - /code/
And follow this section of the documentation
I hope this helps.