How to fix docker container exiting with code 6? - docker

Context
I am running a dockerized Ubuntu container for a Meteor (JS) application, started with a docker-compose file on my personal computer. It is attached to a MongoDB container, and an nginx proxy container is running to secure my development URL with SSL (https).
Docker Images
ubuntu : ubuntu:18.04
nginx proxy : jwilder/nginx-proxy:alpine
mongo : mongo:latest
Plus my own meteor app locally.
Other
Meteor version : 1.8.1
Docker version : 18.09.3, build 774a1f4
Docker Compose version : 1.23.2, build 1110ad01
Problem
Ever since I stopped my containers with docker-compose down, my webapp container exits (with exit code 6) every time I restart them. I haven't changed the docker-compose files since my previous docker-compose -f docker-compose.dev.yml --verbose up.
Error displayed by docker-compose -f docker-compose.dev.yml --verbose up
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e')
compose.cli.verbose_proxy.proxy_callable: docker wait <- ('26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e/json HTTP/1.1" 200 None
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e/wait HTTP/1.1" 200 30
compose.cli.verbose_proxy.proxy_callable: docker wait -> {'Error': None, 'StatusCode': 6}
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': 'docker-default',
'Args': [],
'Config': {'ArgsEscaped': True,
'AttachStderr': False,
'AttachStdin': False,
'AttachStdout': False,
'Cmd': ['meteor'],
'Domainname': '',
'Entrypoint': None,
'Env': ['ENV_APP_SERVER_USERNAME=app',
...webappContainer exited with code 6
Result of docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26d90365fe9a xxxxxxxxxx_webapp "meteor" 23 minutes ago Exited (6) 13 seconds ago webappContainer
4073bbfe37cf mongo "docker-entrypoint.s…" 39 minutes ago Up 14 seconds 0.0.0.0:27017->27017/tcp mongoDBContainer
201f0a99d1cf jwilder/nginx-proxy:alpine "/app/docker-entrypo…" 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy_nginx-proxy_1
Result of docker logs <container_id>
** none **
How? The container will not even start. Yet when I do docker run, it works.
Source files
docker-compose.dev.yml
version: '3'
services:
  webapp:
    container_name: webappContainer
    env_file: .env
    environment:
      VIRTUAL_HOST: mysite.local
      VIRTUAL_PORT: ${PORT}
    build:
      context: ${CONTEXT}
      dockerfile: ${DOCKERFILE}...
    volumes:
      - ${VOLUME}:/usr/src/app
    expose:
      - ${PORT}
    networks:
      - dbAppConnector
      - default
    depends_on:
      - mongodb
  mongodb:...
    container_name: ${MONGO_CONTAINER_NAME}
    image: mongo
    restart: always
    env_file: .env
    ports:
      - "${MONGO_PORT}:${MONGO_PORT}"..
    networks:
      dbAppConnector:
    volumes:
      - mongo_volume:/data/db
      - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
volumes:
  mongo_volume:
networks:
  default:
    external:
      name: nginx-proxy
  dbAppConnector:
Note that when I run docker-compose config, all the .env variables are correctly displayed.
Dockerfile.dev
FROM ubuntu:18.04
# Set environment dir & copy files
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
# Make host.docker.internal accessible from outside the container (dev only)
# RUN ip -4 route list match 0/0 | awk '{print $3 "host.docker.internal"}' >> /etc/hosts
# Add a user to run the app locally (dev only)
RUN addgroup meteoruser && useradd -rm -d /home/moff -s /bin/bash -g root -G sudo -u 1000 meteoruser
# Update distribution
RUN apt-get update -q && apt-get clean \
# Install curl
&& apt-get install curl -y \
# Install Meteor
&& (curl https://install.meteor.com | sh) \
# Install node js
&& cd /usr/src/app \
# replace vx.x.x with the output of: meteor node -e 'console.log("I am Node.js %s!", process.version);' run from my project folder.
&& bash -c 'curl "https://nodejs.org/dist/v8.15.1/node-v8.15.1-linux-x64.tar.gz" > /usr/src/app/required-node-linux-x64.tar.gz' \
&& cd /usr/local && tar --strip-components 1 -xzf /usr/src/app/required-node-linux-x64.tar.gz \
&& rm /usr/src/app/required-node-linux-x64.tar.gz \
&& cd /usr/src/app \
&& npm install
RUN cd /usr/src/app && chown -Rh meteoruser .meteor/local
EXPOSE 80
ENV PORT 80
WORKDIR /usr/src/app
USER meteoruser
CMD ["meteor"]
What I have tried so far to fix the problem
I stopped and removed all containers, removed all images and networks, and rebuilt everything.
I checked my internet access (average ping: ~35 ms).
I tried different versions of Meteor (1.8.1, latest, 1.8.0.2) by adding the ?release=1.8.0.2 flag in the Dockerfile.dev part where I do && (curl https://install.meteor.com | sh) \.
I tried to find documentation about code 6 when a container is exiting, but found nothing relevant.
I tried to look at the logs, but none were available for the exited xxxxxx_webapp container: the result was empty.
I tried to start the xxxxxx_webapp container separately and got a weird result with docker run xxxxxxx_webapp.
This is your first time using Meteor!
Installing a Meteor distribution in your home directory.
Downloading Meteor distribution
It took 2+ minutes to finally show:
Retrying download in 5 seconds …
Retrying download in 5 seconds …
Retrying download in 5 seconds …
Retrying download in 5 seconds …
[[[[[ /usr/src/app ]]]]]
=> Started proxy.
=> Started MongoDB.
So the container runs without exiting when started with a separate docker run command.
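For what it's worth, the exit status that docker-compose reports can also be read back from the stopped container itself; a small sketch using the container name from the docker ps output above:
# Dump the recorded state of the stopped container (exit code, error message, OOM flag, timestamps).
docker inspect --format '{{json .State}}' webappContainer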
Reminder: everything was working fine for 4+ weeks, then it suddenly stopped. I suspect my new internet connection is too slow (I moved out of my parents' house a week ago).
Could you please give me some advice? Is there any additional information I could provide to help you understand what I am missing? Thanks in advance!
Edit
Other things I tried to make it work:
I removed and reinstalled everything related to Docker.
I tried a previous version of the app from 3 weeks ago that was fully working.
I tried everything except using another Wi-Fi network. Could that work? I don't think so, but there is always hope.

Though I never found out what exit code 6 is either, I did find its cause in my setup.
Turns out my environment was invalid because I had accidentally set it to a branch that didn't exist.
Running docker-compose --verbose up set me on the right path.
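As a rough sketch of the kind of checks that point in that direction (the compose file name comes from the question; adapt it to your project):
# Print the fully resolved configuration, with every .env variable substituted,
# so a wrong value (for example a branch or tag that doesn't exist) is easy to spot.
docker-compose -f docker-compose.dev.yml config

# Re-run in verbose mode and keep only the lines that mention errors or exit codes.
docker-compose -f docker-compose.dev.yml --verbose up 2>&1 | grep -iE 'error|exit'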

TL;DR: If your use-case involves a deleted directory that's part of a volume, delete the docker volume, restart docker, then run compose up again
I was having the same issue, and it seemed related to one of my volumes.
I had the following definition in my docker-compose file:
volumename:
  driver: local
  driver_opts:
    o: bind
    type: none
    device: "${PWD}/data/some/dir"
I had deleted and re-created the "${PWD}/data/some/dir" directory, and then started getting exit code 6 for the container that depended on the volume.
Things I tried:
Restart docker
Delete the volume (using the Docker Desktop UI) - this seemed to do the trick
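The same cleanup from the command line might look roughly like this; the volume name is illustrative (Compose usually prefixes it with the project name), and the daemon restart assumes a systemd host:
# Recreate the host directory that backs the bind-mounted volume.
mkdir -p "${PWD}/data/some/dir"

# Find the stale named volume and remove it (typically <project>_volumename).
docker volume ls
docker volume rm myproject_volumename

# Restart the Docker daemon, then bring the stack back up.
sudo systemctl restart docker
docker-compose up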

Exit code 6 can be thrown when your configuration points at the wrong branch. Check whether you are pointing at a valid branch or not; running docker-compose --verbose up should help you track down the issue.
Also, I would suggest you check the volumes section of your configuration once more.

SOLUTION
I uninstalled e-v-e-r-y-t-h-i-n-g Docker-related from my computer, like this:
Warning: be aware that with this procedure you will lose your configuration, images, and containers.
First I removed unused data from the system with:
sudo docker system prune
Then I removed all volumes and networks:
sudo docker volume prune
sudo docker network prune
Then a complete uninstall of Docker was necessary:
apt-get remove --purge docker*
Then I had to remove the Docker containers and images:
rm -rf /var/lib/docker
Then I removed the config files:
rm -rf /etc/docker
rm /etc/systemd/system/docker.service
rm /etc/init.d/docker
Then I removed obsolete and orphaned packages:
apt-get autoremove && apt-get autoclean
Then I updated and upgraded the existing packages:
apt-get update && apt-get upgrade
Finally I reinstalled Docker and Docker Compose following:
Docker install procedure : https://linuxize.com/post/how-to-install-and-use-docker-on-ubuntu-18-04/
Docker-compose install procedure : https://linuxize.com/post/how-to-install-and-use-docker-compose-on-ubuntu-18-04/
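Before pulling the project back down, it is worth sanity-checking the fresh install with a few standard commands (nothing project-specific here):
# Confirm both tools are on the PATH and report sane versions.
docker --version
docker-compose --version

# Confirm the daemon can pull and run a container end to end.
sudo docker run --rm hello-world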
And at the end I re-downloaded my project from git and brought it back up with docker-compose up -d :)
So I believe this was a configuration problem with Docker. Unfortunately, I never found a clear explanation of that "exit code 6".
Feel free to share your solution if you ever encounter the same "exit code 6" problem.

Related

docker-compose not producing "No Such File or Directory" when files exist in container

I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install
RUN apt-get install -y \
curl \
gcc \
make \
python3-psycopg2 \
postgresql-client \
libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
postgresdata:
services:
myapp:
image: ralston3/myapp_api:prod-latest
tty: true
command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
ports:
- 8000:8000
volumes:
- .:/var/www/myapp
environment:
SOME_ENV_VARS=SOME_VARIABLE
# ... more here
depends_on:
- redis
- postgresql
# ... other docker services defined below
When I run docker-compose up via:
docker-compose up -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, along with another error mentioning myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash I can clearly see that all of my files are there. I can literally run the /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). But I'm just confused as to how I can exec into the container and clearly see the files there, yet docker-compose exits with 127 saying "No such file or directory".
You are bind-mounting the current directory into /var/www/myapp, so it may be that your local directory is "hiding/overwriting" the container's directory. Try removing the volumes declaration for your myapp service; if that works, then you know it is the bind mount causing the issue.
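A quick way to see the mismatch, as a sketch that reuses the image name and path from the question:
# What the image itself ships at that path:
docker run --rm ralston3/myapp_api:prod-latest ls -l /var/www/myapp/scripts/

# What the service sees once the bind mount from docker-compose.yml is applied:
docker-compose run --rm myapp ls -l /var/www/myapp/scripts/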
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, above and beyond the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies like psycopg.
See https://pythonspeed.com/articles/official-python-docker-image/ for an explanation of why you don't need to do this.
In my case there were two stages: builder and runner.
I was building an executable in the builder stage and running it with an Alpine image in the runner stage.
My mistake was that I didn't use the Alpine variant for the builder. For example, I used golang:1.20, but when I switched to golang:1.20-alpine the problem went away.
Make sure you use the correct version and tag!

Can't access docker application from localhost using docker-compose

Before I start: I have already searched for this question and implemented what the suggested "solutions" said (setting the host to 0.0.0.0), and it did not help. So, with that out of the way:
Directory structure
|-- osr
| |-- __init__.py
|-- requirements.txt
|-- Dockerfile
|-- docker-compose.yml
Dockerfile:
FROM python:3.7.5-buster
EXPOSE 5000 # i have tried with and without this
ENV INSTALL_PATH /osr
ENV FLASK_APP osr
ENV FLASK_ENV development
RUN mkdir -p $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD [ "flask", "run", "--host=0.0.0.0"]
docker-compose.yml:
version: '3.8'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    expose:
      - '5000'
    volumes:
      - .:/osr
__init__.py
import os
from flask import Flask

def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev'
    )

    @app.route('/hello')
        def hello():
            return 'Hello, World!'

    return app
docker-compose build web
docker-compose run web
* Serving Flask app "osr" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 225-441-434
all of the below return "hmm can't reach this page"
http://localhost
http://localhost:5000/hello
http://127.0.0.1:5000/hello
I've even tried going to the container's IP with:
docker exec -it 49e677 bash
ip add | grep global
inet 172.21.0.2/16 brd 172.21.255.255 scope global eth0
http://127.21.0.2:5000/hello # nothing
Nothing.
I'm sure the app itself works fine as I can simply run the app directly.
$env:FLASK_APP="osr"
$env:FLASK_ENV="development"
flask run --host=0.0.0.0
And it runs fine
EDIT: UPDATE
I am actually able to reach the container when I run it directly from just the Dockerfile, using:
docker run -it -p 5000:5000 osr_web # the container built by docker-compose build
With this, I am able to access the endpoint through localhost:5000/hello
So the issue appears to lie in spinning it up through docker-compose run
Does this help at all?
UPDATE 2
I have discovered that when I run docker ps -a I can see that docker run actually exposes the port, but docker-compose run does not.
Are you sure the app itself works fine? I tried to run your Python __init__.py file and ended up with an error.
python osr/__init__.py
File "osr/__init__.py", line 11
def hello():
^
IndentationError: unexpected indent
It works after fixing the indentation error.
    @app.route('/hello')
    def hello():
        return 'Hello, World!'
$ docker run -d -p 5000:5000 harik8/osr:latest
76628f86fecb61c0be4a969d3c91c5c575702ad8063b594a6c1c90b498ea25f1
$ curl http://127.0.0.1:5000/hello
Hello, World!
You can't run both the docker and docker-compose containers on host port 5000 at the same time. Either run one at a time, or change the host port in the docker-compose file/Dockerfile.
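If in doubt, you can also list which containers are currently publishing which host ports before picking one (standard docker ps options, nothing specific to this project):
# Show container names next to their published ports.
docker ps --format 'table {{.Names}}\t{{.Ports}}'

# On reasonably recent Docker versions you can also filter by the published host port.
docker ps --filter publish=5000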
$ docker ps -a | grep osr
8b885c4a9654 harik8/osr:latest "flask run --host=0.…" 12 seconds ago Up 11 seconds 0.0.0.0:5000->5000/tcp
$ docker ps -a | grep q5
70f38bf11e26 q5_web "flask run --host=0.…" About a minute ago Up 10 seconds 0.0.0.0:5001->5000/tcp
$ docker ps -a | grep q5
f9f6ba999109 q5_web "flask run --host=0.…" 5 minutes ago Up 5 minutes 0.0.0.0:5000->5000/tcp q5_web_1
$ docker ps -a | grep osr
93fb421333e4 harik8/osr:latest "flask run --host=0.…" 18 seconds ago Up 18 seconds 5000/tcp
I found the issue. For reference, I am running these versions:
docker-compose version 1.25.5, build 8a1c60f6
Docker version 19.03.8, build afacb8b
There were a couple of issues. First and foremost, to get the ports exposed I needed to run one of two options:
docker-compose up web
or
docker-compose run --service-ports web
Simply running docker-compose run web would not expose the ports.
Once this was finished, I was able to access the endpoint. However, I started getting another odd error:
flask.cli.NoAppException
flask.cli.NoAppException: Failed to find Flask application or factory in module
"osr". Use "FLASK_APP=osr:name to specify one.
I had not experienced this when simply using docker run -it -p 5000:5000 osr_web, which was odd. However, I noticed I had not set the working directory in the Dockerfile.
I changed the Dockerfile to this:
FROM python:3.7.5-buster
EXPOSE 5000
ENV INSTALL_PATH /osr
ENV FLASK_APP osr
ENV FLASK_ENV development
RUN mkdir -p $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# added this line
WORKDIR $INSTALL_PATH
COPY . .
CMD [ "flask", "run", "--host=0.0.0.0"]
I believe you could get away without setting WORKDIR if you turn the Flask application into a package and install it.
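Putting the two fixes together, a minimal rebuild-and-test sequence (the commands are the ones discussed above; the /hello URL comes from the question):
# Rebuild the image with the added WORKDIR, then run with the service ports published.
docker-compose build web
docker-compose run --service-ports web
# (or: docker-compose up web)

# From another terminal, hit the endpoint.
curl http://localhost:5000/hello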

How to compose docker-compose.yml so I can access the daemon's container from PHP?

I need help with Docker.
Let's say I have a docker-compose.yml version 3 with Nginx+PHP. How do I add the image vitr/casperjs so I can call it from PHP like
exec('casperjs --version', $output);
?
Any help is appreciated.
UPDATED:
It looks like the correct answer would be: it is impossible.
You need to put PHP and CasperJS (and PhantomJS as well) in the same container to get them to work together. It would be nice if someone could prove me wrong and show a better way to do it. Here is something like a working example:
FROM nanoninja/php-fpm
ENV PHANTOMJS_VERSION=phantomjs-2.1.1-linux-x86_64
ENV PHANTOMJS_DIR=/app/phantomjs
RUN apt-get update -y
RUN apt-get install -y apt-utils libfreetype6-dev libfontconfig1-dev wget bzip2
RUN wget --no-check-certificate https://bitbucket.org/ariya/phantomjs/downloads/${PHANTOMJS_VERSION}.tar.bz2
RUN tar xvf ${PHANTOMJS_VERSION}.tar.bz2
RUN mv ${PHANTOMJS_VERSION}/bin/phantomjs /usr/local/bin/
RUN rm -rf phantom*
RUN mkdir -p ${PHANTOMJS_DIR}
RUN echo '"use strict"; \n\
console.log("Hello, world!"); + \n\
console.log("using PhantomJS version " + \n\
phantom.version.major + "." + \n\
phantom.version.minor + "." + \n\
phantom.version.patch); \n\
phantom.exit();' \
> ${PHANTOMJS_DIR}/script.js
RUN apt-get update -y && apt-get install -y \
git \
python \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/n1k0/casperjs.git
RUN mv casperjs /opt/
RUN ln -sf /opt/casperjs/bin/casperjs /usr/local/bin/casperjs
Q: How to compose docker-compose.yml so I can access the daemon's container from PHP?
A: You could share Docker's Unix domain socket to access the daemon from inside a container.
Something like the following:
docker-compose.yml:
version: '3'
services:
  app:
    image: ubuntu:16.04
    privileged: true
    volumes:
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7
    command: docker run --rm vitr/casperjs casperjs --version
test:
# docker-compose up
WARNING: Found orphan containers (abc_plop_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Recreating abc_app_1 ... done
Attaching to abc_app_1
app_1 | 1.1.4
abc_app_1 exited with code 0
You can see that 1.1.4 was printed by executing the command docker run --rm vitr/casperjs casperjs --version in the app container.
This is just an example; you can call docker run --rm vitr/casperjs casperjs --version from your own PHP container instead of ubuntu:16.04, still use exec in your PHP code, and get the output.
Updated: (2018/11/05)
First, I think we need to align on some concepts:
-d: this means starting a container in detached mode, not as a daemon. In Docker, when we talk about the daemon, we mean the Docker daemon, which accepts connections from the Docker CLI; see here.
--rm: this just removes the temporary container after it has been used; you can also leave it out.
The difference between using -d and not using -d:
With -d: it runs the container in detached mode. This means that even while the container is running, the CLI command docker run exits at once and shows you a container id; you will not see any logs, like this:
# docker run -d vitr/casperjs casperjs --version
d8dc585bc9e3cc577cab15ff665b98d798d95bc369c876d6da31210f625b81e0
Without -d: the CLI command will not exit until the container's command finishes, so you can see the output of the command, like this:
# docker run vitr/casperjs casperjs --version
1.1.4
So, since your requirement is to get the output of casperjs, you surely have to run without -d, I think.
If you accept the above concepts, then you can go on to a workable example:
folder structure:
abc
├── docker-compose.yml
└── index.php
docker-compose.yml:
version: '3'
services:
  phpfpm:
    container_name: phpfpm
    image: nanoninja/php-fpm
    entrypoint: php index.php
    privileged: true
    volumes:
      - .:/var/www/html
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7
index.php:
<?php
exec('docker run vitr/casperjs casperjs --version', $output);
print_r($output);
test:
~/abc# docker-compose up
Starting phpfpm ... done
Attaching to phpfpm
phpfpm | Array
phpfpm | (
phpfpm | [0] => 1.1.4
phpfpm | )
phpfpm exited with code 0
You can see that 1.1.4 was printed through PHP. Note that privileged and the volumes are things that have to be set.

Docker container builds on OSX but not Amazon Linux

My Docker container builds fine on OSX:
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
But doesn't build on Amazon Linux:
Docker version 17.12.0-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
docker-compose version 1.20.1, build 5d8c71b
Full Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
wget \
tar \
git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
cd "/tmp" && \
wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
g++ \
gcc \
make \
openssl-dev \
python3-dev \
python \
py-pip \
nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
npm install --no-optional && \
npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
Full docker-compose.yml
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The command I'm running is: docker-compose up --build
The result on Amazon Linux is:
Running setup.py install for pymongo: started
Running setup.py install for pymongo: finished with status 'done'
Running setup.py install for pluggy: started
Running setup.py install for pluggy: finished with status 'done'
Running setup.py install for coverage: started
Running setup.py install for coverage: finished with status 'done'
Successfully installed Faker-0.8.12 Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 astroid-1.6.2 attrs-17.4.0 backports.functools-lru-cache-1.5 beautifulsoup4-4.5.1 click-6.7 configparser-3.5.0 coverage-4.5.1 enum34-1.1.6 funcsigs-1.0.2 futures-3.2.0 gunicorn-19.7.1 ipaddress-1.0.19 isort-4.3.4 itsdangerous-0.24 lazy-object-proxy-1.3.1 mccabe-0.6.1 more-itertools-4.1.0 pluggy-0.6.0 py-1.5.3 py4j-0.10.6 pylint-1.8.3 pymongo-3.6.1 pytest-3.5.0 pytest-cov-2.5.1 python-dateutil-2.7.2 singledispatch-3.4.0.3 six-1.11.0 text-unidecode-1.2 wrapt-1.10.11
You are using pip version 8.1.2, however version 9.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
npm WARN deprecated redux-mock-store@1.5.1: breaking changes in minor version
> base62@1.2.7 postinstall /lts-app/node_modules/base62
> node scripts/install-stats.js || exit 0
ERROR: Service 'web' failed to build: The command '/bin/sh -c pip install -r "requirements.txt" && npm install --no-optional && npm run build' returned a non-zero code: 1
Does anyone know what might be causing this discrepancy? The error message from Docker doesn't give many clues. I'd be very grateful for any ideas others can offer!
To solve this problem, I followed @MazelTov's advice and built the containers on my local OSX development machine, then published the images to Docker Cloud, then pulled those images down onto my production server (AWS EC2) and ran them from there.
Install Dependencies
I'll try to outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the GUI installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable, so your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next login to your account from your developer machine:
docker login
Once you're logged in, list your running Docker containers:
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
Log out, then log back in to update your user's groups. Then start a screen and run the server: screen. Once the screen starts, you should be able to add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web container housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
tomcat_service:
image: 'bitnami/tomcat:latest'
ports:
- '8080:8080'
volumes:
- docker-data-tomcat:/bitnami/tomcat/data/
- docker-data-blacklab:/lts-app/lts/
mongo_service:
image: 'mongo'
command: mongod
ports:
- '27017:27017'
web:
image: 'yaledhlab/let-them-speak-web'
# gain access to linked containers
links:
- mongo_service
- tomcat_service
# explicitly declare service dependencies
depends_on:
- mongo_service
- tomcat_service
# set environment variables
environment:
PYTHONUNBUFFERED: 'true'
ports:
- '7082:7082'
volumes:
- docker-data-tomcat:/tomcat_webapps
- docker-data-blacklab:/lts-app/lts/
volumes:
docker-data-tomcat:
docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in with another terminal, and detach the screen with screen -D.

docker-compose up gives error: bash: sails: command not found

I have a docker-compose.yml file with the following content:
version: '2'
services:
  MongoDB:
    image: mongo
  Parrot-API:
    build: ./Parrot-API
    image: sails-js:dev
    volumes:
      - "/user/Code/node/Parrot-API:/host"
    command: bash -c "cd /host && sails lift"
    links:
      - MongoDB:MongoDB
    ports:
      - "3050:1337"
The file basically runs two containers: MongoDB and a web app (in the directory ./Parrot-API) built with Sails.js. However, when I run docker-compose up in the terminal, I get this error: Parrot-API_1 | bash: sails: command not found
node_Parrot-API_1 exited with code 127. Note that Sails.js is a Node.js web framework, and sails lift starts the app on port 1337.
I have done some Google searching and found some similar questions, but they were not helpful in my case.
btw, I have the following Dockerfile in the Parrot-API folder:
FROM sails-js:dev
VOLUME /host
WORKDIR /host
RUN rm -rf node_modules && \
echo "hello world!" && \
pwd && \
ls -lrah
EXPOSE 1337
CMD npm install -g sails && npm install && sails lift
The file structure is following:
|- docker-compose.yml
|- Parrot-API/Dockerfile
|- Parrot-API/app.js, etc..
It is clear to me that the Parrot-API container exits immediately because the sails lift command is not executed successfully, but how do I make the container work? Thanks!
You showed a docker-compose.yml that builds a sails-js:dev image, and you showed a Dockerfile that is based on the sails-js:dev image. This appears to be recursive.
Your Dockerfile itself ends with a CMD, in lieu of an ENTRYPOINT, that does the npm install of sails. Since you did this as a CMD instead of a RUN, sails is not installed in your image; the install is only launched when the container runs, and only if you don't run the container with any arguments of your own, as you are doing in the docker-compose.yml with a custom command.
The fix is to update the Dockerfile with a proper base image and change the CMD to a RUN. I'm also seeing a few other mistakes, like declaring a volume and then modifying its contents, since volumes ignore changes made after they have been declared. The FROM node is just a guess based on your npm commands; feel free to adjust:
FROM node
RUN mkdir -p /host && cd /host && npm install -g sails && npm install
EXPOSE 1337
WORKDIR /host
VOLUME /host
CMD sails lift
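With that Dockerfile in place, a plain rebuild should be enough to get past the sails: command not found error; a short usage sketch using the service name from the compose file above:
# Rebuild the Parrot-API image so sails is baked in at build time, then start the stack.
docker-compose build Parrot-API
docker-compose up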
