I'm trying to put a Flask API app into a Docker container. Building the image and running it via Docker Compose both work fine, except that when I run docker-compose up -d, the status of the Compose project shows as "stopping" while the container under it shows as "running".
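For reference, the mismatch can be seen by comparing the Compose-level view with the plain Docker view (a sketch; the exact status strings depend on the tooling):

docker-compose up -d
docker-compose ps    # Compose-level view of the service status
docker ps            # container-level view, which shows the container as "Up"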
Current Dockerfile looks like
FROM python:3.7.7-alpine3.11
COPY app /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5555
ENTRYPOINT ["python3"]
CMD ["app.py"]
and docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "3000:5555"
    volumes:
      - ./app:/app
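With this mapping, requests to the host's port 3000 are forwarded to Flask on port 5555 inside the container, so a quick smoke test (assuming the app serves something at /) would be:

curl http://localhost:3000/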
Docker compose logs:
Attaching to python-api_app_1
app_1 | DEBUG:root:Starting app
app_1 | * Serving Flask app "app" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | INFO:werkzeug: * Running on http://0.0.0.0:5555/ (Press CTRL+C to quit)
app_1 | INFO:werkzeug: * Restarting with stat
app_1 | DEBUG:root:Starting app
app_1 | WARNING:werkzeug: * Debugger is active!
app_1 | INFO:werkzeug: * Debugger PIN: 791-950-860
app_1 | DEBUG:root:Starting app
app_1 | * Serving Flask app "app" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | INFO:werkzeug: * Running on http://0.0.0.0:5555/ (Press CTRL+C to quit)
app_1 | INFO:werkzeug: * Restarting with stat
app_1 | DEBUG:root:Starting app
app_1 | WARNING:werkzeug: * Debugger is active!
app_1 | INFO:werkzeug: * Debugger PIN: 791-950-860
Any tips on why it is reported that way?
I have a Docker installation that I would like to start with docker compose up (and not have to run two extra ttys), so I added a Procfile.dev looking like this:
web: bin/rails server -p 3000 -b '0.0.0.0'
js: yarn build_js --watch
css: yarn build_css --watch
The output is, however, less than enjoyable:
√ mindling % docker compose up
[+] Running 3/0
⠿ Container mindling_redis Running 0.0s
⠿ Container mindling_db Running 0.0s
⠿ Container mindling_mindling_1 Created 0.0s
Attaching to mindling_db, mindling_1, mindling_redis
mindling_1 | 19:54:04 web.1 | started with pid 16
mindling_1 | 19:54:04 js.1 | started with pid 19
mindling_1 | 19:54:04 css.1 | started with pid 22
mindling_1 | 19:54:06 css.1 | yarn run v1.22.17
mindling_1 | 19:54:06 js.1 | yarn run v1.22.17
mindling_1 | 19:54:06 js.1 | $ esbuild app/javascript/*.* --bundle --outdir=app/assets/builds --watch
mindling_1 | 19:54:06 css.1 | $ tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css --watch
mindling_1 | 19:54:08 js.1 | Done in 2.02s.
mindling_1 | 19:54:08 js.1 | exited with code 0
mindling_1 | 19:54:08 system | sending SIGTERM to all processes
mindling_1 | 19:54:08 web.1 | terminated by SIGTERM
mindling_1 | 19:54:09 css.1 | terminated by SIGTERM
mindling_1 exited with code 0
I've tried running Bash in the application container, and calling the Procfile in a tty by itself looks more or less like this:
root@facfb249dc6b:/app# foreman start -f Procfile.dev
20:11:45 web.1 | started with pid 12
20:11:45 js.1 | started with pid 15
20:11:45 css.1 | started with pid 18
20:11:48 css.1 | yarn run v1.22.17
20:11:48 js.1 | yarn run v1.22.17
20:11:48 css.1 | $ tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css --watch
20:11:49 js.1 | $ esbuild app/javascript/*.* --bundle --outdir=app/assets/builds --watch
20:11:50 js.1 | [watch] build finished, watching for changes...
20:11:53 web.1 | => Booting Puma
20:11:53 web.1 | => Rails 7.0.0 application starting in development
20:11:53 web.1 | => Run `bin/rails server --help` for more startup options
20:11:57 web.1 | Puma starting in single mode...
20:11:57 web.1 | * Puma version: 5.5.2 (ruby 3.0.3-p157) ("Zawgyi")
20:11:57 web.1 | * Min threads: 5
20:11:57 web.1 | * Max threads: 5
20:11:57 web.1 | * Environment: development
20:11:57 web.1 | * PID: 22
20:11:57 web.1 | * Listening on http://0.0.0.0:3000
20:11:57 web.1 | Use Ctrl-C to stop
20:11:58 css.1 |
20:11:58 css.1 | Rebuilding...
20:11:59 css.1 | Done in 1066ms.
^C20:13:23 system | SIGINT received, starting shutdown
20:13:23 web.1 | - Gracefully stopping, waiting for requests to finish
20:13:23 web.1 | === puma shutdown: 2021-12-22 20:13:23 +0000 ===
20:13:23 web.1 | - Goodbye!
20:13:23 web.1 | Exiting
20:13:24 system | sending SIGTERM to all processes
20:13:25 web.1 | terminated by SIGINT
20:13:25 js.1 | terminated by SIGINT
20:13:25 css.1 | terminated by SIGINT
root@facfb249dc6b:/app#
What is going on? It works when doing it 'by hand', but if I let docker-compose rip, the processes somehow terminate!?
I have isolated the issue to the build_css script in package.json (or at least everything keeps going if I comment out that line in Procfile.dev).
All the 'dirty linen'
My package.json looks like this
{
  ...8<...
  "scripts": {
    "build_js": "esbuild app/javascript/*.* --bundle --outdir=app/assets/builds",
    "build_css": "tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css"
  },
  ...8<...
}
My containers are exceptionally boring, looking like almost everybody else's:
FROM ruby:3.0.3
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y nodejs yarn
WORKDIR /app
COPY src/Gemfile /app/Gemfile
COPY src/Gemfile.lock /app/Gemfile.lock
RUN gem install bundler foreman && bundle install
EXPOSE 3000
ENTRYPOINT [ "entrypoint.sh" ]
version: "3.9"
db:
build: mysql
image: mindling_db
container_name: mindling_db
command: [ "--default-authentication-plugin=mysql_native_password" ]
ports:
- "3306:3306"
volumes:
- ~/src/mysql_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: mindling_development
mindling:
platform: linux/x86_64
build: .
volumes:
- ./src:/app
ports:
- "3000:3000"
depends_on:
- db
and finally my entrypoint.sh
#!/usr/bin/env bash
rm -rf /app/tmp/pids/server.pid
foreman start -f Procfile.dev
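A common variant of such an entrypoint (my assumption, not something from this thread) is to exec the final command, so that foreman becomes PID 1 and receives the SIGTERM that docker stop sends:

#!/usr/bin/env bash
rm -rf /app/tmp/pids/server.pid       # clear a stale pid file from a previous run
exec foreman start -f Procfile.dev    # exec so foreman is PID 1 and gets container signals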
Allow me to give credit to those who deserve it! The correct answer was provided by earlopain in this issue on rails/rails.
It's actually an almost embarrassingly easy fix - once you know it :)
Add tty: true to your docker-compose.yml, like this:
  mindling:
    platform: linux/x86_64
    build: .
    tty: true
    volumes:
      - ./src:/app
    ports:
      - "3000:3000"
    depends_on:
      - db
Thanks Earlopain & #walt_die, you saved my day. Writing this answer because I had a bit of explanation which didn't fit in the comment.
Just like yours, my problem when trying to run Rails in Docker using docker-compose was that CMD bin/dev in the Dockerfile was constantly crashing, although it worked when run manually via bash.
The issue was not with tailwindcss but with esbuild. The line js: yarn build --watch in Procfile.dev was failing because it runs esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds --public-path=assets under the hood, and as mentioned by evanw in an esbuild issue, esbuild exits when stdin is closed.
So, the solution of adding tty: true to docker-compose.yml as above works.
Alternatively, removing/commenting out the line js: yarn build --watch in Procfile.dev also works, but then JS changes won't be compiled. In that case you can jump into bash in the running container and manually run yarn build --watch, as sketched below.
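A minimal sketch of that manual fallback, assuming the service is named mindling as in the compose file above:

docker compose exec mindling bash    # open a shell in the running container
yarn build --watch                   # rebuild JS on changes from this interactive session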
I have built a Docker Compose project which worked without problems in my container up to Ubuntu 18.04.
Now I have updated the project to Ubuntu 20.04, and I get the following error message when the Docker entrypoint shell script starts:
#!/usr/bin/env bash
if [ ! -f /usr/local/etc/piler/config-site.php ]
then
    sleep 20
    cd /root/mailpiler/piler/
    cp util/postinstall.sh util/postinstall.sh.bak
    sed -i "s/ SMARTHOST=.*/ SMARTHOST="\"$MAILSERVER_DOMAIN\""/" util/postinstall.sh
    sed -i 's/ WWWGROUP=.*/ WWWGROUP="www-data"/' util/postinstall.sh
    sed -i "s/ "
    echo -e "y\n\n$PILER_DB_HOST\n$MYSQL_DATABASE\n$MYSQL_USER\n$MYSQL_PASSWORD\n$MYSQL_ROOT_PASSWORD\n\n\nY\nY\n" | make postinstall
    # echo -e "y\n$PILER_DB_HOST\n\n$MYSQL_DATABASE\n$MYSQL_USER\n$MYSQL_PASSWORD\n$MYSQL_ROOT_PASSWORD\n\n\n\ny\ny\n" | make postinstall
fi
The command echo -e "y\n\n$PILER_DB_HOST\n$MYSQL_DATABASE\n$MYSQL_USER\n$MYSQL_PASSWORD\n$MYSQL_ROOT_PASSWORD\n\n\nY\nY\n" | make postinstall worked without problems so far, but since Ubuntu 20.04 the following error appears:
app_1 | This is the postinstall utility for piler
app_1 | It should be run only at the first install. DO NOT run on an existing piler installation!
app_1 |
app_1 |
app_1 | Continue? [Y/N] [N]
app_1 |
app_1 | Please enter the webserver groupname [www-data]
app_1 | Please enter mysql hostname [localhost]
app_1 | Please enter mysql database [piler]
app_1 | Please enter mysql user name [piler]
app_1 | stty: 'standard input': Inappropriate ioctl for device
app_1 | stty: 'standard input': Inappropriate ioctl for device
app_1 | make: *** [Makefile:126: postinstall] Error 1
The script:
https://bitbucket.org/jsuto/piler/src/master/util/postinstall.sh.in
I'm sure it has something to do with the stty mode and Docker, but I don't know how to fix it.
By the way, the script runs without problems under Ubuntu 20.04 when it is not dockerized.
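The stty failure is easy to reproduce on its own: stty requires its standard input to be a terminal, and in the echo ... | make postinstall pipeline stdin is a pipe. A hypothetical demonstration, not taken from the script:

echo y | sh -c 'stty size'
# stty: 'standard input': Inappropriate ioctl for device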
My Docker-Compose file:
version: "3.2"
services:
app:
build:
context: ./docker/images
stdin_open: true
tty: true
env_file: .env
depends_on:
- db
ports:
- "25:25"
- "80:80"
restart: always
volumes:
- ./docker/data/piler-local:/usr/local/etc/piler
- ./docker/data/piler-data:/var/piler
db:
image: mariadb:10.4
env_file: .env
restart: always
ports:
- "3306:3306"
volumes:
- ./docker/data/mysql-data:/var/lib/mysql
I'm trying to start a simple Flask "Hello world" app in a docker container but I keep getting this error: "OSError: [Errno 8] Exec format error: '/app/app.py'"
My host operating system is Windows 10.
My Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
I have requirements.txt with Flask==1.0.2.
app.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000, debug=True)
and docker-compose.yml:
version: '3'
services:
  app:
    build: .
    command: python app.py
    ports:
      - "8000:8000"
Whole log of container:
app_1 | * Serving Flask app "app" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: Do not use the development server in a production environment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | Traceback (most recent call last):
app_1 | File "app.py", line 9, in <module>
app_1 | app.run(host='0.0.0.0', port=8000, debug=True)
app_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 943, in run
app_1 | run_simple(host, port, self, **options)
app_1 | File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 988, in run_simple
app_1 | run_with_reloader(inner, extra_files, reloader_interval, reloader_type)
app_1 | File "/usr/local/lib/python3.6/site-packages/werkzeug/_reloader.py", line 332, in run_with_reloader
app_1 | sys.exit(reloader.restart_with_reloader())
app_1 | File "/usr/local/lib/python3.6/site-packages/werkzeug/_reloader.py", line 176, in restart_with_reloader
app_1 | exit_code = subprocess.call(args, env=new_environ, close_fds=False)
app_1 | File "/usr/local/lib/python3.6/subprocess.py", line 287, in call
app_1 | with Popen(*popenargs, **kwargs) as p:
app_1 | File "/usr/local/lib/python3.6/subprocess.py", line 729, in __init__
app_1 | restore_signals, start_new_session)
app_1 | File "/usr/local/lib/python3.6/subprocess.py", line 1364, in _execute_child
app_1 | raise child_exception_type(errno_num, err_msg, err_filename)
app_1 | OSError: [Errno 8] Exec format error: '/app/app.py'
flaskdockerproject_app_1 exited with code 1
UPDATE
After I added the shebang to app.py like @larsks said, I'm getting this error:
"FileNotFoundError: [Errno 2] No such file or directory: '/app/app.py': '/app/app.py'"
All the files are in the container and in the right place.
I hit the same problem (Exec format error, then FileNotFound if I added the shebang).
Adding "RUN chmod 644 app.py" to the Dockerfile fixed it for me, as mentioned here: https://github.com/pallets/werkzeug/issues/1482
Spent the entire day yesterday putting the pieces together on this problem, forum by forum, so I want to share a more detailed answer to this question. While building my image, I was also seeing the following warning message:
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host.
"That warning was added, because the Windows filesystem does not have an option to mark a file as 'executable'. Building a linux image from a Windows machine would therefore break the image if a file has to be marked executable." - https://github.com/moby/moby/issues/20397
So, as Richard Chamberlain points out in the comment above, adding "RUN chmod 644 app.py" ensures that the app.py file is properly marked.
Putting all the pieces together, here is the complete Dockerfile that worked for me. I really hope it helps the next person struggling with this issue!
FROM python:3.7-alpine
COPY . /app
WORKDIR /app
RUN apk add --no-cache --virtual .build-deps \
ca-certificates gcc postgresql-dev linux-headers musl-dev \
libffi-dev jpeg-dev zlib-dev \
&& pip install --no-cache -r requirements.txt
RUN chmod 644 app.py
CMD ["python","app.py"]
I had an existing Ionic app which I have dockerized. The build and up commands succeed, and I can access the app at http://localhost:8100/ionic-lab. However, hot reload doesn't work: whenever I edit an HTML or CSS file, the changes are not reflected.
My Dockerfile:
FROM node:8
COPY package.json /opt/library/
WORKDIR /opt/library
RUN npm install -g cordova ionic && cordova telemetry off
# && echo n | ionic start dockerized-ionic-app --skip-npm --v2 --ts
RUN npm install && npm cache verify
COPY . /opt/library
#CMD ["ionic", "serve", "--all"]
And docker-compose.yml:
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ionic serve --lab
Why is it happening? What is missing?
UPDATE:
Output of docker-compose build --no-cache
D:\Development\personal_projects\library>docker-compose build --no-cache
Building app
Step 1/6 : FROM node:8
---> b87c2ad8344d
Step 2/6 : COPY package.json /opt/library/
---> 4422d0333b92
Step 3/6 : WORKDIR /opt/library
Removing intermediate container 1cfdd60477f9
---> 1ca3dc5f5bd6
Step 4/6 : RUN npm install -g cordova ionic && cordova telemetry off
---> Running in d7e9bf4e6d7b
/usr/local/bin/cordova -> /usr/local/lib/node_modules/cordova/bin/cordova
/usr/local/bin/ionic -> /usr/local/lib/node_modules/ionic/bin/ionic
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/ionic/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
+ cordova@8.0.0
+ ionic@3.19.1
added 660 packages in 29.173s
You have been opted out of telemetry. To change this, run: cordova telemetry on.
Removing intermediate container d7e9bf4e6d7b
---> 3fedee0878af
Step 5/6 : RUN npm install && npm cache verify
---> Running in 8d482b23f6bb
> node-sass@4.5.3 install /opt/library/node_modules/node-sass
> node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-57_binding.node
Download complete
Binary saved to /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Caching binary to /root/.npm/node-sass/4.5.3/linux-x64-57_binding.node
> uglifyjs-webpack-plugin@0.4.6 postinstall /opt/library/node_modules/uglifyjs-webpack-plugin
> node lib/post_install.js
> node-sass@4.5.3 postinstall /opt/library/node_modules/node-sass
> node scripts/build.js
Binary found at /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 548 packages in 30.281s
Cache verified and compressed (~/.npm/_cacache):
Content verified: 1476 (55779072 bytes)
Index entries: 2306
Finished in 9.736s
Removing intermediate container 8d482b23f6bb
---> 5815e391f2c6
Step 6/6 : COPY . /opt/library
---> 5cc9637a678c
Successfully built 5cc9637a678c
Successfully tagged library_app:latest
D:\Development\personal_projects\library>
And output of docker-compose up:
D:\Development\personal_projects\library>docker-compose up
Recreating library_app_1 ... done
Attaching to library_app_1
app_1 | Starting app-scripts server: --address 0.0.0.0 --port 8100 --livereload-port 35729 --dev-logger-port 53703 --nobrowser --lab - Ctrl+C to cancel
app_1 | [14:45:19] watch started ...
app_1 | [14:45:19] build dev started ...
app_1 | [14:45:19] clean started ...
app_1 | [14:45:19] clean finished in 78 ms
app_1 | [14:45:19] copy started ...
app_1 | [14:45:19] deeplinks started ...
app_1 | [14:45:20] deeplinks finished in 60 ms
app_1 | [14:45:20] transpile started ...
app_1 | [14:45:24] transpile finished in 4.54 s
app_1 | [14:45:24] preprocess started ...
app_1 | [14:45:24] preprocess finished in 1 ms
app_1 | [14:45:24] webpack started ...
app_1 | [14:45:24] copy finished in 5.33 s
app_1 | [14:45:31] webpack finished in 6.73 s
app_1 | [14:45:31] sass started ...
app_1 | [14:45:32] sass finished in 1.46 s
app_1 | [14:45:32] postprocess started ...
app_1 | [14:45:32] postprocess finished in 40 ms
app_1 | [14:45:32] lint started ...
app_1 | [14:45:32] build dev finished in 13.64 s
app_1 | [14:45:32] watch ready in 13.73 s
app_1 | [14:45:32] dev server running: http://localhost:8100/
app_1 |
[OK] Development server running!
app_1 | Local: http://localhost:8100
app_1 | External: http://172.17.0.2:8100
app_1 | DevApp: library@8100 on 1643dcb6c0d7
app_1 |
app_1 | [14:45:35] lint finished in 2.51 s
Your Dockerfile and docker-compose.yml do exactly what is needed.
With the - .:/opt/library line the volume gets mounted correctly, and your local changes take effect in the container as well.
If you are on Windows, the problem is that Hyper-V is not capable of propagating local file changes correctly into the container, so the serve program cannot detect file changes.
The solution is to use ng serve directly and enable polling by running it with the poll flag: ng serve --poll 200 --host=0.0.0.0 --port=8100 (see the compose sketch after this list).
--poll 200 actively looks for file changes every 200 ms
--host=0.0.0.0 sets the host; 0.0.0.0 binds all interfaces so the server is reachable from outside the container
--port=8100 uses the same port as ionic serve (just for convenience)
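Wired into the compose file from the question, that would look something like this (a sketch; it assumes ng serve is available in the project, as suggested above):

app:
  # ...build, ports and volumes unchanged...
  command: ng serve --poll 200 --host=0.0.0.0 --port=8100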
You said "hot reload doesn't work", this is correct.
if you re-build docker container then only you will see code changes, because your source code needs to get copy inside your docker-container.
just run docker-compose up -d or rebuild docker container then you should see your code changes.
You are mapping local port 8100 to container port 8100, which is OK. You are running Ionic from a container, in an external way.
Try with “ionic serve --external”
Here are my configurations:
docker-compose.yml
---
web:
  build: .
  command: RAILS_ENV=production bundle exec rake assets:precompile --trace
  command: foreman start
  ports:
    - "3000:3000"
  links:
    - postgres
  environment:
    - RAILS_ENV=production
    - RACK_ENV=production
    - POSTGRES_DATABASE=postgres
    - POSTGRES_USERNAME=postgres
    - POSTGRES_HOST=db
postgres:
  image: postgres
Procfile
web: bundle exec puma -e _env:RAILS_ENV -C config/puma.rb
nginx: /usr/sbin/nginx -g 'daemon off;'
Dockerfile
# Generated by Cloud66 Starter
FROM ruby:2.2.3
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get -y install curl \
git \
imagemagick \
libmagickwand-dev \
libcurl4-openssl-dev \
nodejs \
postgresql-client
# Installing your gems this way caches this step so you don't have to reinstall your gems every time you rebuild your image.
# More info on this here: http://ilikestuffblog.com/2014/01/06/how-to-skip-bundle-install-when-deploying-a-rails-app-to-docker/
# Copy the Gemfile and Gemfile.lock into the image.
# Temporarily set the working directory to where they are.
WORKDIR /tmp
ADD Gemfile Gemfile
ADD Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
# Install and configure nginx
RUN apt-get install -y nginx
RUN rm -rf /etc/nginx/sites-available/default
ADD config/nginx.conf /etc/nginx/nginx.conf
# Add our source files and precompile assets
ENV APP_HOME /app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
ADD . $APP_HOME
# RUN RAILS_ENV=production bundle exec rake assets:precompile --trace
I built the Docker container with docker-compose and it was successful:
docker-compose build
And here is the output of docker-compose up:
⇒ docker-compose up
Starting watchhound_postgres_1
Starting watchhound_web_1
Attaching to watchhound_postgres_1, watchhound_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2016-06-24 08:58:25 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/1707C48
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
web_1 | 09:04:46 web.1 | started with pid 6
web_1 | 09:04:46 nginx.1 | started with pid 7
web_1 | 09:04:47 web.1 | [6] Puma starting in cluster mode...
web_1 | 09:04:47 web.1 | [6] * Version 3.4.0 (ruby 2.2.3-p173), codename: Owl Bowl Brawl
web_1 | 09:04:47 web.1 | [6] * Min threads: 5, max threads: 5
web_1 | 09:04:47 web.1 | [6] * Environment: _env:RAILS_ENV
web_1 | 09:04:47 web.1 | [6] * Process workers: 1
web_1 | 09:04:47 web.1 | [6] * Phased restart available
web_1 | 09:04:47 web.1 | [6] * Listening on tcp://0.0.0.0:5000
web_1 | 09:04:47 web.1 | [6] * Listening on unix:///var/run/puma.sock
web_1 | 09:04:47 web.1 | [6] Use Ctrl-C to stop
web_1 | 09:04:49 web.1 | [6] - Worker 0 (pid: 12) booted, phase: 0
PROBLEM
Everything looks fine, but when I visit 192.168.99.100:5000 (the docker-machine IP), the browser says 192.168.99.100 refused to connect.
Not sure what I am missing.
My problem was with the docker-compose.yml file: I needed to bind port 5000, not 3000, as per my overall configuration.
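For reference, that amounts to changing the port mapping in docker-compose.yml so the host forwards to the port Puma actually listens on (5000, per the log above):

ports:
  - "5000:5000"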