Rails server not starting with Docker

I am trying to deploy a Ruby on Rails project using Docker, but am having some issues. My Dockerfile looks like this:
FROM ruby:2.6.3
WORKDIR /usr/src/app
RUN apt-get update -qq && \
apt-get install -y nodejs && \
gem install --no-document rails -v 5.2.3
COPY ./Gemfile ./Gemfile.lock ./
COPY ./.gemrc ~/
RUN printf "gem: --no-rdoc --no-ri --no-document" | tee /etc/gemrc ~/.gemrc && \
gem install bundler -v '2.1.4'
RUN bundle install --jobs 2
COPY . .
COPY ./docker-entrypoint.sh /usr/bin
RUN chmod +x /usr/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3000
CMD ["rails", "server", "puma"]
and docker-entrypoint.sh, which ensures the server does not reuse a stale PID file, looks like this:
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
rm tmp/pids/server.pid
fi
exec bundle exec "$@"
The image builds successfully with docker build -t rails_app ., but when I run it with docker run -p 3000:3000 rails_app, no errors are thrown and no output is shown on the screen. Then when I hit Control-C to stop the container, it prints the output of the Puma server starting and then stops itself.
docker run -p 3000:3000 rails_app
^C=> Booting Puma
=> Rails 5.2.4.2 application starting in development
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.12.4 (ruby 2.6.3-p62), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
- Gracefully stopping, waiting for requests to finish
=== puma shutdown: 2020-04-19 21:42:28 +0000 ===
Goodbye!
Exiting
What needs to be done for the container to start the server without my having to stop it first? Note that before I hit Control-C nothing responds at localhost:3000, so the server has not simply started silently.
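Two things in the log above hint at likely causes. The missing output is probably stdout buffering: Ruby only line-buffers stdout when attached to a TTY, so without docker run -t the startup banner sits in a buffer until shutdown flushes it. And the Listening on tcp://localhost:3000 line suggests Puma is bound to the container's loopback interface, which Docker's published port cannot reach (the same symptom is diagnosed in the answers further down). A minimal sketch to test both, reusing the image tag from the question:
# Allocate a TTY (-it) so the banner flushes immediately, and override the
# CMD to bind Puma to all interfaces so -p 3000:3000 can reach it.
docker run -it -p 3000:3000 rails_app rails server -b 0.0.0.0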

Related

ERR_CONNECTION_REFUSED occurs when accessing a public IP address in EC2

I have created an infrastructure (application) using AWS, Docker (docker-compose), and Rails.
After launching the container on EC2 and starting the Rails server, I get "ERR_CONNECTION_REFUSED" when I access the public IP address.
I would like to know how to get the application screen to display.
My directory structure looks like:
kthr01/
  docker-compose.yml
  Dockerfile
  start.sh
  src/
    app/
    bin/
    ....
Dockerfile
FROM ruby:2.7
ENV RAILS_ENV=production
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
&& apt-get update -qq \
&& apt-get install -y nodejs yarn \
&& apt-get install -y vim
WORKDIR /app
COPY ./src /app
RUN bundle config --local set path 'vendor/bundle' \
&& bundle install
COPY start.sh /start.sh
RUN chmod 744 /start.sh
CMD ["sh", "/start.sh"]
docker-compose.yml
version: '3'
services:
db:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
volumes:
- ./src/db/mysql_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: password
web:
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0'
volumes:
- ./src:/app
ports:
- "3000:3000"
depends_on:
- db
start.sh
#!/bin/sh
if [ "${RAILS_ENV}" = "production" ]
then
bundle exec rails assets:precompile
fi
bundle exec rails s -p ${PORT:-3000} -b 0.0.0.0
I have a screenshot of the AWS console showing the EC2 security group setup.
To run this:
Verify that the container is created in the local environment and appears correctly on localhost:3000
↓
Connect to EC2 via ssh
↓
git clone
↓
docker-compose build
↓
docker-compose up
...
web_1 | => Booting Puma
web_1 | => Rails 6.1.4.4 application starting in production
web_1 | => Run `bin/rails server --help` for more startup options
web_1 | Puma starting in single mode...
web_1 | * Puma version: 5.6.1 (ruby 2.7.5-p203) ("Birdie's Version")
web_1 | * Min threads: 5
web_1 | * Max threads: 5
web_1 | * Environment: production
web_1 | * PID: 1
web_1 | * Listening on http://0.0.0.0:3000
web_1 | Use Ctrl-C to stop
↓
Access the public IP address of EC2
↓
ERR_CONNECTION_REFUSED is displayed
Based on the comments: the issue was that port 3000 was blocked by the EC2 security group. The solution was to allow inbound traffic on that port.
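For reference, a hedged sketch of opening that port with the AWS CLI; the security group ID is a placeholder, and the console's inbound-rules editor achieves the same thing:
# Allow inbound TCP 3000 from anywhere (sg-0123456789abcdef0 is hypothetical).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3000 --cidr 0.0.0.0/0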

docker-compose rails app not accessible on port 3000

I'm building docker containers for a simple rails/postgres app. The rails app has started and is listening on port 3000. I have exposed port 3000 for the rails container. However, http://localhost:3000 is responding with ERR_EMPTY_RESPONSE. I assumed that the rails container should be accessible on port 3000. Is there something else I need to do?
greg@MemeMachine ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eed45208bbda realestate_web "entrypoint.sh bash …" About a minute ago Up About a minute 0.0.0.0:3000->3000/tcp realestate_web_1
a9cb8cae310e postgres "docker-entrypoint.s…" About a minute ago Up About a minute 5432/tcp realestate_db_1
greg@MemeMachine ~ $ docker logs realestate_web_1
=> Booting Puma
=> Rails 6.0.2.2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.4 (ruby 2.6.3-p62), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
greg@MemeMachine ~ $ curl http://localhost:3000
curl: (52) Empty reply from server
Dockerfile
FROM ruby:2.6.3
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN gem install bundler -v 2.0.2
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3'
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
env_file:
- '.env'
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
env_file:
- '.env'
entrypoint.sh
#!/bin/bash
# Compile the assets
bundle exec rake assets:precompile
# Start the server
bundle exec rails server
When you provide both an ENTRYPOINT and a CMD, Docker combines them together into a single command. If you just docker run your image as it's built, the entrypoint script gets passed the command part rails server -b 0.0.0.0 as command-line parameters; but it ignores this and just launches the Rails server itself (in this case, without the important -b 0.0.0.0 option).
The usual answer to this is not to run the main process directly in the entrypoint, but instead to end the script with exec "$@" to run the command from the additional arguments.
In this case, there are two additional bits. The command: in the docker-compose.yml file indicates that there's some additional setup that needs to be done in the entrypoint (you should not need to override the image's command to run the same server). You also need the additional environment setup that bundle exec provides. Moving this all into the entrypoint script, you get:
#!/bin/sh
# ^^^ this script only uses POSIX shell features
# Compile the assets
bundle exec rake assets:precompile
# Clean a stale pid file
rm -f tmp/pids/server.pid
# Run the main container process, inside the Bundler context
exec bundle exec "$@"
Your Dockerfile can stay as it is; you can remove the duplicate command: from the docker-compose.yml file.
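To make the ENTRYPOINT/CMD interaction concrete, here is an illustrative sketch of what Docker effectively executes with the image as built (not literal output):
# ENTRYPOINT ["entrypoint.sh"] and CMD ["rails", "server", "-b", "0.0.0.0"]
# are concatenated into a single argument list, so the container runs:
entrypoint.sh rails server -b 0.0.0.0
# ...and a script ending in `exec bundle exec "$@"` hands off to:
bundle exec rails server -b 0.0.0.0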
* Listening on tcp://localhost:3000
This log line makes me think Rails is binding only to the localhost IP, which means it will only accept requests from within the container. To make Rails bind to all IPs and listen to requests from outside the container, use the rails server -b parameter. The last line in your entrypoint.sh should change to:
bundle exec rails server -b 0.0.0.0
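A quick way to check that the fix took effect, assuming the service names from the docker-compose.yml above:
# Rebuild and restart just the web service, then confirm the bind address.
docker-compose up -d --build web
docker-compose logs web | grep Listening   # expect tcp://0.0.0.0:3000
curl -I http://localhost:3000              # should now return HTTP headers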

Docker (rails, postgresql) hanging when started, rather than connecting to DB

I'm new to Docker, and trying to work out why my Docker setup is hanging and not connecting like I expect it to.
I'm running
Docker version 18.09.2, build 6247962
docker-compose version 1.23.2, build 1110ad01
OSX 10.14.5
My setup is based on this Gist that I found.
I've reduced it somewhat, to better demonstrate the issue.
Dockerfile
FROM ruby:2.4
ARG DEBIAN_FRONTEND=noninteractive
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" >> /etc/apt/sources.list.d/postgeresql.list \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& apt-get update \
&& apt-get update \
&& apt-get install -y --no-install-recommends apt-utils \
&& apt-get install -y build-essential \
&& apt-get install -y nodejs \
&& apt-get install -y --no-install-recommends \
postgresql-client-9.6 pv ack-grep ccze unp htop vim \
&& apt-get install -y libxml2-dev libxslt1-dev \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get purge -y --auto-remove
# Set environment
ENV APP_HOME /usr/src/app
ENV BUNDLER_VERSION 2.0.2
# Setup bundler
RUN gem install bundler -v $BUNDLER_VERSION
WORKDIR $APP_HOME
EXPOSE 7051
CMD ["bundle", "exec", "puma", "-p", "7051", "-C", "config/puma.rb"]
docker-compose.yml
version: '3.1'
services:
app: &app_base
build: .
working_dir: /usr/src/app
volumes:
- .:/usr/src/app
# to be able to forward ssh-agent to github through capistrano (bundle on server)
- "~/.ssh/id_rsa:/root/.ssh/id_rsa"
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
environment: &app_environment
# to keep bundle effect between container restarts (without rebuild):
BUNDLE_PATH: /usr/src/app/.bundle
BUNDLE_APP_CONFIG: /usr/src/app/.bundle
DATABASE_HOST: db
SSH_AUTH_SOCK: # this left empty copies from outside env
env_file: '.env'
ports:
- "7051:7051"
depends_on:
- db
db:
image: postgres:9.5.17
ports:
- "5432:5432"
environment:
POSTGRES_DB: my_project_development
POSTGRES_USER: root
POSTGRES_PASSWORD: root
config/database.yml
development:
adapter: postgresql
encoding: unicode
pool: 5
database: my_project_development
username: root
password: root
host: db
config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count
# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port ENV.fetch("PORT") { 7051 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
So what I'm doing is:
Running docker-compose build to first build the images & containers
Running docker-compose run --rm app bundle install to install the gems
Running docker-compose run --rm app bundle exec rake db:create db:migrate db:seed to create/migrate/seed the database
Step 3. is the step I am stuck on. It just hangs there with no feedback:
docker-compose run --rm app bundle exec rake db:create db:migrate db:seed
Starting my_project_db_1 ... done
I know the database is running, as I can connect to it locally.
I can also log into the app container, and connect via psql, so I know that the app container can talk to the db container:
docker exec -it f6d6edadaed4 /bin/bash
root@f6d6edadaed4:/usr/src/app# psql "postgresql://root:root@db:5432/my_project_development"
psql (9.6.14, server 9.5.17)
Type "help" for help.
my_project_development=# \dt
No relations found.
If I try to boot the app with docker-compose up, then it also just hangs:
app_1 | Puma starting in single mode...
app_1 | * Version 3.11.4 (ruby 2.4.6-p354), codename: Love Song
app_1 | * Min threads: 5, max threads: 5
app_1 | * Environment: ci
I.e. Puma would normally show a 'Listening' message once it has started:
* Listening on tcp://0.0.0.0:7051
Use Ctrl-C to stop
But it's not getting to that point, it just hangs.
What could be going on? Why can't my Rails container just connect to the PostgreSQL container and have puma boot normally?
MORE INFORMATION:
I've now learnt that if I wait 10+ minutes, it does eventually boot!
During that 10 mins, my CPU fans are spinning like crazy, so it's really thinking about something.
But when it finishes, the CPU fans shut off, and puma has booted and I can access it locally at http://127.0.0.1:7051 like I would expect.
Why would it be so slow to startup? My machine is otherwise pretty fast.
I think Docker on OSX is just extremely slow. I've since read about some performance issues here
Adding a cached option to the volume seems to have reduced the boot time to ~2 minutes:
version: '3.1'
services:
app: &app_base
build: .
working_dir: /usr/src/app
volumes:
- .:/usr/src/app:cached
...
Still not very acceptable in my opinion. Would love to know if there's anything else that can be done?
I found an actual working answer to this, which I also posted here: https://stackoverflow.com/a/58603025/172973
Basically, see the article linked there for how to properly set up the Dockerfile and docker-compose.yml so that they perform well on OSX.
The main thing to understand:
To make Docker fast enough on MacOS follow these two rules: use :cached to mount source files and use volumes for generated content (assets, bundle, etc.).
So if anyone else comes across this, just follow the article or see my other answer.
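A hedged illustration of those two rules with plain docker commands; the image and volume names are placeholders:
# Mount the source tree with :cached to relax osxfs consistency guarantees,
# and keep generated content (here, the bundle) in a named volume that
# never crosses the osxfs boundary.
docker volume create myapp_bundle
docker run --rm \
  -v "$PWD:/usr/src/app:cached" \
  -v myapp_bundle:/usr/src/app/.bundle \
  myapp bundle install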

ERR_CONNECTION_REFUSED by docker container

I'm new to Docker and trying to make a demo Rails app. I made a Dockerfile that looks like this:
FROM ruby:2.2
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update && apt-get install -y \
build-essential \
nodejs
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p /app
WORKDIR /app
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
# Copy the main application.
COPY . ./
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
I then built it (no errors):
docker build -t demo .
And then run it (also no errors):
docker run -itP demo
=> Booting Puma
=> Rails 5.1.1 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.8.2 (ruby 2.2.7-p470), codename: Sassy Salamander
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:9292
Use Ctrl-C to stop
When I run a docker ps command in a separate terminal to determine the ports, I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55e8224f7c15 demo "bundle exec rails..." About an hour ago Up About an hour 0.0.0.0:32772->3000/tcp ecstatic_bohr
However, when I try to connect to it at either http://localhost:32772 or http://192.168.99.100:32772 using Chrome or via a curl command, I receive a "Connection refused".
When I run the app outside of Docker on my local machine via the bundle exec rails server command, it works fine. Note that I am using Docker Toolbox on my Win7 machine.
What could I be doing wrong?
I spent a couple of hours on this as well, and this thread was really helpful. What I'm doing right now is accessing those services through the VM's IP address.
You can get your VM's address by running:
docker-machine ls
then try to access your service using the host-mapped port 32772, something like this:
http://<VM IP ADDRESS>:32772
Hope this helps.
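For example, assuming the usual Toolbox machine name default:
# Look up the VM's IP, then hit the mapped port on that address.
docker-machine ip default
curl http://$(docker-machine ip default):32772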
The combination of the above tricks worked--
I had to use http://<VM IP ADDRESS>:32772 (localhost:32772 did NOT work), AND I had to fix my exposed port to match the TCP listening port of 9292.
I still don't understand why the TCP listening port defaulted to 9292 instead of 3000, but I'll look into that separately.
Thank you for the help!
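As an alternative sketch, publishing a fixed port and passing it to the server explicitly sidesteps both the random -P mapping and the 9292 surprise; the port choice here is illustrative:
# -p 3000:3000 pins the host mapping instead of -P's random port, and the
# explicit -b/-p flags leave no doubt about where Puma listens.
docker run -it -p 3000:3000 demo bundle exec rails server -b 0.0.0.0 -p 3000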

How can I mount a volume from a data container while preserving the owner and permissions?

I'm using Fig and attempting to use a data volume container to share uploaded files between a Rails web server and a Resque worker running in another container. To do this the data volume container defines a /rails/public/system volume which is meant to be used to share these files. The Rails and Resque processes run as a rails user in their respective containers, which are both based on the markb/litdistco image. Altogether the fig.yml looks like this:
redis:
image: redis:2.8.17
volumes_from:
- file
web:
image: markb/litdistco
command: /usr/bin/start-server /opt/nginx/sbin/nginx
ports:
- 80:8000
- 443:4430
environment:
DATABASE_URL:
links:
- redis
volumes_from:
- file
worker:
image: markb/litdistco
command: /usr/bin/start-server "bundle exec rake environment resque:work QUEUE=litdistco_offline RAILS_ENV=production"
environment:
DATABASE_URL:
links:
- redis
volumes_from:
- file
file:
image: markb/litdistco
command: echo "datastore"
volumes:
- /var/redis
- /rails/log
- ./config/container/ssl:/etc/ssl
When the web and worker containers are running, I can see the /rails/public/system directory in both; however, it is owned by the root user in both containers, and the permissions on the directory prevent the rails user from writing to it.
For reference there are two Dockerfiles which go into making the markb/litdistco container. The first defines a base image I use for local development (Dockerfile):
# This Dockerfile is based on the excellent blog post by SteveLTN:
#
# http://steveltn.me/blog/2014/03/15/deploy-rails-applications-using-docker/
#
# KNOWN ISSUES:
#
# * Upgrading passenger or ruby breaks nginx directives with absolute paths
# Start from Ubuntu base image
FROM ubuntu:14.04
MAINTAINER Mark Bennett <mark@burmis.ca>
# Update package sources
RUN apt-get -y update
# Install basic packages
RUN apt-get -y install build-essential libssl-dev curl
# Install basics
RUN apt-get -y install tmux vim
RUN apt-get install -y libcurl4-gnutls-dev
# Install libxml2 for nokogiri
RUN apt-get install -y libxslt-dev libxml2-dev
# Install mysql-client
RUN apt-get -y install mysql-client libmysqlclient-dev
# Add RVM key and install requirements
RUN command curl -sSL https://rvm.io/mpapis.asc | gpg --import -
RUN curl -sSL https://get.rvm.io | bash -s stable
RUN /bin/bash -l -c "rvm requirements"
# Create rails user which will run the app
RUN useradd rails --home /rails --groups rvm
# Create the rails users home and give them permissions
RUN mkdir /rails
RUN chown rails /rails
RUN mkdir -p /rails/public/system
RUN chown rails /rails/public/system
# Add configuration files in repository to filesystem
ADD config/container/start-server.sh /usr/bin/start-server
RUN chown rails /usr/bin/start-server
RUN chmod +x /usr/bin/start-server
# Make a directory to contain nginx and give rails user permission
RUN mkdir /opt/nginx
RUN chown rails /opt/nginx
# Switch to rails user that will run app
USER rails
# Install rvm, ruby, bundler
WORKDIR /rails
ADD ./.ruby-version /rails/.ruby-version
RUN echo "gem: --no-ri --no-rdoc" > /rails/.gemrc
RUN /bin/bash -l -c "rvm install `cat .ruby-version`"
RUN /bin/bash -l -c "gem install bundler --no-ri --no-rdoc"
# Install nginx
RUN /bin/bash -l -c "gem install passenger --no-ri --no-rdoc"
RUN /bin/bash -l -c "passenger-install-nginx-module"
ADD config/container/nginx-sites.conf.TEMPLATE /opt/nginx/conf/nginx.conf.TEMPLATE
ADD config/container/set-nginx-paths.sh /rails/set-nginx-paths.sh
RUN /bin/bash -l -c "source /rails/set-nginx-paths.sh"
# Copy the Gemfile and Gemfile.lock into the image.
# Temporarily set the working directory to where they are.
WORKDIR /tmp
ADD Gemfile Gemfile
ADD Gemfile.lock Gemfile.lock
# bundle install
RUN /bin/bash -l -c "bundle install"
# Add rails project to project directory
ADD ./ /rails
# set WORKDIR
WORKDIR /rails
# Make sure rails has the right owner
USER root
RUN chown -R rails:rails /rails
# Publish ports
EXPOSE 3000
EXPOSE 4430
EXPOSE 8000
This is tagged as the litdistco-base image, then I use config/containers/production/Dockerfile to generate the image that I tag as markb/litdistco and run in staging and production.
# Start from LitDistCo base image
FROM litdistco-base
MAINTAINER Mark Bennett <mark@burmis.ca>
USER rails
# Setup volumes used in production
VOLUME ["/rails/log", "/rails/public/system"]
# Build the application assets
WORKDIR /rails
RUN /bin/bash -l -c "touch /rails/log/production.log; chmod 0666 /rails/log/production.log"
RUN /bin/bash -l -c "source /etc/profile.d/rvm.sh; bundle exec rake assets:precompile"
Can anyone possibly explain how I can get the data container volume to mount as writable by the rails user? I'd very much like to avoid running any of the Ruby processes as root, even inside a container.
For some context I should also mention that I'm developing the images in Docker in boot2docker on Mac OS X, then running them on a Google Compute Engine instance on an Ubuntu 14.04 host. Thanks!
I would modify your image a little bit. Write a shell script that wraps the /usr/bin/start-server command in your fig.yml and place that inside your container.
Then you can chown rails anything that you need before starting up your server.
Running the container with a default user rails is not really needed either, as long as you start up the server as the rails user: sudo -u rails /usr/bin/start-server (or something like that).
Personally haven't used the litdistco-base image yet, so do not know all the specifics on how it works.
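A minimal sketch of that wrapper idea, assuming the container starts as root and that sudo is present in the image (the script name is hypothetical):
#!/bin/sh
# fix-perms-and-start.sh: repair ownership of the shared volumes at startup,
# then drop privileges before launching the server.
chown -R rails:rails /rails/log /rails/public/system
exec sudo -u rails /usr/bin/start-server "$@"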
I think you need to modify the litdistco-base image in the following way so both directories are owned by rails:
# Start from LitDistCo base image
FROM litdistco-base
MAINTAINER Mark Bennett <mark@burmis.ca>
RUN mkdir -p /rails/log
RUN mkdir -p /rails/public/system
RUN chown -R rails:rails /rails/log /rails/public/system
USER rails
# Setup volumes used in production
VOLUME ["/rails/log", "/rails/public/system"]
# Build the application assets
WORKDIR /rails
RUN /bin/bash -l -c "touch /rails/log/production.log; chmod 0666 /rails/log/production.log"
RUN /bin/bash -l -c "source /etc/profile.d/rvm.sh; bundle exec rake assets:precompile"
