ERR_CONNECTION_REFUSED by docker container - ruby-on-rails

I'm new to Docker and trying to make a demo Rails app. I made a dockerfile that looks like this:
FROM ruby:2.2
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update && apt-get install -y \
    build-essential \
    nodejs
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p /app
WORKDIR /app
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
# Copy the main application.
COPY . ./
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
I then built it (no errors):
docker build -t demo .
And then run it (also no errors):
docker run -itP demo
=> Booting Puma
=> Rails 5.1.1 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.8.2 (ruby 2.2.7-p470), codename: Sassy Salamander
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:9292
Use Ctrl-C to stop
When I run a docker ps command in a separate terminal to determine the ports, I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55e8224f7c15 demo "bundle exec rails..." About an hour ago Up About an hour 0.0.0.0:32772->3000/tcp ecstatic_bohr
However, when I try to connect to it at either http://localhost:32772 or http://192.168.99.100:32772 using Chrome or via a curl command, I receive a "Connection refused".
When I run the app outside of Docker on my local machine via bundle exec rails server, it works fine. Note that I am using Docker Toolbox on my Win7 machine.
What could I be doing wrong?

I spent a couple of hours on this as well, and this thread was really helpful. What I'm doing right now is accessing those services through the VM's IP address.
You can get your VM's address by running:
docker-machine ls
then try to access your service using the host-mapped port 32772, something like this:
http://<VM IP ADDRESS>:32772
Hope this helps.

The combination of the above tricks worked:
I had to use http://<VM IP ADDRESS>:32772 (localhost:32772 did NOT work), AND I had to fix my exposed port to match the TCP listening port of 9292.
I still don't understand why the TCP listening port defaulted to 9292 instead of 3000, but I'll look into that separately.
Thank you for the help!
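As a follow-up on the port mismatch: since rails server -p overrides whatever default Puma or Rack picks up, one way to keep the listener and the EXPOSE line in sync is to pin the port explicitly in the Dockerfile's CMD. A minimal sketch against the Dockerfile above (assuming you want port 3000):

```dockerfile
# Force the Rails server onto port 3000 so it matches EXPOSE,
# regardless of any config/puma.rb or config.ru default.
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]
```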

Related

docker-compose rails app not accessible on port 3000

I'm building docker containers for a simple rails/postgres app. The rails app has started and is listening on port 3000. I have exposed port 3000 for the rails container. However, http://localhost:3000 is responding with ERR_EMPTY_RESPONSE. I assumed that the rails container should be accessible on port 3000. Is there something else I need to do?
greg@MemeMachine ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eed45208bbda realestate_web "entrypoint.sh bash …" About a minute ago Up About a minute 0.0.0.0:3000->3000/tcp realestate_web_1
a9cb8cae310e postgres "docker-entrypoint.s…" About a minute ago Up About a minute 5432/tcp realestate_db_1
greg@MemeMachine ~ $ docker logs realestate_web_1
=> Booting Puma
=> Rails 6.0.2.2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.4 (ruby 2.6.3-p62), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
greg@MemeMachine ~ $ curl http://localhost:3000
curl: (52) Empty reply from server
Dockerfile
FROM ruby:2.6.3
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN gem install bundler -v 2.0.2
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    env_file:
      - '.env'
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    env_file:
      - '.env'
entrypoint.sh
#!/bin/bash
# Compile the assets
bundle exec rake assets:precompile
# Start the server
bundle exec rails server
When you provide both an ENTRYPOINT and a CMD, Docker combines them into a single command. If you just docker run your image as it's built, the entrypoint script gets passed the command part rails server -b 0.0.0.0 as command-line parameters; but it ignores this and just launches the Rails server itself (in this case, without the important -b 0.0.0.0 option).
The usual answer to this is to not run the main process directly in the entrypoint, but instead to end the script with exec "$@" so that it runs the command passed in as additional arguments.
In this case, there are two additional bits. The command: in the docker-compose.yml file indicates that there's some additional setup that needs to be done in the entrypoint (you should not need to override the image's command to run the same server). You also need the additional environment setup that bundle exec provides. Moving this all into the entrypoint script, you get
#!/bin/sh
# ^^^ this script only uses POSIX shell features
# Compile the assets
bundle exec rake assets:precompile
# Clean a stale pid file
rm -f tmp/pids/server.pid
# Run the main container process, inside the Bundler context
exec bundle exec "$@"
Your Dockerfile can stay as it is; you can remove the duplicate command: from the docker-compose.yml file.
* Listening on tcp://localhost:3000
This log line makes me think Rails is binding only to the localhost IP. That means Rails will only listen to requests from within the container. To make Rails bind to all IPs and listen to requests from outside the container, use the rails server -b parameter. The last line in your entrypoint.sh should change to:
bundle exec rails server -b 0.0.0.0
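The difference between the two bind addresses can be seen outside Docker entirely. A minimal Ruby sketch using the standard library's TCPServer (port 0 asks the OS for any free port):

```ruby
require "socket"

# A socket bound to 127.0.0.1 only accepts connections from the same
# host (inside the container, that means only the container itself).
# A socket bound to 0.0.0.0 accepts connections on every interface,
# which is what a published Docker port needs to reach the process.
loopback_only  = TCPServer.new("127.0.0.1", 0)
all_interfaces = TCPServer.new("0.0.0.0", 0)

puts loopback_only.addr[3]   # "127.0.0.1"
puts all_interfaces.addr[3]  # "0.0.0.0"

loopback_only.close
all_interfaces.close
```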

ConnectionBad issue with a Rails 5 app running on google cloud run to a Google Cloud SQL instance via socket

When I use the google cloud run service my docker container will return the error:
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've enabled the Cloud SQL Admin API on the relevant project. I ssh'ed into the instance that I was running with GCP services available in the Google Cloud Shell, and checked /var/run/postgresql/.s.PGSQL.5432. There was nothing available. The Google Cloud Run docs say to set the designation for the socket under /cloudsql/, but no socket appears to exist there either.
Nothing in cloud sql/run open issues or the issue tracker suggests that this should be an issue.
Deploy command uses the --add-cloudsql-instances flag without error, so I believe there should be no issue there.
Relevant database.yml section:
staging:
  adapter: postgresql
  encoding: utf8
  pool: 5
  timeout: 5000
  database: project_staging
  username: project_staging
  password: <%= Rails.application.credentials[:db_password] %>
  socket: "/cloudsql/my-project-name:asia-northeast1:project-database-name/"
Dockerfile to set up the container -
FROM ruby:2.6.2
ARG environment
# Bunch of env code
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /usr/src/app
RUN gem install bundler
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install
COPY . .
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
ENV RAILS_LOG_TO_STDOUT=true
Do I need to install more than just postgresql-client here?
Almost certainly irrelevant, but the start script:
cd /usr/src/app
bundle exec rake db:create
bundle exec rake db:migrate
# Do some protective cleanup
> log/${RAILS_ENV}.log
rm -f tmp/pids/server.pid
bundle exec rails server -e ${RAILS_ENV} -b 0.0.0.0 -p $PORT
I'm honestly baffled here. Is it a configuration issue? A cloud run issue? Am I missing some kind of package? I expected it to just connect to the socket without issue on boot.
I have followed this Medium guide (parts 1, 2, 3 and 4) to create a Cloud Run service with Ruby and connect it to a Cloud SQL instance with no problem at all. Can you try comparing it to your deployment, or even follow the steps yourself, to see whether what you did differs from what they explain there?
Also, in case it helps, there is a similar case I've found in another post where they were facing the same issue, even though it's not deployed on Cloud Run. Another Medium post addresses this same issue too and gives a set of solutions.
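One detail worth double-checking in the database.yml above: with ActiveRecord's postgresql adapter, a Unix socket is usually specified through host: rather than socket: (libpq interprets a host value beginning with "/" as a socket directory), and the directory should not carry a trailing slash. A sketch, assuming the instance connection name from the question:

```yaml
staging:
  adapter: postgresql
  encoding: utf8
  pool: 5
  database: project_staging
  username: project_staging
  password: <%= Rails.application.credentials[:db_password] %>
  # libpq treats a leading-slash host as a socket directory, and
  # appends .s.PGSQL.5432 itself
  host: "/cloudsql/my-project-name:asia-northeast1:project-database-name"
```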

Can't connect to Node inside Docker image

I've created an image using this Docker file...
FROM node:8
# Create application directory
WORKDIR /usr/src/app
# Install application dependencies
# By only copying the package.json file here, we take advantage of cached Docker layers
COPY package.json ./
RUN npm install
# This will install dev dependencies as well.
# If dev dependencies have been set, use --only-production when deploying to production
# Bundle app source code
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
But when I run it using $ docker run -d --rm -p 3000:3000 62, I can't curl the API running inside the container from the Docker host (OS X) using curl http://localhost:3000/About.
If I exec into the container, I get a valid response from the API via curl. It looks like a Linux firewall in the container, but I don't see one running.
Any ideas?
Your Node server is most likely not listening on all interfaces; make sure it binds to 0.0.0.0 instead of 127.0.0.1.

App running in Docker container on port 4567 can't be accessed from the outside

Updating the post with all files required to recreate the setup. Still the same problem: not able to access the service running in the container.
FROM python:3
RUN apt-get update
RUN apt-get install -y ruby rubygems
RUN gem install sinatra
WORKDIR /app
ADD . /app/
EXPOSE 4567
CMD ruby hei.rb -p 4567
hei.rb
require 'sinatra'

get '/' do
  'Hello world!'
end
docker-compose.yml
version: '2'
services:
  web:
    build: .
    ports:
      - "4567:4567"
I'm starting the party by running docker-compose up --build.
docker ps returns:
0.0.0.0:4567->4567/tcp
Still, no response from port 4567. Testing with curl from the host machine:
$ curl 127.0.0.1:4567 # and 0.0.0.0:4567
localhost:4567 replies within the container:
$ docker-compose exec web curl localhost:4567
Hello world!%
What should I do to be able to access the Sinatra app running on port 4567?
Sinatra was binding to the wrong interface.
I fixed it by adding the -o switch:
CMD ruby hei.rb -p 4567 -o 0.0.0.0
If no value is assigned to the environment variable APP_ENV (via ENV['APP_ENV']), the default environment is :development.
In the development environment, with the run settings enabled, Sinatra by default binds to the localhost interface of the machine it runs on.
To make the service available outside that host, it needs to listen on all interfaces. You can get this working by setting the binding address to "0.0.0.0":
FROM ruby:latest
WORKDIR /usr/src/app/
ADD . /usr/src/app/
RUN bundle install
EXPOSE 4567
CMD ["ruby","app.rb","-o", "0.0.0.0"]
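The environment-defaulting rule described above (APP_ENV falling back to :development when unset) can be sketched in plain Ruby. This is a simplified illustration, not Sinatra's actual implementation, and app_environment is a hypothetical helper name:

```ruby
# Simplified sketch of Sinatra's environment lookup: APP_ENV wins
# when set, otherwise the environment defaults to :development.
def app_environment(env = ENV)
  (env["APP_ENV"] || "development").to_sym
end

puts app_environment({})                              # :development
puts app_environment({ "APP_ENV" => "production" })   # :production
```

In the default :development environment Sinatra binds to localhost, which is why the container needs the explicit -o 0.0.0.0 override.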

Docker exec. Container is not running. Dockerize rails

I have never used Docker before.
My final goal: to run the Chrome watir webdriver headlessly in a Ruby on Rails app. Honestly, I'm also new to RoR :)
I followed a manual to dockerize a simple project which uses the 'watir-webdriver' and 'headless' gems.
https://www.packet.net/blog/how-to-run-your-rails-app-on-docker/
my Dockerfile
FROM ruby:latest
# Mount any shared volumes from host to container, e.g. /share
ENV HOME /home/rails/webapp
# Install dependencies
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
WORKDIR $HOME
# Install gems
ADD Gemfile* $HOME/
RUN bundle install
ADD . $HOME
CMD ["rails", "server", "--binding", "0.0.0.0"]
Steps I took:
Create a simple rails new watir-app with PostgreSQL support
Add the watir-webdriver and headless gems, and their usage, to one controller
Generate the Docker image: docker build -t watir-app . (no errors)
Run the container: docker run -d -p 3000:3000 watir-app (no errors)
The app is not available on http://localhost:3000, so I try to connect to the container to investigate:
C:\Users\ttttt\RubymineProjects\watir-test>docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
868458c906c1 watir-app "rails server --bindi" 14 seconds ago Exited (1) 11 seconds ago adoring_volhard
C:\Users\ttttt\RubymineProjects\watir-test>docker exec adoring_volhard echo "1"
Error response from daemon: Container 868458c906c13928040caf4a18d6395f6b020b3eb40a1d693de84c006b9a2617 is not running
C:\Users\ttttt\RubymineProjects\watir-test>
Ruby: 2.2.5
Rails: 5.0.0.1
Docker for Win: 1.12.0
I discovered the docker logs command and traced the problem: I needed to install nodejs in the container.
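The fix the logs pointed to can be sketched as a one-line change to the Dockerfile above (a sketch, assuming the Debian-based ruby:latest image, where apt-get is available):

```dockerfile
# Rails needs a JavaScript runtime; install Node.js alongside the
# existing build dependencies so "rails server" can boot.
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
```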
