Limit the amount of log output from docker-compose

I have a Rails project which I start with
docker-compose up
However, each time I start it, docker-compose outputs the logs of all previous container runs, and there are more and more of them every time...
How do I limit the log output on startup?
Here's my docker-compose.yml, if it helps...
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  environment:
    RAILS_ENV: development
  volumes:
    - .:/rcd
  ports:
    - "3000:3000"
  external_links:
    - postgres:db
  volumes_from:
    - bundle
bundle:
  image: rcd_web
  command: echo "hi"
  volumes:
    - /bundle
And here's the Dockerfile:
FROM ruby:2.1
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV APP_HOME /rcd
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
ENV BUNDLE_GEMFILE=$APP_HOME/Gemfile \
    BUNDLE_JOBS=2 \
    BUNDLE_PATH=/bundle
RUN bundle install --without production development test
ADD . $APP_HOME
ENV PATH ~/bin:$PATH
I can of course remove all old containers with:
docker rm `docker ps -aq`
but I don't want to do that on every startup...
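As an aside (not from the original post), docker-compose also has an rm subcommand that removes only the stopped containers of the current project, which is narrower than running docker rm against every container on the host:
docker-compose stop
docker-compose rm -f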
Here's, for example, the log output after three stop/start cycles:
~/workspace/rcd$ docker-compose up
Starting rcd_bundle_1...
Starting rcd_web_1...
Attaching to rcd_bundle_1, rcd_web_1
bundle_1 | hi
bundle_1 | hi
bundle_1 | hi
web_1 | => Booting WEBrick
web_1 | => Rails 4.2.4 application starting in development on http://0.0.0.0:3000
web_1 | => Run `rails server -h` for more startup options
web_1 | => Ctrl-C to shutdown server
web_1 | Exiting
web_1 | [2015-11-17 11:52:16] INFO WEBrick 1.3.1
web_1 | [2015-11-17 11:52:16] INFO ruby 2.1.7 (2015-08-18) [x86_64-linux]
web_1 | [2015-11-17 11:52:16] INFO WEBrick::HTTPServer#start: pid=1 port=3000
web_1 | [2015-11-17 11:52:16] FATAL SignalException: SIGTERM
web_1 | /usr/local/lib/ruby/2.1.0/webrick/server.rb:170:in `select'
web_1 | /usr/local/lib/ruby/2.1.0/webrick/server.rb:170:in `block in start'
web_1 | /usr/local/lib/ruby/2.1.0/webrick/server.rb:32:in `start'
web_1 | /usr/local/lib/ruby/2.1.0/webrick/server.rb:160:in `start'
web_1 | /bundle/gems/rack-1.6.4/lib/rack/handler/webrick.rb:34:in `run'
web_1 | /bundle/gems/rack-1.6.4/lib/rack/server.rb:286:in `start'
web_1 | /bundle/gems/railties-4.2.4/lib/rails/commands/server.rb:80:in `start'
web_1 | /bundle/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:80:in `block in server'
web_1 | /bundle/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:75:in `tap'
web_1 | /bundle/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:75:in `server'
web_1 | /bundle/gems/railties-4.2.4/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
web_1 | /bundle/gems/railties-4.2.4/lib/rails/commands.rb:17:in `<top (required)>'
web_1 | bin/rails:8:in `require'
web_1 | bin/rails:8:in `<main>'
web_1 | [2015-11-17 11:52:16] INFO going to shutdown ...
web_1 | [2015-11-17 11:52:16] INFO WEBrick::HTTPServer#start done.
web_1 | => Booting WEBrick
web_1 | => Rails 4.2.4 application starting in development on http://0.0.0.0:3000
web_1 | => Run `rails server -h` for more startup options
web_1 | => Ctrl-C to shutdown server
web_1 | Exiting
web_1 | [2015-11-17 11:52:22] INFO WEBrick 1.3.1
web_1 | [2015-11-17 11:52:22] INFO ruby 2.1.7 (2015-08-18) [x86_64-linux]
web_1 | [2015-11-17 11:52:22] INFO WEBrick::HTTPServer#start: pid=1 port=3000

I think this issue was fixed in docker-compose 1.5. You'll only get logs from the time you start the container.
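Separately, if you want to cap how much log history Docker keeps per container (and therefore how much can be replayed on attach), you can configure the json-file logging driver per service. A minimal sketch, assuming the version 2+ compose file format where the logging key is available:
services:
  web:
    build: .
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"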

Related

Why does yarn --watch exit (send SIGTERM)?

I have a Docker setup that I would like to start with docker compose up (and not have to run 2 extra ttys), so I added a Procfile.dev looking like this:
web: bin/rails server -p 3000 -b '0.0.0.0'
js: yarn build_js --watch
css: yarn build_css --watch
The output is, however, less than enjoyable
√ mindling % docker compose up
[+] Running 3/0
⠿ Container mindling_redis Running 0.0s
⠿ Container mindling_db Running 0.0s
⠿ Container mindling_mindling_1 Created 0.0s
Attaching to mindling_db, mindling_1, mindling_redis
mindling_1 | 19:54:04 web.1 | started with pid 16
mindling_1 | 19:54:04 js.1 | started with pid 19
mindling_1 | 19:54:04 css.1 | started with pid 22
mindling_1 | 19:54:06 css.1 | yarn run v1.22.17
mindling_1 | 19:54:06 js.1 | yarn run v1.22.17
mindling_1 | 19:54:06 js.1 | $ esbuild app/javascript/*.* --bundle --outdir=app/assets/builds --watch
mindling_1 | 19:54:06 css.1 | $ tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css --watch
mindling_1 | 19:54:08 js.1 | Done in 2.02s.
mindling_1 | 19:54:08 js.1 | exited with code 0
mindling_1 | 19:54:08 system | sending SIGTERM to all processes
mindling_1 | 19:54:08 web.1 | terminated by SIGTERM
mindling_1 | 19:54:09 css.1 | terminated by SIGTERM
mindling_1 exited with code 0
I've tried running a Bash shell in the application container, and calling the Procfile in a tty by itself looks more or less like this:
root@facfb249dc6b:/app# foreman start -f Procfile.dev
20:11:45 web.1 | started with pid 12
20:11:45 js.1 | started with pid 15
20:11:45 css.1 | started with pid 18
20:11:48 css.1 | yarn run v1.22.17
20:11:48 js.1 | yarn run v1.22.17
20:11:48 css.1 | $ tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css --watch
20:11:49 js.1 | $ esbuild app/javascript/*.* --bundle --outdir=app/assets/builds --watch
20:11:50 js.1 | [watch] build finished, watching for changes...
20:11:53 web.1 | => Booting Puma
20:11:53 web.1 | => Rails 7.0.0 application starting in development
20:11:53 web.1 | => Run `bin/rails server --help` for more startup options
20:11:57 web.1 | Puma starting in single mode...
20:11:57 web.1 | * Puma version: 5.5.2 (ruby 3.0.3-p157) ("Zawgyi")
20:11:57 web.1 | * Min threads: 5
20:11:57 web.1 | * Max threads: 5
20:11:57 web.1 | * Environment: development
20:11:57 web.1 | * PID: 22
20:11:57 web.1 | * Listening on http://0.0.0.0:3000
20:11:57 web.1 | Use Ctrl-C to stop
20:11:58 css.1 |
20:11:58 css.1 | Rebuilding...
20:11:59 css.1 | Done in 1066ms.
^C20:13:23 system | SIGINT received, starting shutdown
20:13:23 web.1 | - Gracefully stopping, waiting for requests to finish
20:13:23 web.1 | === puma shutdown: 2021-12-22 20:13:23 +0000 ===
20:13:23 web.1 | - Goodbye!
20:13:23 web.1 | Exiting
20:13:24 system | sending SIGTERM to all processes
20:13:25 web.1 | terminated by SIGINT
20:13:25 js.1 | terminated by SIGINT
20:13:25 css.1 | terminated by SIGINT
root@facfb249dc6b:/app#
What is going on? It works when doing it 'by hand', but if I let docker-compose rip, the processes somehow terminate!?!
I have isolated the issue to the build_css script in package.json (or at least everything keeps going if I comment out that line in Procfile.dev).
All the 'dirty linen'
My package.json looks like this
{
...8<...
"scripts": {
"build_js": "esbuild app/javascript/*.* --bundle --outdir=app/assets/builds",
"build_css": "tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/application.css"
},
...8<...
}
My containers are exceptionally boring, looking like almost everybody else's:
FROM ruby:3.0.3
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y nodejs yarn
WORKDIR /app
COPY src/Gemfile /app/Gemfile
COPY src/Gemfile.lock /app/Gemfile.lock
RUN gem install bundler foreman && bundle install
EXPOSE 3000
ENTRYPOINT [ "entrypoint.sh" ]
version: "3.9"
db:
build: mysql
image: mindling_db
container_name: mindling_db
command: [ "--default-authentication-plugin=mysql_native_password" ]
ports:
- "3306:3306"
volumes:
- ~/src/mysql_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: mindling_development
mindling:
platform: linux/x86_64
build: .
volumes:
- ./src:/app
ports:
- "3000:3000"
depends_on:
- db
and finally my entrypoint.sh
#!/usr/bin/env bash
rm -rf /app/tmp/pids/server.pid
foreman start -f Procfile.dev
Allow me to give credit to those who deserve it!! The correct answer was provided by earlopain in this issue on rails/rails.
It's actually an almost embarrassingly easy fix - once you know it :)
Add tty: true to your docker-compose.yml - like this
mindling:
  platform: linux/x86_64
  build: .
  tty: true
  volumes:
    - ./src:/app
  ports:
    - "3000:3000"
  depends_on:
    - db
Thanks Earlopain & #walt_die, you saved my day. I'm writing this answer because I had a bit of explanation that didn't fit in a comment.
Just like yours, when trying to run Rails in Docker using docker-compose, the problem I was facing was that CMD bin/dev in the Dockerfile was constantly crashing, although it worked when run manually via bash.
The issue was not with tailwindcss but with esbuild. The line js: yarn build --watch in Procfile.dev was failing because it runs esbuild app/javascript/*.* --bundle --sourcemap --outdir=app/assets/builds --public-path=assets under the hood, and, as mentioned by evanw in an esbuild issue, esbuild exits when stdin is closed.
So, the solution of adding tty: true to docker-compose.yml as above works.
Alternatively, removing/commenting out the line js: yarn build --watch from Procfile.dev also works, but then JS changes won't be compiled. In that case you can jump into a bash shell inside the running container and manually run yarn build --watch, as sketched below.
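A minimal sketch of that manual workflow, assuming the service is called mindling and the script is build_js as in the question's files (adjust the names to your setup):
# open a shell inside the running service container
docker compose exec mindling bash
# inside the container, start the JS build in watch mode
yarn build_js --watch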

Docker compose status: stopping

I'm trying to put a Flask API app into a Docker container. Everything works fine for building the Docker image as well as running it from Docker Compose, except that when I do docker-compose up -d, the status of the compose stack is shown as "stopping" while the container under it shows as "running".
Current Dockerfile looks like
FROM python:3.7.7-alpine3.11
COPY app /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5555
ENTRYPOINT ["python3"]
CMD ["app.py"]
and docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "3000:5555"
    volumes:
      - ./app:/app
Docker compose logs:
Attaching to python-api_app_1
app_1 | DEBUG:root:Starting app
app_1 | * Serving Flask app "app" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | INFO:werkzeug: * Running on http://0.0.0.0:5555/ (Press CTRL+C to quit)
app_1 | INFO:werkzeug: * Restarting with stat
app_1 | DEBUG:root:Starting app
app_1 | WARNING:werkzeug: * Debugger is active!
app_1 | INFO:werkzeug: * Debugger PIN: 791-950-860
app_1 | DEBUG:root:Starting app
app_1 | * Serving Flask app "app" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | INFO:werkzeug: * Running on http://0.0.0.0:5555/ (Press CTRL+C to quit)
app_1 | INFO:werkzeug: * Restarting with stat
app_1 | DEBUG:root:Starting app
app_1 | WARNING:werkzeug: * Debugger is active!
app_1 | INFO:werkzeug: * Debugger PIN: 791-950-860
Any tips on why it is reported that way?

Having issues running bundle install with docker container

I have an existing Rails application that I am trying to run in Docker. I've gone through a few tutorials on dockerizing an existing Rails app, but I keep getting stuck somewhere.
Here's my Dockerfile:
FROM ruby:2.5.1
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
I confirmed that my Gemfile.lock file in the local directory is empty.
Finally, here's my docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - ./:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
After running docker-compose build, it goes through the whole process of pulling down Ruby, postgresql, doing bundle install, etc... and then finally when I try to run docker-compose up, I get the following error:
web_1 | Bundler::GitError: The git source https://github.com/zdennis/activerecord-import is not yet checked out. Please run `bundle install` before trying to start your application
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git/git_proxy.rb:235:in `allowed_in_path'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git/git_proxy.rb:192:in `find_local_revision'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git/git_proxy.rb:64:in `revision'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git.rb:225:in `revision'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git.rb:93:in `install_path'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/path.rb:126:in `expanded_path'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/path.rb:163:in `load_spec_files'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git.rb:200:in `load_spec_files'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/path.rb:100:in `local_specs'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/source/git.rb:167:in `specs'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:759:in `block in converge_locked_specs'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:745:in `each'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:745:in `converge_locked_specs'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:248:in `resolve'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:171:in `specs'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:238:in `specs_for'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/definition.rb:227:in `requested_specs'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/runtime.rb:108:in `block in definition_method'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/runtime.rb:20:in `setup'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler.rb:107:in `setup'
web_1 | /usr/local/lib/ruby/gems/2.5.0/gems/bundler-1.16.6/lib/bundler/setup.rb:20:in `<top (required)>'
web_1 | /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
web_1 | /usr/local/lib/ruby/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require'
web_1 | bundler: failed to load command: rails (/usr/local/bundle/bin/rails)
What am I missing here? I would like to use an existing volume as myapp (not sure if there's any reason I shouldn't want to do this -- perhaps it runs faster if it's within the container itself?)
The only way I can get this to work properly is if I run docker-compose build, followed by docker-compose run web bundle install, followed by docker-compose up
Am I doing something wrong that requires me to run docker-compose run web bundle install? I saw bundle install run when I ran docker-compose build, so I'm not sure why it's required again.
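This isn't an answer from the thread itself, but one common pattern for this kind of setup is to keep the installed gems in a named volume so they survive the ./:/myapp bind mount and container recreation. A sketch, assuming the official ruby image's default bundle path of /usr/local/bundle:
version: '3'
services:
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - ./:/myapp
      - bundle_cache:/usr/local/bundle   # gems persist here across container recreation
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  bundle_cache: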

connection refused for Docker development environment

here are my configurations:
docker-compose.yml
---
web:
  build: .
  command: RAILS_ENV=production bundle exec rake assets:precompile --trace
  command: foreman start
  ports:
    - "3000:3000"
  links:
    - postgres
  environment:
    - RAILS_ENV=production
    - RACK_ENV=production
    - POSTGRES_DATABASE=postgres
    - POSTGRES_USERNAME=postgres
    - POSTGRES_HOST=db
postgres:
  image: postgres
Procfile
web: bundle exec puma -e _env:RAILS_ENV -C config/puma.rb
nginx: /usr/sbin/nginx -g 'daemon off;'
Dockerfile
# Generated by Cloud66 Starter
FROM ruby:2.2.3
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get -y install curl \
git \
imagemagick \
libmagickwand-dev \
libcurl4-openssl-dev \
nodejs \
postgresql-client
# Installing your gems this way caches this step so you don't have to reinstall your gems every time you rebuild your image.
# More info on this here: http://ilikestuffblog.com/2014/01/06/how-to-skip-bundle-install-when-deploying-a-rails-app-to-docker/
# Copy the Gemfile and Gemfile.lock into the image.
# Temporarily set the working directory to where they are.
WORKDIR /tmp
ADD Gemfile Gemfile
ADD Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
# Install and configure nginx
RUN apt-get install -y nginx
RUN rm -rf /etc/nginx/sites-available/default
ADD config/nginx.conf /etc/nginx/nginx.conf
# Add our source files precompile assets
ENV APP_HOME /app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
ADD . $APP_HOME
# RUN RAILS_ENV=production bundle exec rake assets:precompile --trace
I built the Docker container with docker-compose and it was successful:
docker-compose build
And here is the output for docker-compose up:
⇒ docker-compose up
Starting watchhound_postgres_1
Starting watchhound_web_1
Attaching to watchhound_postgres_1, watchhound_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2016-06-24 08:58:25 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/1707C48
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
web_1 | 09:04:46 web.1 | started with pid 6
web_1 | 09:04:46 nginx.1 | started with pid 7
web_1 | 09:04:47 web.1 | [6] Puma starting in cluster mode...
web_1 | 09:04:47 web.1 | [6] * Version 3.4.0 (ruby 2.2.3-p173), codename: Owl Bowl Brawl
web_1 | 09:04:47 web.1 | [6] * Min threads: 5, max threads: 5
web_1 | 09:04:47 web.1 | [6] * Environment: _env:RAILS_ENV
web_1 | 09:04:47 web.1 | [6] * Process workers: 1
web_1 | 09:04:47 web.1 | [6] * Phased restart available
web_1 | 09:04:47 web.1 | [6] * Listening on tcp://0.0.0.0:5000
web_1 | 09:04:47 web.1 | [6] * Listening on unix:///var/run/puma.sock
web_1 | 09:04:47 web.1 | [6] Use Ctrl-C to stop
web_1 | 09:04:49 web.1 | [6] - Worker 0 (pid: 12) booted, phase: 0
PROBLEM
Everything looks fine, but when I visit 192.168.99.100:5000 (the docker-machine ip), the browser says 192.168.99.100 refused to connect.
Not sure what I am missing.
My problem was with the docker-compose.yml file: I needed to bind port 5000, not 3000, to match the rest of my configuration.
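A sketch of the corrected mapping, assuming Puma keeps listening on port 5000 inside the container (as the startup log shows):
web:
  build: .
  command: foreman start
  ports:
    - "5000:5000"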

Error "Could not find rake-10.5.0 in any of the sources" on Phusion Passenger Docker image

I am trying to deploy a Rails app using Docker and the Phusion Passenger Ruby base image, but whenever I try to access the app from the browser I get this error:
web_1 | [ 2016-02-08 04:18:44.6861 31/7ff292141700 age/Cor/App/Implementation.cpp:304 ]: Could not spawn process for application /home/app/webapp: An error occurred while starting up the preloader.
web_1 | Error ID: d3103e16
web_1 | Error details saved to: /tmp/passenger-error-EwymlW.html
web_1 | Message from application: <p>It looks like Bundler could not find a gem. Maybe you didn't install all the gems that this application needs. To install your gems, please run:</p>
web_1 |
web_1 | <pre class="commands">bundle install</pre>
web_1 |
web_1 | <p>If that didn't work, then the problem is probably caused by your application being run under a different environment than it's supposed to. Please check the following:</p>
web_1 |
web_1 | <ol>
web_1 | <li>Is this app supposed to be run as the <code>app</code> user?</li>
web_1 | <li>Is this app being run on the correct Ruby interpreter? Below you will
web_1 | see which Ruby interpreter Phusion Passenger attempted to use.</li>
web_1 | </ol>
web_1 |
web_1 | <p>-------- The exception is as follows: -------</p>
web_1 | Could not find rake-10.5.0 in any of the sources (Bundler::GemNotFound)
web_1 | <pre> /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/spec_set.rb:92:in `block in materialize'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/spec_set.rb:85:in `map!'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/spec_set.rb:85:in `materialize'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/definition.rb:140:in `specs'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/definition.rb:185:in `specs_for'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/definition.rb:174:in `requested_specs'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/environment.rb:18:in `requested_specs'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/runtime.rb:13:in `setup'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler.rb:127:in `setup'
web_1 | /var/lib/gems/2.2.0/gems/bundler-1.10.6/lib/bundler/setup.rb:18:in `<top (required)>'
web_1 | /usr/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
web_1 | /usr/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
web_1 | /usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:430:in `activate_gem'
web_1 | /usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:297:in `block in run_load_path_setup_code'
web_1 | /usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:435:in `running_bundler'
web_1 | /usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:296:in `run_load_path_setup_code'
web_1 | /usr/share/passenger/helper-scripts/rack-preloader.rb:100:in `preload_app'
web_1 | /usr/share/passenger/helper-scripts/rack-preloader.rb:156:in `<module:App>'
web_1 | /usr/share/passenger/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
web_1 | /usr/share/passenger/helper-scripts/rack-preloader.rb:29:in `<main>'</pre>
web_1 |
web_1 |
web_1 | [ 2016-02-08 04:18:44.6935 31/7ff293143700 age/Cor/Con/CheckoutSession.cpp:277 ]: [Client 1-2] Cannot checkout session because a spawning error occurred. The identifier of the error is d3103e16. Please see earlier logs for details about the error.
This is my Dockerfile:
FROM phusion/passenger-ruby22:0.9.18
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# Enable Nginx/Passenger
RUN rm -f /etc/service/nginx/down
# Enable portals virtual host
RUN rm /etc/nginx/sites-enabled/default
COPY portals.conf /etc/nginx/sites-enabled/portals.conf
RUN mkdir /home/app/webapp
# Load env vars into nginx
COPY rails-env.conf /etc/nginx/main.d/rails-env.conf
# Install gems dependencies
COPY Gemfile* /tmp/
WORKDIR /tmp
RUN bundle install
# Copy rails app
WORKDIR /home/app/webapp
COPY . ./
RUN chown -R app:app ./
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
I tried running bundler as RUN bundle install --deployment but that didn't work either. I am passing RAILS_ENV and PASSENGER_APP_ENV via the rails-env.conf file, and they are both set to production (which is the default according to the Passenger image docs).
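For reference, a rails-env.conf under /etc/nginx/main.d/ conventionally just declares the environment variables nginx should pass through to Passenger; a sketch of what such a file typically contains (not copied from the post):
# /etc/nginx/main.d/rails-env.conf -- expose these variables to the app
env RAILS_ENV=production;
env PASSENGER_APP_ENV=production;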
If I docker exec -it <ID> bash into the container and run gem list, I see that all the gems are installed, so I don't know what's wrong.
This error is due to out-of-date software. Because the passenger images are not updated frequently, it is important to bring everything up to date in your Dockerfile. This is how I generally set up a Dockerfile based on a phusion image:
FROM phusion/passenger-ruby22:0.9.18
ENV SYSTEM_UPDATE=1
RUN apt-get update \
&& apt-get upgrade -y -o Dpkg::Options::="--force-confold" \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /home/app
COPY Gemfile /home/app/Gemfile
COPY Gemfile.lock /home/app/Gemfile.lock
RUN gem update --system && \
gem update bundler && \
bundle install --jobs 4 --retry 5
# The rest of your app setup here
ENTRYPOINT ["/sbin/my_init", "--"]
SYSTEM_UPDATE is just a cache buster variable. When I bump that up all the packages will be updated on the next docker build. It should be bumped frequently.
I also ensure gem and bundler are fully up to date before running bundle install.
Also, there is no benefit to copying your Gemfile and Gemfile.lock to the tmp directory; just copy them to your application directory.
You can remove your final "Clean up APT when done" command - that's really not the right place for it. There should be a single RUN line that runs all the apt-get commands in a single layer.
Take a look at https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/ for the best practices around setting up a Dockerfile, especially the sections about using apt-get.
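For example, the nginx install and the cleanup from the question's Dockerfile could be collapsed into one layer along these lines (a sketch, not the exact original commands):
RUN apt-get update \
    && apt-get install -y nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*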
For me, rake was there; I fixed it by running:
rake rails:update
Have fun!
