I'm new to Docker, and trying to work out why my Docker setup is hanging and not connecting like I expect it to.
I'm running
Docker version 18.09.2, build 6247962
docker-compose version 1.23.2, build 1110ad01
OSX 10.14.5
My setup is based on this Gist that I found.
I've reduced it somewhat, to better demonstrate the issue.
Dockerfile
FROM ruby:2.4
ARG DEBIAN_FRONTEND=noninteractive
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" >> /etc/apt/sources.list.d/postgresql.list \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& apt-get update \
&& apt-get install -y --no-install-recommends apt-utils \
&& apt-get install -y build-essential \
&& apt-get install -y nodejs \
&& apt-get install -y --no-install-recommends \
postgresql-client-9.6 pv ack-grep ccze unp htop vim \
&& apt-get install -y libxml2-dev libxslt1-dev \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get purge -y --auto-remove
# Set environment
ENV APP_HOME /usr/src/app
ENV BUNDLER_VERSION 2.0.2
# Setup bundler
RUN gem install bundler -v $BUNDLER_VERSION
WORKDIR $APP_HOME
EXPOSE 7051
CMD ["bundle", "exec", "puma", "-p", "7051", "-C", "config/puma.rb"]
docker-compose.yml
version: '3.1'
services:
  app: &app_base
    build: .
    working_dir: /usr/src/app
    volumes:
      - .:/usr/src/app
      # to be able to forward ssh-agent to github through capistrano (bundle on server)
      - "~/.ssh/id_rsa:/root/.ssh/id_rsa"
      - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
    environment: &app_environment
      # to keep bundle effect between container restarts (without rebuild):
      BUNDLE_PATH: /usr/src/app/.bundle
      BUNDLE_APP_CONFIG: /usr/src/app/.bundle
      DATABASE_HOST: db
      SSH_AUTH_SOCK: # left empty: copies the value from the host environment
    env_file: '.env'
    ports:
      - "7051:7051"
    depends_on:
      - db
  db:
    image: postgres:9.5.17
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: my_project_development
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
config/database.yml
development:
  adapter: postgresql
  encoding: unicode
  pool: 5
  database: my_project_development
  username: root
  password: root
  host: db
config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count
# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port ENV.fetch("PORT") { 7051 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
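As an aside, the ENV.fetch calls above take a block that supplies a default when the variable is unset; a quick standalone illustration (plain Ruby, independent of Puma):

```ruby
# ENV.fetch returns the variable's value when it is set;
# otherwise it evaluates the block and returns that result.
ENV.delete("PORT")                      # make sure PORT is unset for the demo
port = ENV.fetch("PORT") { 7051 }.to_i  # -> 7051 (the block default)

ENV["PORT"] = "3000"
overridden = ENV.fetch("PORT") { 7051 }.to_i  # -> 3000 (the env var wins)
```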
So what I'm doing is:
1. Run docker-compose build to first build the images & containers
2. Run docker-compose run --rm app bundle install to install the gems
3. Run docker-compose run --rm app bundle exec rake db:create db:migrate db:seed to create/migrate/seed the database
Step 3. is the step I am stuck on. It just hangs there with no feedback:
docker-compose run --rm app bundle exec rake db:create db:migrate db:seed
Starting my_project_db_1 ... done
I know the database is running, as I can connect to it locally.
I can also log into the app container, and connect via psql, so I know that the app container can talk to the db container:
docker exec -it f6d6edadaed4 /bin/bash
root@f6d6edadaed4:/usr/src/app# psql "postgresql://root:root@db:5432/my_project_development"
psql (9.6.14, server 9.5.17)
Type "help" for help.
my_project_development=# \dt
No relations found.
If I try to boot the app with docker-compose up, then it also just hangs:
app_1 | Puma starting in single mode...
app_1 | * Version 3.11.4 (ruby 2.4.6-p354), codename: Love Song
app_1 | * Min threads: 5, max threads: 5
app_1 | * Environment: ci
I.e. puma would normally show a 'listening' message once connected:
* Listening on tcp://0.0.0.0:7051
Use Ctrl-C to stop
But it's not getting to that point, it just hangs.
What could be going on? Why can't my Rails container just connect to the PostgreSQL container and have puma boot normally?
MORE INFORMATION:
I've now learned that if I wait 10+ minutes, it does eventually boot!
During that 10 mins, my CPU fans are spinning like crazy, so it's really thinking about something.
But when it finishes, the CPU fans shut off, and puma has booted and I can access it locally at http://127.0.0.1:7051 like I would expect.
Why would it be so slow to startup? My machine is otherwise pretty fast.
I think Docker on OSX is just extremely slow. I've since read about some performance issues here
Adding the :cached option to the volume mount seems to have reduced the boot time to ~2 minutes:
version: '3.1'
services:
  app: &app_base
    build: .
    working_dir: /usr/src/app
    volumes:
      - .:/usr/src/app:cached
...
Still not great, in my opinion. I'd love to know if there's anything else that can be done.
I found an actual working answer to this, which I also posted here: https://stackoverflow.com/a/58603025/172973
Basically, see the article here to see how to properly setup Dockerfile and docker-compose.yml, so that it performs well on OSX.
The main thing to understand:
To make Docker fast enough on MacOS follow these two rules: use :cached to mount source files and use volumes for generated content (assets, bundle, etc.).
So if anyone else comes across this, just follow the article or see my other answer.
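For reference, the shape of the fix implied by those two rules looks roughly like this (a sketch on my part, not the article's exact file; the volume name and paths are illustrative):

```yaml
version: '3.1'
services:
  app:
    build: .
    volumes:
      - .:/usr/src/app:cached          # source files: cached bind mount
      - bundle:/usr/src/app/.bundle    # generated content (gems): named volume
volumes:
  bundle:
```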
I am trying to set up my development environment in rails with docker compose. Getting an error saying
ActiveRecord::AdapterNotSpecified: 'development' database is not configured. Available: []
Dockerfile:
# syntax=docker/dockerfile:1
FROM ruby:2.5.8
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN apt-get install cron -y
RUN apt-get install vim -y
RUN export EDITOR="/usr/bin/vim"
RUN addgroup deploy && adduser --system deploy && adduser deploy deploy
USER deploy
WORKDIR /ewagers
RUN (crontab -l 2>/dev/null || true; echo "*/5 * * * * /config/schedule.rb -with args") | crontab -
COPY Gemfile .
COPY Gemfile.lock .
RUN gem install bundler -v 2.2.27
RUN bundle install
COPY . .
USER root
COPY docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh
COPY wait-for-it.sh /usr/bin/
RUN chmod +x /usr/bin/wait-for-it.sh
RUN chown -R deploy *
RUN chmod 644 app
RUN chmod u+x app
RUN whenever --update-crontab ewagers --set environment=production
COPY config/database.example.yml ./config/database.yml
RUN mkdir data
ARG RAILS_MASTER_KEY
RUN printenv
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
database.example.yml:
# database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  host: db
  username: postgres
  password: ewagers
  pool: 5

development:
  <<: *default
  database: postgres
docker-compose.yml:
version: "3.9"
services:
  app:
    build: .
    command: docker-entrypoint.sh
    ports:
      - 4000:3000
    environment:
      DB_URL: postgres://db/ewagers_dev # db is the host, ewagers_dev is the db name
      RAILS_ENV: development
    volumes:
      - .:/ewagers # mapping our current directory to the ewagers directory in the container
      # - ewagers-sync:/ewagers:nocopy
    image: ksun/ewagers:latest
    depends_on:
      - db
  db:
    image: postgres:12
    volumes:
      - ewagers_postgres_volume:/var/lib/postgresql/data # default storage location for postgres
    environment:
      POSTGRES_PASSWORD: ewagers
    ports:
      - 5432:5432 # default postgres port
volumes: # we specify a named volume so postgres does not write data to its container's temporary storage
  ewagers_postgres_volume:
I have double-checked indentation and spacing, and done a docker build to make sure database.example.yml is being copied to database.yml. Yet Rails seemingly can't even find my development configuration in database.yml.
What's interesting is that if I create a database.yml file locally with the same contents as database.example.yml, it works. But it should work without that, since I am copying database.example.yml to database.yml in the Dockerfile.
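One thing worth checking here (my observation, not part of the original post): the compose file bind-mounts the project directory over /ewagers, so at runtime the container sees the host's files and the database.yml that was COPYed during the build is hidden by the mount. A common workaround is to materialize the file from the entrypoint, after the mount is in place. The snippet below simulates that logic in a scratch directory (paths are illustrative):

```shell
# Simulate the entrypoint logic: create database.yml at container start,
# after any bind mount is in place, so the mount cannot hide it.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/config"
echo "development:" > "$workdir/config/database.example.yml"
cd "$workdir"
# entrypoint logic: only copy if the file is not already present
[ -f config/database.yml ] || cp config/database.example.yml config/database.yml
cat config/database.yml
```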
I am trying to create my Rails application in a Docker environment. I have used volumes to mount source directories from the host at a targeted path inside the container. The application is in the development phase, and I need to continuously add new gems to it. When I install a gem from a bash session inside the running container, it installs the gem and the required dependencies. But when I remove the running containers (docker-compose down) and then instantiate them again (docker-compose up), my Rails web image shows errors about missing gems. I know rebuilding the image will add the gems, but is there any way to add gems without rebuilding the image?
I followed the docker-compose docs for setting up the Rails app:
https://docs.docker.com/compose/rails/#define-the-project
Dockerfile
FROM ruby:2.7.1-slim-buster
LABEL MAINTAINER "Prayas Arora" "<prayasa@mindfiresolutions.com>"
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update \
&& apt-get install -qq -y --no-install-recommends \
build-essential \
libpq-dev \
netcat \
postgresql-client \
nodejs \
&& rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/www/repository/repository_api
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY ./repository_api/Gemfile $APP_HOME/Gemfile
COPY ./repository_api/Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install
# Copy the main application.
COPY ./repository_api $APP_HOME
# Add a script to be executed every time the container starts.
COPY ./repository_docker/development/repository_api/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["rails","server","-b","0.0.0.0"]
docker-compose.yml
container_name: repository_api
build:
  context: ../..
  dockerfile: repository_docker/development/repository_api/Dockerfile
user: $UID
env_file: .env
stdin_open: true
environment:
  DB_NAME: ${POSTGRES_DB}
  DB_PASSWORD: ${POSTGRES_PASSWORD}
  DB_USER: ${POSTGRES_USER}
  DB_HOST: ${POSTGRES_DB}
volumes:
  - ../../repository_api:/var/www/repository/repository_api
networks:
  - proxy
  - internal
depends_on:
  - repository_db
A simple solution is to cache the gems in a Docker volume. You can create a volume and attach it to the path where the gems are bundled. This maintains shared state, and you will not need to install the gems in every container you spin up.
container_name: repository_api
build:
  context: ../..
  dockerfile: repository_docker/development/repository_api/Dockerfile
user: $UID
env_file: .env
stdin_open: true
environment:
  DB_NAME: ${POSTGRES_DB}
  DB_PASSWORD: ${POSTGRES_PASSWORD}
  DB_USER: ${POSTGRES_USER}
  DB_HOST: ${POSTGRES_DB}
volumes:
  - ../../repository_api:/var/www/repository/repository_api
  - bundle_cache:/usr/local/bundle
networks:
  - proxy
  - internal
...
volumes:
  bundle_cache:
Also, according to bundler.io, the official Docker images for Ruby assume that you will run only one application, with one Gemfile, and that no other gems or Ruby applications will be installed or run in your container. So once you have added all the gems your application needs, you can remove this bundle_cache volume and rebuild your image with your final Gemfile.
Below is the Dockerfile in the project's root directory:
FROM ruby:2.2
MAINTAINER technologies.com
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y libxml2-dev libxslt1-dev
RUN apt-get install -y libqt4-webkit libqt4-dev xvfb
RUN apt-get install -y nodejs
ENV INSTALL_PATH /as_app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile Gemfile
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Below are the contents of the docker-compose.yml file in the project's root directory:
as_web:
  build: .
  environment:
    - RAILS_ENV=development
    - QUEUE=*
    - REDIS_URL=redis://redis:6379
  volumes:
    - .:/as_app
  ports:
    - 3000:3000
  links:
    - as_mongo
    - as_redis
  command: rails server -b 0.0.0.0
as_mongo:
  image: mongo:latest
  ports:
    - "27017:27017"
as_redis:
  image: redis
  ports:
    - "6379:6379"
as_worker:
  build: .
  environment:
    - QUEUE=*
    - RAILS_ENV=development
    - REDIS_URL=redis://redis:6379
  volumes:
    - .:/as_app
  links:
    - as_mongo
    - as_redis
  command: bundle exec rake environment resque:work
Docker version 1.11.2, docker-machine version 0.8.0-rc1, docker-compose version 1.8.0-rc1, ruby 2.2.5, rails 4.2.4.
My problem is:
1) When I build the image with "docker-compose build" from the project root directory, the image builds successfully with the gems installed.
2) But when I do "docker-compose up", the as_web and as_worker services exit with codes 1 and 10 respectively, with an error that no Gemfile or .bundler was found. When I log into the image through bash, no project files are visible in the working directory.
3) What I want to know:
i) When I start a terminal, I start the VirtualBox instance manually with "docker-machine start default".
ii) Then I execute "eval $(docker-machine env dev)" to point the current shell at the VirtualBox Docker daemon. After this, when I do "docker build -t as_web .", the terminal prints a message like "sending current build context to docker daemon".
a) Does this message mean the build is being done in VirtualBox?
b) If I do "docker-compose build", no such "sending..." message appears. Does docker-compose also point to the Docker daemon in the VirtualBox VM, or is the image built on localhost (my Ubuntu OS)? I'm a little confused.
Hoping you understood the details. If you need any extra info, let me know. Thank you all, and happy coding.
docker-compose build and docker build both do the same thing: they use the Docker Engine API to build an image inside the VirtualBox VM. The output messages are just a little different.
Your problem is because of this:
volumes:
  - .:/as_app
You're overriding the app directory with the project directory from the host. If you haven't run bundle install on the host, the gems won't be in the container when it starts.
You can fix this by running docker-compose run as_web bundle install
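If rebuilding after every new gem gets tedious, a variation on this fix (my suggestion, not part of the original answer) is to keep installed gems in a named volume mounted at Bundler's install path, so they survive container recreation. This sketch assumes the compose version 2 file format and the default gem path of the ruby:2.2 image:

```yaml
version: '2'
services:
  as_web:
    build: .
    volumes:
      - .:/as_app
      - gem_cache:/usr/local/bundle  # gems persist across container recreation
volumes:
  gem_cache:
```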
I'm using Fig and attempting to use a data volume container to share uploaded files between a Rails web server and a Resque worker running in another container. To do this, the data volume container defines a /rails/public/system volume which is meant to be used to share these files. The Rails and Resque processes run as a rails user in their respective containers, which are both based on the markb/litdistco image. Altogether, the fig.yml looks like this:
redis:
  image: redis:2.8.17
  volumes_from:
    - file
web:
  image: markb/litdistco
  command: /usr/bin/start-server /opt/nginx/sbin/nginx
  ports:
    - 80:8000
    - 443:4430
  environment:
    DATABASE_URL:
  links:
    - redis
  volumes_from:
    - file
worker:
  image: markb/litdistco
  command: /usr/bin/start-server "bundle exec rake environment resque:work QUEUE=litdistco_offline RAILS_ENV=production"
  environment:
    DATABASE_URL:
  links:
    - redis
  volumes_from:
    - file
file:
  image: markb/litdistco
  command: echo "datastore"
  volumes:
    - /var/redis
    - /rails/log
    - ./config/container/ssl:/etc/ssl
When the web and worker containers are running, I can see the /rails/public/system directory in both; however, it is owned by the root user in both containers, and the permissions on the directory prevent the rails user from writing to it.
For reference there are two Dockerfiles which go into making the markb/litdistco container. The first defines a base image I use for local development (Dockerfile):
# This Dockerfile is based on the excellent blog post by SteveLTN:
#
# http://steveltn.me/blog/2014/03/15/deploy-rails-applications-using-docker/
#
# KNOWN ISSUES:
#
# * Upgrading passenger or ruby breaks nginx directives with absolute paths
# Start from Ubuntu base image
FROM ubuntu:14.04
MAINTAINER Mark Bennett <mark@burmis.ca>
# Update package sources
RUN apt-get -y update
# Install basic packages
RUN apt-get -y install build-essential libssl-dev curl
# Install basics
RUN apt-get -y install tmux vim
RUN apt-get install -y libcurl4-gnutls-dev
# Install libxml2 for nokogiri
RUN apt-get install -y libxslt-dev libxml2-dev
# Install mysql-client
RUN apt-get -y install mysql-client libmysqlclient-dev
# Add RVM key and install requirements
RUN command curl -sSL https://rvm.io/mpapis.asc | gpg --import -
RUN curl -sSL https://get.rvm.io | bash -s stable
RUN /bin/bash -l -c "rvm requirements"
# Create rails user which will run the app
RUN useradd rails --home /rails --groups rvm
# Create the rails users home and give them permissions
RUN mkdir /rails
RUN chown rails /rails
RUN mkdir -p /rails/public/system
RUN chown rails /rails/public/system
# Add configuration files in repository to filesystem
ADD config/container/start-server.sh /usr/bin/start-server
RUN chown rails /usr/bin/start-server
RUN chmod +x /usr/bin/start-server
# Make a directory to contain nginx and give rails user permission
RUN mkdir /opt/nginx
RUN chown rails /opt/nginx
# Switch to rails user that will run app
USER rails
# Install rvm, ruby, bundler
WORKDIR /rails
ADD ./.ruby-version /rails/.ruby-version
RUN echo "gem: --no-ri --no-rdoc" > /rails/.gemrc
RUN /bin/bash -l -c "rvm install `cat .ruby-version`"
RUN /bin/bash -l -c "gem install bundler --no-ri --no-rdoc"
# Install nginx
RUN /bin/bash -l -c "gem install passenger --no-ri --no-rdoc"
RUN /bin/bash -l -c "passenger-install-nginx-module"
ADD config/container/nginx-sites.conf.TEMPLATE /opt/nginx/conf/nginx.conf.TEMPLATE
ADD config/container/set-nginx-paths.sh /rails/set-nginx-paths.sh
RUN /bin/bash -l -c "source /rails/set-nginx-paths.sh"
# Copy the Gemfile and Gemfile.lock into the image.
# Temporarily set the working directory to where they are.
WORKDIR /tmp
ADD Gemfile Gemfile
ADD Gemfile.lock Gemfile.lock
# bundle install
RUN /bin/bash -l -c "bundle install"
# Add rails project to project directory
ADD ./ /rails
# set WORKDIR
WORKDIR /rails
# Make sure rails has the right owner
USER root
RUN chown -R rails:rails /rails
# Publish ports
EXPOSE 3000
EXPOSE 4430
EXPOSE 8000
This is tagged as the litdistco-base image, then I use config/containers/production/Dockerfile to generate the image that I tag as markb/litdistco and run in staging and production.
# Start from LitDistCo base image
FROM litdistco-base
MAINTAINER Mark Bennett <mark@burmis.ca>
USER rails
# Setup volumes used in production
VOLUME ["/rails/log", "/rails/public/system"]
# Build the application assets
WORKDIR /rails
RUN /bin/bash -l -c "touch /rails/log/production.log; chmod 0666 /rails/log/production.log"
RUN /bin/bash -l -c "source /etc/profile.d/rvm.sh; bundle exec rake assets:precompile"
Can anyone explain how I can get the data container volume to mount as writable by the rails user? I'd very much like to avoid running any of the Ruby processes as root, even inside a container.
For some context I should also mention that I'm developing the images in Docker in boot2docker on Mac OS X, then running them on a Google Compute Engine instance on an Ubuntu 14.04 host. Thanks!
I would modify your image a little. Write a shell script that wraps the /usr/bin/start-server command in your fig.yml, and place it inside your container.
Then you can chown rails anything you need before starting the server.
Running the container as a default user of rails isn't really needed either, as long as you start the server as the rails user: sudo -u rails /usr/bin/start-server (or something like that).
I haven't personally used the litdistco-base image, so I don't know all the specifics of how it works.
I think you need to modify the litdistco-base image in the following way so both directories are owned by rails:
# Start from LitDistCo base image
FROM litdistco-base
MAINTAINER Mark Bennett <mark@burmis.ca>
RUN mkdir -p /rails/log
RUN mkdir -p /rails/public/system
RUN chown -R rails:rails /rails/log /rails/public/system
USER rails
# Setup volumes used in production
VOLUME ["/rails/log", "/rails/public/system"]
# Build the application assets
WORKDIR /rails
RUN /bin/bash -l -c "touch /rails/log/production.log; chmod 0666 /rails/log/production.log"
RUN /bin/bash -l -c "source /etc/profile.d/rvm.sh; bundle exec rake assets:precompile"
I'm trying to get my Rails application running with Docker and Fig. It uses a Redis server, MongoDB, Postgres, and nginx as well.
Here is what my fig.yml looks like:
pg:
  image: docker-index.my.com/postgres
  ports:
    - 5432
redis:
  image: docker-index.my.com/redis
  ports:
    - 6379
mongodb:
  image: docker-index.my.com/mongodb
  ports:
    - 27017
app:
  build: .
  command: bundle exec rails s
  volumes:
    - .:/beesor
  ports:
    - 3000:3000
  links:
    - pg
    - redis
    - mongodb
  environment:
    RAILS_ENV: production
Everything works fine until the app starts. Because Rails initializers run on server start, I then get errors about the database connection: the database does not exist! Of course it doesn't, because it was not created in the Dockerfile (see below).
Dockerfile contents:
# DOCKER-VERSION 0.10.0
FROM docker-index.my.com/ruby:1.9.3
MAINTAINER my.com
RUN apt-get update -qq && apt-get install -y git-core xvfb curl nodejs libqt4-dev libgtk2.0-0 libgtkmm-3.0-1 libnotify4 sqlite3 libsqlite3-dev graphicsmagick imagemagick subversion libpq-dev libxml2-dev libxslt-dev git build-essential
RUN mkdir /my_app
WORKDIR /my_app
RUN gem install bundler
ADD Gemfile /my_app/Gemfile
ADD Gemfile.lock /my_app/Gemfile.lock
RUN bundle install
RUN bundle pack --all
ADD . /my_app
I don't see a place where I can put the rake db:create db:migrate db:seed commands! If I put them at the end of the Dockerfile, then when fig tries to build the app it complains that the database server does not exist (at the time fig builds the app container, the other containers are not started), and I could not fix this by changing the order in fig.yml.
I'm facing a chicken-and-egg problem here; how can I get this working?
I'm sure that all the links work perfectly, so the problem is more about the orchestration of scripts.
Found the solution!
I created a rake task to wrap what I need. It runs migrations and seeds, then starts the Rails server, so the fix is to change the command in fig.yml to this one:
command: rake my_app:setup
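The rake task itself isn't shown; a sketch of what my_app:setup might look like (the file path and task body are assumptions on my part, not from the original answer):

```ruby
# lib/tasks/my_app.rake (sketch)
namespace :my_app do
  desc "Create, migrate, and seed the database, then boot the Rails server"
  task setup: :environment do
    Rake::Task["db:create"].invoke
    Rake::Task["db:migrate"].invoke
    Rake::Task["db:seed"].invoke
    # Replace the rake process with the server so it receives signals directly
    exec "bundle exec rails server -b 0.0.0.0"
  end
end
```

This only works because by the time fig runs the service's command, the linked database container is already up, unlike at image build time.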