I am trying to "dockerize" an existing Rails development app. This is my first time experimenting with Docker.
I want to set up Guard to listen for file changes and run relevant specs.
The Guard service appears to be running correctly, and the logs show:
guard_1 | 16:35:12 - INFO - Guard is now watching at '/app'
But when I edit and save spec files, Guard does not run any tests.
This is an existing app that I'm moving into Docker. It has a guardfile that works outside of Docker.
I've searched and read a number of posts (e.g. this one), but I'm not sure where to start debugging this. Can anyone point me in the right direction so I can get Guard listening for file changes?
My docker-compose.yml looks like this:
version: '3'
services:
  postgres:
    ports:
      - "5432:5432"
    volumes:
      - $HOME/postgres-data:/var/lib/postgresql
    image: postgres:9.6.9
  redis:
    ports:
      - "6379:6379"
    depends_on:
      - postgres
    image: redis:5.0-rc
  web:
    build: .
    ports:
      - "3000:3000"
    command: /bin/sh -c "rails s -b 0.0.0.0 -p 3000"
    depends_on:
      - postgres
      - redis
    env_file:
      - .env
  guard:
    build: .
    env_file:
      - .env
    command: bundle exec guard --no-bundler-warning --no-interactions
  sidekiq:
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - postgres
      - redis
    env_file:
      - .env
volumes:
  redis:
  postgres:
  sidekiq:
  guard:
My Guardfile:
guard 'spring', bundler: true do
  watch('Gemfile.lock')
  watch(%r{^config/})
  watch(%r{^spec/(support|factories)/})
  watch(%r{^spec/factory.rb})
end

guard :rspec, cmd: "bundle exec rspec" do
  require "guard/rspec/dsl"
  dsl = Guard::RSpec::Dsl.new(self)

  # RSpec files
  rspec = dsl.rspec
  watch(rspec.spec_files)

  # Ruby files
  ruby = dsl.ruby
  dsl.watch_spec_files_for(ruby.lib_files)

  # Rails files
  rails = dsl.rails(view_extensions: %w(erb haml slim))
  dsl.watch_spec_files_for(rails.app_files)
  dsl.watch_spec_files_for(rails.views)

  watch(rails.controllers) do |m|
    [
      rspec.spec.call("routing/#{m[1]}_routing"),
      rspec.spec.call("controllers/#{m[1]}_controller"),
      rspec.spec.call("acceptance/#{m[1]}")
    ]
  end

  # Rails config changes
  watch(rails.spec_helper) { rspec.spec_dir }
  watch(rails.routes) { "#{rspec.spec_dir}/routing" }
  watch(rails.app_controller) { "#{rspec.spec_dir}/controllers" }

  # Capybara features specs
  watch(rails.view_dirs) { |m| rspec.spec.call("features/#{m[1]}") }
  watch(rails.layouts) { |m| rspec.spec.call("features/#{m[1]}") }

  # Turnip features and steps
  watch(%r{^spec/acceptance/(.+)\.feature$})
  watch(%r{^spec/acceptance/steps/(.+)_steps\.rb$}) do |m|
    Dir[File.join("**/#{m[1]}.feature")][0] || "spec/acceptance"
  end

  ignore %r{^spec/support/concerns/}
end

guard 'brakeman', :run_on_start => true do
  watch(%r{^app/.+\.(erb|haml|rhtml|rb)$})
  watch(%r{^config/.+\.rb$})
  watch(%r{^lib/.+\.rb$})
  watch('Gemfile')
end
I'm assuming you're making changes on your local filesystem and expecting Guard, inside the container, to trigger.
If so, the missing link is your docker-compose.yml file.
guard:
  build: .
  env_file:
    - .env
  command: bundle exec guard --no-bundler-warning --no-interactions
  volumes:
    - .:/app
You need to mount your project root (the Rails root) as a volume inside the container so that changes are reflected. Without this line, your containers only see what was available at build time, not the changes.
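To confirm the mount is working before digging further, here is a quick sanity check (a sketch, assuming the service is named guard and the app lives at /app, as in your compose file):

# on the host, in the Rails root
touch tmp/guard_mount_check

# inside the guard container; the file should appear immediately
docker-compose exec guard ls /app/tmp/guard_mount_check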
I faced the exact same problem and worked through existing solutions that didn't really help. The key to your problem (if you are on OSX, of course) is to understand the difference between "Docker Toolbox" and "Docker for Mac".
This article gives a lot of insight: https://docs.docker.com/docker-for-mac/docker-toolbox/
TL;DR
If you are on a Mac, you need to use Docker for Mac to get the benefits of osxfs. If you do this, you will not need docker-sync!
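If filesystem events still don't cross the host/container boundary on your setup, a common fallback (not a fix for the underlying issue) is to have Guard poll for changes instead of waiting for inotify events, via Guard's --force-polling flag. Applied to the compose file above, that would be:

guard:
  build: .
  env_file:
    - .env
  volumes:
    - .:/app
  # -p / --force-polling tells Listen to poll the filesystem instead of
  # relying on change events, which some host mounts don't forward
  command: bundle exec guard --force-polling --no-bundler-warning --no-interactions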
Related
As the title says, I have 3 containers running in Docker: one for Rails, one for a Postgres DB, and one for Redis. I'm able to enqueue jobs by doing Job.perform_async, but for some reason my jobs stay in the enqueued state indefinitely. I checked, and my Redis container is up and running.
My Job:
class HardJob
  include Sidekiq::Job

  def perform(*args)
    puts 'HardJob'
  end
end
The initializer for sidekiq:
Sidekiq.configure_server do |config|
  config.redis = { url: (ENV["REDIS_URL"] || 'redis://localhost:6379') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: (ENV["REDIS_URL"] || 'redis://localhost:6379') }
end
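One quick way to see where the jobs pile up (a sketch, using the service names from the compose file below) is to check what the web container actually sees and to inspect Redis directly; Sidekiq keeps enqueued jobs in a Redis list named queue:<queue name>:

# which Redis URL does the Rails container actually get?
docker-compose exec web bin/rails runner "puts ENV['REDIS_URL'].inspect"

# how many jobs are sitting in the default queue?
docker-compose exec redis redis-cli llen queue:default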
My docker-compose:
version: '3.0'
services:
  web:
    build: .
    entrypoint: >
      bash -c "
      rm -f tmp/pids/server.pid
      && bundle exec rails s -b 0.0.0.0 -p 3000"
    ports:
      - 3000:3000
    volumes:
      - .:/src/myapp
    depends_on:
      - db
      - redis
    links:
      - "db:db"
    environment:
      REDIS_URL: 'redis://redis:6379'
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: 'postgres'
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  redis:
    image: "redis"
volumes:
  db_data:
  redis:
    driver: local
I also set config.active_job.queue_adapter = :sidekiq in all three environments.
Any hint of what could be happening here? Thanks in advance.
Update
It seems that running sidekiq -q default in my Rails terminal worked. How can I configure Docker to always run Sidekiq?
Sidekiq is a process of its own and needs to be started on its own, just like the web server process. Add something like the following to your docker-compose.yml:
sidekiq:
  depends_on:
    - 'db'
    - 'redis'
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src/myapp
  environment:
    # must match the key your initializer reads (ENV["REDIS_URL"]) and point
    # at the same Redis database the web service enqueues to
    - REDIS_URL=redis://redis:6379
Or, when you are able to use a recent version of Sidekiq (>= 7.0), you might want to try out the new Sidekiq embedded mode, which runs Sidekiq together with your Puma web server.
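For reference, a rough sketch of the embedded API (assuming Sidekiq >= 7.0; wiring it into Puma's lifecycle hooks is covered by the Sidekiq docs):

require "sidekiq"

# start a small Sidekiq instance inside the current process;
# keep concurrency low, since it shares the web server's resources
embedded = Sidekiq.configure_embed do |config|
  config.queues = %w[default]
  config.concurrency = 2
end
embedded.run                # process jobs on background threads
at_exit { embedded.stop }   # drain and shut down cleanly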
Sidekiq is looking for the wrong queue name for some reason. Try adding this
to your config/sidekiq.yml file.
:queues:
  - default
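Alternatively, you can make the queue explicit on the job itself; sidekiq_options is standard Sidekiq API, and the queue name just has to match one the worker process listens to:

class HardJob
  include Sidekiq::Job
  # jobs go to "default" unless told otherwise; making it explicit
  # guards against a worker that only listens to other queues
  sidekiq_options queue: "default"

  def perform(*args)
    puts 'HardJob'
  end
end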
This is my docker-compose.yml for my Rails app.
As you can see from command:, I put rails s into the background and then run webpack-dev-server, so that my assets get compiled quickly during development.
version: '3'
services:
  app:
    depends_on:
      - 'db'
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    command: bash -c "rm -f tmp/pids/server.pid &&
      bundle exec rails s -p 3000 -b '0.0.0.0' & ./bin/webpack-dev-server"
    ports:
      - '3000:3000'
    env_file: .env
    # For byebug to work
    stdin_open: true
    tty: true
This worked fine for me until I wanted to debug using byebug. When I docker attach, it just doesn't give me a console to interact with. So I have to comment out the & ./bin/webpack-dev-server part every time I debug.
So how do you usually run webpack-dev-server when developing on Docker?
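One common pattern (a sketch, not taken from your setup; the webpack port and paths are assumptions) is to give webpack-dev-server its own service built from the same image, so the Rails container keeps a single foreground process you can attach to for byebug:

services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    ports:
      - '3000:3000'
    env_file: .env
    stdin_open: true
    tty: true
  webpack:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    command: ./bin/webpack-dev-server
    ports:
      - '3035:3035'  # webpack-dev-server's usual default; adjust to your config

With that split, docker attach (or docker-compose exec app bash) reaches the Rails process alone.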
My docker-compose file:
web:
  build: .
  command: bundle exec rails s -b 0.0.0.0 -p 3000
  volumes:
    - .:/app
  ports:
    - "3000:3000"
  links:
    - db
    - db-test
  depends_on:
    - db
    - db-test
Then I usually log into the container with
docker-compose exec web bash
and then run
rake jobs:work
Is it possible to run both things and skip the last step?
If you have two long-running tasks you want to run on the same code base, you can run two separate containers off the same image. In Docker Compose syntax, that would look like:
version: '3'
services:
  db: { ... }
  db-test: { ... }
  web:
    build: .
    command: bundle exec rails s -b 0.0.0.0 -p 3000
    ports:
      - "3000:3000"
    depends_on:
      - db
      - db-test
  worker:
    build: .
    command: bundle exec rake jobs:work
    depends_on:
      - db
      - db-test
Running multiple tasks in one container is kind of tricky and isn't usually recommended. The form you arrived at in the comments, for example, launches the Rails server as a background task and then makes the worker the main container process; if for some reason the main Rails application dies, Docker won't notice, and if the worker dies, it will take the Rails application down with it.
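For contrast, that single-container form would look roughly like this (a reconstruction from the description above, not recommended):

web:
  build: .
  # rails s is backgrounded with &, so rake jobs:work becomes the main
  # container process; Docker only supervises the worker, and a crashed
  # Rails server goes unnoticed
  command: bash -c "bundle exec rails s -b 0.0.0.0 -p 3000 & bundle exec rake jobs:work"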
I have a Ruby on Rails project that I want to place into containers (there are database, Redis, and web (which includes the Rails project) containers). I want to add a search feature, so I added a Sphinx container to my compose file.
docker-compose.yml
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    **- sphinx**
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
**sphinx:
  image: centurylink/sphinx**
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get:
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can I fix this?
According to https://hub.docker.com/r/centurylink/sphinx/, the Sphinx container needs some configuration files to run properly. See the "Daemonized usage (2)" section: you need data source files and a configuration.
In my test, it fails to start as-is with this error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
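Since the daemon looks for /usr/local/etc/sphinx.conf inside the container, one way to supply a configuration (a sketch; the host-side path is an assumption) is to mount your own file there:

sphinx:
  image: centurylink/sphinx
  volumes:
    # host path is hypothetical; the container path comes from the error above
    - ./sphinx/sphinx.conf:/usr/local/etc/sphinx.conf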
Your docker-compose.yml shouldn't have these * in it.
If you want the latest Sphinx version, you can do this:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    - sphinx
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
sphinx:
  image: centurylink/sphinx:latest
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
If you want a specific version, you write it this way: centurylink/sphinx:2.1.8
I am following https://semaphoreci.com/community/tutorials/dockerizing-a-ruby-on-rails-application to create a sample Rails app from the Rails Docker image. The idea is to dockerize a Rails application. I have created a .drkiq.env file in the Rails app root directory, in Docker's recommended KEY=value format, as given below:
SECRET_TOKEN=asecuretokenwouldnormallygohere
WORKER_PROCESSES=1
LISTEN_ON=0.0.0.0:8000
DATABASE_URL=postgresql://drkiq:yourpassword@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000
CACHE_URL=redis://redis:6379/0
JOB_WORKER_URL=redis://redis:6379/0
I am reading the environment file from my docker-compose.yml file (also residing in the app root directory):
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: drkiq
    POSTGRES_PASSWORD: yourpassword
  ports:
    - '5432:5432'
  volumes:
    - drkiq-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - drkiq-redis:/var/lib/redis/data
drkiq:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - .:/drkiq
  ports:
    - '8000:8000'
  env_file:
    - .drkiq.env
sidekiq:
  build: .
  command: bundle exec sidekiq -C config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - .:/drkiq
  env_file:
    - .drkiq.env
Inside my Dockerfile (residing in the app root directory), I am running the Unicorn server:
CMD bundle exec unicorn -c config/unicorn.rb
But when I run the command
docker-compose up
and access http://my-host:8000/, it gives me a "RuntimeError at /
Missing secret_token and secret_key_base for 'development' environment, set these values in config/secrets.yml" error. I am not sure what I am missing here.
My bad. In my Rails app I was actually looking for the SECRET_KEY_BASE variable, but in the .drkiq.env file (as pasted above) I was setting SECRET_TOKEN. I replaced SECRET_TOKEN with SECRET_KEY_BASE, restarted Docker, and everything was shiny and warm.
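For anyone hitting the same error, the fix was a one-line change in .drkiq.env:

# before (Rails never reads this key):
# SECRET_TOKEN=asecuretokenwouldnormallygohere
# after:
SECRET_KEY_BASE=asecuretokenwouldnormallygohere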