I am trying to set up Mina's deploy.rb file. However, I am running into several hurdles.
My Rails 5 app uses Docker for the database (PostgreSQL) and Redis for background jobs. I am also using Phusion Passenger (Nginx) for the webserver.
This is what my deploy.rb file looks like at the moment:
require 'mina/rails'
require 'mina/git'
require 'mina/rvm'
set :application_name, 'App'
set :domain, 'my_app.com'
set :deploy_to, '/var/www/my_app.com'
set :repository, 'git@github.com:MyUser/my_app.git'
set :branch, 'master'
# Username in the server to SSH to.
set :user, 'myuser'
# Shared dirs and files will be symlinked into the app-folder by the
# 'deploy:link_shared_paths' step.
# Not sure if this is necessary.
set :shared_dirs, fetch(:shared_dirs, []).push('log', 'tmp/pids', 'tmp/sockets', 'public/uploads')
set :shared_files, fetch(:shared_files, []).push('config/database.yml', 'config/secrets.yml', 'config/puma.rb')
task :environment do
  invoke :'rvm:use', 'ruby-2.4.1@default'
end
task :setup do
  %w(database.yml secrets.yml puma.rb).each { |f| command %[touch "#{fetch(:shared_path)}/config/#{f}"] }
  comment "Be sure to edit #{fetch(:shared_path)}/config/database.yml, secrets.yml and puma.rb."
end
desc "Deploys the current version to the server."
task :deploy do
  deploy do
    comment "Deploying #{fetch(:application_name)} to #{fetch(:domain)}:#{fetch(:deploy_to)}"
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    comment 'Cleaning up Docker builds'
    command 'docker stop $(docker ps -qa)'
    command 'docker rm $(docker ps -qa)'
    comment 'Stopping Docker'
    command 'docker-compose stop'
    comment 'Starting Docker'
    command 'docker-compose up -d; sleep 5'
    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'
    invoke :'deploy:cleanup'
  end
end
I came up with this file by piecing bits together from here and there. It seems to run properly. However, here are the problems I am facing:
I am not able to run passenger-config restart-app $(pwd) or passenger-config restart-app $(#{fetch(:current_path)}) from the deploy, so for now I have to log into the server and restart Passenger manually. That largely defeats the purpose of using Mina, which should automate the whole deploy.
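For reference, what I am trying to end up with is something along these lines at the end of the deploy do block (just a sketch of the goal; it assumes Passenger 5+, that passenger-config is on the deploy user's PATH, and it uses Mina's standard on :launch / in_path helpers):
on :launch do
  in_path("#{fetch(:deploy_to)}/#{fetch(:current_path)}") do
    comment 'Restarting Passenger'
    # --ignore-app-not-running keeps the very first deploy from failing
    # when there is no running app to restart yet.
    command "passenger-config restart-app #{fetch(:deploy_to)}/#{fetch(:current_path)} --ignore-app-not-running"
  end
end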
Once I start Passenger, I am seeing a database error like:
F, [2017-08-21T08:42:40.145292 #29048] FATAL -- : [28d9fb0f-f187-4d16-b3bc-f947c4ec726f]
F, [2017-08-21T08:42:40.145378 #29048] FATAL -- : [28d9fb0f-f187-4d16-b3bc-f947c4ec726f] ActiveRecord::StatementInvalid (PG::UndefinedTable: ERROR: relation "subscriptions" does not exist
LINE 8: WHERE a.attrelid = '"subscriptions"'::regclas...
This error does not occur in development, so I suspect it has to do with how I am running Docker in production.
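If it turns out to be a race between docker-compose up -d and rails:db_migrate, one option I can think of is replacing the fixed sleep 5 with an explicit readiness check (only a sketch; it assumes the postgres service name from the compose file below and that pg_isready is available inside the official postgres image):
command 'docker-compose up -d'
# Wait until Postgres actually accepts connections instead of sleeping blindly.
command %{until docker-compose exec -T postgres pg_isready -U postgres > /dev/null 2>&1; do echo 'Waiting for Postgres...'; sleep 1; done}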
Does anyone have an idea of how I can get a proper Docker + Passenger setup with Mina?
Just for the extra info, my docker-compose.yml file looks like this:
version: "2"
services:
postgres:
image: postgres:9.6
ports:
- "5432:5432"
environment:
POSTGRES_DB: "${DATABASE_NAME}"
POSTGRES_PASSWORD: "${DATABASE_PASSWORD}"
volumes:
- postgres-data:/var/lib/postgresql/data
redis:
image: redis:latest
ports:
- "6379:6379"
volumes:
postgres-data:
driver: local
Thanks in advance!
Related
I'm trying to migrate from Delayed::Job to Sidekiq, but when I run Sidekiq in Kubernetes it fails with this error:
==================================================================
Please point Sidekiq to a Rails application or a Ruby file
to load your job classes with -r [DIR|FILE].
==================================================================
Kubernetes deployment snippet:
...
containers:
  - name: sidekiq
    image: {{ application_registry }}
    imagePullPolicy: Always
    command:
      - bundle
    args:
      - exec
      - sidekiq
      - -r                           # not included in the original setting.
      - /app/config/application.rb   # not included in the original setting.
      - "-C"
      - "/app/config/sidekiq.yml"
    resources:
...
PS: A lot of existing jobs still run on Delayed::Job and we plan to migrate them progressively, so we set the Sidekiq adapter per job rather than globally:
class FirstJob < ApplicationJob
  self.queue_adapter = :sidekiq
  ...
Following some of the guides described here, I tried requiring config/application.rb via the -r flag, but nothing fixed it.
Get rid of
- -r # not included in the original setting.
- /app/config/application.rb # not included in the original setting.
- "-C"
- "/app/config/sidekiq.yml"
You need to set the current working directory to /app.
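In the Deployment that would look roughly like this (a sketch; workingDir is the standard Kubernetes field for the container's working directory, and /app is taken from the paths in your question):
containers:
  - name: sidekiq
    image: {{ application_registry }}
    imagePullPolicy: Always
    workingDir: /app   # Sidekiq then picks up config/sidekiq.yml and the Rails app on its own
    command:
      - bundle
    args:
      - exec
      - sidekiq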
I finally got this working by mounting the sidekiq.yml config file from a separate volume.
...
command:
  - bundle
args:
  - exec
  - sidekiq
  - "-C"
  - "/config/sidekiq.yml"
...
volumeMounts:
  - name: sidekiq-yml
    mountPath: /config
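The volumeMounts entry above refers to a volume that still has to be declared in the pod spec. One way to provide it is a ConfigMap holding sidekiq.yml (a sketch; the ConfigMap name here is just an example):
volumes:
  - name: sidekiq-yml
    configMap:
      name: sidekiq-config   # hypothetical ConfigMap containing sidekiq.yml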
I'm trying to dockerize a rake task that runs perfectly fine outside Docker; it creates subscribers to a pub/sub endpoint. If I just run bundle exec rake mytask:torun it works fine.
But as soon as I run it in a Docker container, it breaks. If I remove the EventMachine.run part, it prints "Initializing task" and the container exits. If I put it back, it just doesn't do anything.
Here's the task :
namespace :mytask do
  task torun: :environment do
    puts "Initializing task"
    EventMachine.run do
      client = Restforce.new(obfuscated, not useful in context)
      client.subscription "/topic/TestTopic", replay: -1 do |message|
        puts "EM Received message: #{message.inspect}"
      end
    end
  end
end
My docker-compose :
runevents:
  # restart: unless-stopped
  build: .
  command: rails mytask:torun
  env_file:
    - "dev.env"
  volumes:
    - '.:/app'
Because I'm running on a Mac and I know there are volume performance issues, I've also tried removing the volumes: part of the docker-compose file, but no luck.
Is there anything I'm missing? Why does the container just not start?
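One thing worth ruling out first: when no TTY is attached, Ruby buffers stdout, so a long-running EventMachine loop can look like it is doing nothing in docker logs even while it is subscribed. Forcing synchronous output makes that visible (a sketch; add it at the top of the task body):
# At the top of the task body, before EventMachine.run:
$stdout.sync = true   # flush `puts` output immediately so it shows up in `docker logs`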
I'm dockerizing my existing Django application.
I have an entrypoint.sh script which is run as the entrypoint by the Dockerfile:
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
It contains a script that runs the database migrations when an environment variable is set:
#!/bin/sh
#set -e

# Run the command and exit with a custom message when the command fails to run
safeRunCommand() {
    cmnd="$*"
    echo cmnd="$cmnd"
    eval "$cmnd"
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error : [code: %d] when executing command: '$cmnd'\n" $ret_code
        exit $ret_code
    else
        echo "Command run successfully: $cmnd"
    fi
}

runDjangoMigrate() {
    echo "Migrating database"
    cmnd="python manage.py migrate --noinput"
    safeRunCommand "$cmnd"
    echo "Done: Migrating database"
}

# Run Django migrate command.
# The command is run only when environment variable `DJANGO_MANAGE_MIGRATE` is set to `on`.
if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ] && [ ! "x$DEPLOYMENT_MODE" = 'xproduction' ]; then
    runDjangoMigrate
fi

# Accept other commands
exec "$@"
Now, in the docker-compose file, I have these services:
version: '3.7'
services:
  database:
    image: mysql:5.7
    container_name: 'qcg7_db_mysql'
    restart: always
  web:
    build: .
    command: ["./wait_for_it.sh", "database:3306", "--", "./docker_start.sh"]
    volumes:
      - ./src:/app
    depends_on:
      - database
    environment:
      DJANGO_MANAGE_MIGRATE: 'on'
But when I build the image using
docker-compose up --build
it fails to run the migration command from the entrypoint script with the error:
(2002, "Can't connect to MySQL server on 'database' (115)")
This is because the database server has not finished starting by the time the migration runs.
How can I make the web service wait until the database service is fully started and ready to accept connections?
Unfortunately, there is no native way in Docker to wait for the database service to be ready before the Django web app attempts to connect; depends_on only ensures that the web container is started after the database container is launched, not that the database inside it is ready.
Because of this limitation you will need to solve the problem in how your container runs. The easiest solution is to modify entrypoint.sh to sleep for 10-30 seconds so that your database has time to initialize before any further commands execute. This official MySQL entrypoint.sh shows an example of how to block until the database is ready.
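Since your compose file already ships a wait_for_it.sh for the web command, a slightly more robust variant of the same idea is to reuse it in the entrypoint before the migration runs (a sketch; it assumes the script is the usual wait-for-it helper and is reachable from the entrypoint's working directory, so adjust the path as needed):
# In entrypoint.sh, just before the DJANGO_MANAGE_MIGRATE check:
# block (up to 30s) until MySQL on database:3306 accepts TCP connections.
./wait_for_it.sh database:3306 --timeout=30 --strict

if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ] && [ ! "x$DEPLOYMENT_MODE" = 'xproduction' ]; then
    runDjangoMigrate
fi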
I'm so close yet so far from getting my Selenium tests working in my new Docker dev environment.
I recently did a big upgrade from Ruby 2.4.2 to 2.6.3. Around the same time I also switched from a local environment to a setup with Docker. Everything migrated fine except for this one last issue.
With browser tests, it appears that the browser can't see changes to the database. For example, when I create a user and then log in through the web UI in the browser, the returned page says "user and pass doesn't not exist". Also, the changes don't appear in the database even in the middle of the test, though I think that's normal when transactional_fixtures is enabled.
The problem goes away when config.use_transactional_fixtures = false. But then I have to deal with database cleaning, which I tried and was also problematic. Note that this all worked fine in ruby 2.4.2 on my local MacOS.
I can access the browser via VNC on port 5900 and see the tests running fine until it needs to do anything that requires data from the db, like logging in via the browser with a username that was created programmatically in the spec.
It's not clear to me where the database information goes if it's not in the database, or how the browser is supposed to access this data. This article seems to discuss the relevant issue of Capybara and the web server sharing the same DB connection to access uncommitted DB changes, but I'm still lost after several days 🤦♂️
Here are my configs.
Dockerfile
FROM ruby:2.6.3
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client pdftk xvfb
RUN wget https://chromedriver.storage.googleapis.com/2.41/chromedriver_linux64.zip
RUN unzip chromedriver_linux64.zip
RUN mv chromedriver /usr/bin/chromedriver
RUN chown root:root /usr/bin/chromedriver
RUN chmod +x /usr/bin/chromedriver
RUN echo "chromedriver -v"
RUN chromedriver -v
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
ENV BUNDLER_VERSION='2.0.2'
RUN gem install bundler --no-document -v '2.0.2'
RUN echo $BUNDLER_VERSION
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:9.6.15
    volumes:
      - data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    ports:
      - "5432:5432"
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && RAILS_ENV=test bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    links:
      - selenium
  redis:
    image: redis:alpine
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    depends_on:
      - db
      - redis
    build: .
    command: sidekiq -c 1 -v -q default -q mailers
    volumes:
      - '.:/myapp'
    env_file:
      - '.env'
  selenium:
    image: selenium/standalone-chrome-debug:3.0.1-germanium
    # Debug version enables VNC ability
    ports: ['4444:4444', '5900:5900']
    # Bind selenium port & VNC port
    logging:
      driver: none
      # Disable noisy logs.
volumes:
  data:
spec/rails_helper.rb
# This file is copied to spec/ when you run 'rails generate rspec:install'
require 'spec_helper'
ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
# Prevent database truncation if the environment is production
abort("The Rails environment is running in production mode!") if Rails.env.production?
require 'rspec/rails'
# Add additional requires below this line. Rails is not loaded until this point!
require 'support/factory_bot'
require 'support/session_helpers'
require 'support/record_helpers'
#both = ['artist','gallery']
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
# Dir[Rails.root.join('spec/support/**/*.rb')].each { |f| require f }
# Checks for pending migrations and applies them before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
RSpec.configure do |config|
# Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
config.fixture_path = "#{::Rails.root}/spec/fixtures"
# If you're not using ActiveRecord, or you'd prefer not to run each of your
# examples within a transaction, remove the following line or assign false
# instead of true.
config.use_transactional_fixtures = true
# RSpec Rails can automatically mix in different behaviours to your tests
# based on their file location, for example enabling you to call `get` and
# `post` in specs under `spec/controllers`.
#
# You can disable this behaviour by removing the line below, and instead
# explicitly tag your specs with their type, e.g.:
#
# RSpec.describe UsersController, :type => :controller do
# # ...
# end
#
# The different available types are documented in the features, such as in
# https://relishapp.com/rspec/rspec-rails/docs
config.infer_spec_type_from_file_location!
# Filter lines from Rails gems in backtraces.
config.filter_rails_from_backtrace!
# arbitrary gems may also be filtered via:
# config.filter_gems_from_backtrace("gem name")
config.include Features::SystemTestHelpers#, type: :system
end
# fixes a glitch in most recent chromedriver or chrome where it can't access remote URLs
# https://github.com/teamcapybara/capybara/issues/2181
require "selenium/webdriver"
Capybara.app_host = "http://web:3000"
Capybara.javascript_driver = :selenium_chrome_headless
Capybara.run_server = false
Capybara.register_driver :selenium_chrome_headless do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w(no-default-browser start-maximized enable-features=NetworkService,NetworkServiceInProcess) }
  )
  Capybara::Selenium::Driver.new app, browser: :remote, url: "http://selenium:4444/wd/hub", desired_capabilities: capabilities
end
entrypoint.sh
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
bundle install
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
An excerpt of a spec that's breaking
spec/system/record_gallery_spec.rb
require 'rails_helper'

feature 'User creates a work' do
  scenario 'with most fields filled out', driver: :selenium_chrome_headless do
    gallery_sign_in
    visit new_work_path
    # breaks here ^^
    ...
  end
end
the helper function referenced above
def gallery_sign_in(user: nil, id: nil, custom_pdf: nil)
  user = create(:user, id: id, custom_pdf: custom_pdf) unless user
  user.role = 'gallery'
  user.save
  visit login_path
  fill_in 'user[email]', with: user.email
  fill_in 'user[password]', with: user.password
  click_button 'Login'
  return user
end
Rails 5.1 does support sharing the DB connection between the application under test and the tests (transactional testing), but only if you're letting Capybara start the instance of the application under test, since the tests and the app need to run as separate threads of the same process. You're specifically telling Capybara not to run the app under test (Capybara.run_server = false) and instead to run against an app instance you're starting separately (Capybara.app_host = "http://web:3000"). In that configuration there is no way to share the DB connection between the tests and the AUT, so you have to disable transactional testing (config.use_transactional_fixtures = false) and use database_cleaner (or something similar) to reset the DB between tests.
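If you go the database_cleaner route, the wiring typically looks roughly like this in rails_helper.rb (a sketch; it assumes the database_cleaner gem is in your test group, and the :feature/:system check should match however your browser specs are actually tagged):
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do |example|
    # Browser specs hit a separately running app process, so their data has to be
    # committed; use truncation there and the faster transaction strategy elsewhere.
    DatabaseCleaner.strategy =
      %i[feature system].include?(example.metadata[:type]) ? :truncation : :transaction
    DatabaseCleaner.start
  end

  config.append_after(:each) do
    DatabaseCleaner.clean
  end
end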
I'm using Chef to manage deployments of a Rails application running with Mongrel cluster.
My init.d file is very simple. Here's the case for a restart:
restart)
    sudo su -l myuser -c "cd /path/to/myapp/current && mongrel_rails cluster::restart"
    ;;
I can run service myapp restart as root with no issue. I can run mongrel_rails cluster::restart as myuser with no issue.
However, when I do a deployment through Chef, the tmp/pids/mongrel.port.pid files don't get cleaned up (causing all future restarts to fail).
Chef is simply doing the following to perform the restart:
service "myapp" do
action :restart
end
The init.d script is definitely being called as the logs all have the expected output (with the exception of exploding on the pid files, of course).
What am I missing?
As a work around, I can simply kill the mongrel processes before the init.d script is called. This allows the init.d script to still be used to start/stop the processes on the server directly, but handles the bogus case when mongrel is running and Chef tries to restart the service. Chef handles starting the service correctly as long as the .pid files don't already exist.
To do that, I included the following immediately before the service "myapp" do call:
ruby_block "stop mongrel" do
block do
ports = ["10031", "10032", "10033"].each do |port|
path = "/path/to/myapp/shared/pids/mongrel.#{port}.pid"
if File.exists?(path)
file = File.open(path, "r")
pid = file.read
file.close
system("kill #{pid}")
File.delete(path) if File.exists?(path)
end
end
end
end