When I deploy my application for the first time using cap [env] deploy, everything works as expected. The code is deployed and the puma server is started successfully with puma:start.
capistrano output of puma starting
* 2015-01-09 12:19:37 executing `puma:start'
* executing "cd /path/to/app/current && bundle exec puma -q -d -e production -C ./config/puma/production.rb"
servers: ["example.com"]
[example.com] executing command
** [out :: example.com] Puma starting in single mode...
** [out :: example.com] * Version 2.9.2 (ruby 2.1.5-p273), codename: Team High Five
** [out :: example.com] * Min threads: 0, max threads: 16
** [out :: example.com] * Environment: production
** [out :: example.com] * Listening on unix:///path/to/app/shared/sockets/puma.sock
** [out :: example.com] * Daemonizing...
If I make a code change and attempt to re-deploy, Capistrano calls puma:restart instead of puma:start; the task reports success, but Puma isn't actually restarted.
* 2015-01-09 12:27:56 executing `puma:restart'
* executing "cd /path/to/app/current && bundle exec pumactl -S /path/to/app/shared/sockets/puma.state restart"
servers: ["example.com"]
[example.com] executing command
** [out :: example.com] Command restart sent success
At this point, if I refresh the web page, I get a 504 Gateway Time-out error. It's very similar to this issue.
As the person there suggested, if I added workers 1 to my Puma configuration file, restart would work, but start/stop would not.
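For reference, a sketch of what such a config might look like with the workers line added (the pid, state, and socket paths are taken from the output above; the thread counts and everything else here are placeholders rather than my exact file):
# config/puma/production.rb (sketch)
pidfile    "/path/to/app/shared/pids/puma.pid"
state_path "/path/to/app/shared/sockets/puma.state"
bind       "unix:///path/to/app/shared/sockets/puma.sock"

threads 0, 16

# The workaround suggested in the linked issue: run Puma in clustered mode
workers 1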
In my current state (without workers), if I do cap [env] puma:stop, it does not stop Puma. It also does NOT delete any of these files:
/path/to/app/shared/pids/puma.pid
/path/to/app/shared/sockets/puma.sock
/path/to/app/shared/sockets/puma.state
Important Note
In my Rails app, I am using ActionController::Live with Rails 4.1, and the browser is connected to an event stream via JavaScript and Redis. I noticed this in my nginx error log:
nginx error log
2015/01/09 12:29:32 [error] 8992#0: *371 upstream timed out (110:
Connection timed out) while reading response header from upstream,
client: [ip], server: example.com, request: "GET /build_configurations/refresh_workers HTTP/1.1",
upstream: "http://unix:///path/to/app/shared/sockets/puma.sock/build_configurations
/refresh_workers", host: "example.com", referrer: "https://example.com/"
All in all, how can I successfully use cap [env] deploy to deploy updates to my code while having puma start and restart okay?
Update
I found this issue that talks about restarting Puma with ActionController::Live, but no solution seems to be presented there.
Using the information I found in this other Stack Overflow answer, I was able to implement a heartbeat that seems to fix this situation. I haven't fully understood all of the mechanics myself, but I started with an initializer that starts the heartbeat:
config/initializers/redis.rb
REDIS = Redis.new(url: "redis://localhost:6379")

heartbeat_thread = Thread.new do
  while true
    REDIS.publish('heartbeat', 'thump')
    sleep 2.seconds
  end
end

at_exit do
  # not sure this is needed, but just in case
  heartbeat_thread.kill
  REDIS.quit
end
My controller has:
def build_status_events
  response.headers["Content-Type"] = "text/event-stream"
  redis = Redis.new(url: "redis://localhost:6379")
  # blocks the current thread
  redis.psubscribe(['my_event', 'heartbeat']) do |on|
    on.pmessage do |pattern, event, data|
      response.stream.write("event: #{event}\n")
      if event == 'heartbeat'
        response.stream.write("data: heartbeat\n\n")
      else
        response.stream.write("data: #{data.to_json}\n\n")
      end
    end
  end
rescue IOError
  logger.info 'Events stream closed'
ensure
  logger.info 'Stopping Events streaming thread'
  redis.quit
  response.stream.close
end
I believe what happens is that the heartbeat gets published every couple of seconds; when a client goes away or Puma tries to shut down, the next write to the closed stream raises IOError, the ensure block runs, the subscription is closed, and Puma can restart as expected. If anyone has a better solution, or more information, please comment or add another answer.
Related
Gems I have used in this application:
gem 'redis', '~> 3.0'
gem 'sidekiq'
gem 'bunny'
This is the consumer part that gets messages from the queue, implemented as a rake task.
task :do_consumer => :environment do
  connection = Bunny.new(ENV['CLOUDAMQP_URL'])
  connection.start # start a communication session with the amqp server
  channel = connection.channel()
  queue = channel.queue('order-queue', durable: true)
  puts ' [*] Waiting for messages. To exit press CTRL+C'
  queue.subscribe(manual_ack: true, block: true) do |delivery_info, properties, payload|
    puts " [x] Received #{payload}"
    puts " [x] Done"
    channel.ack(delivery_info.delivery_tag, false)
    # Call worker to do the task (class name capitalized so Ruby treats it as a constant)
    CallSidekiqWorker.perform_async(payload)
  end
end
Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e production
worker: bundle exec rake do_consumer
I have added the RedisToGo add-on on Heroku and configured REDIS_PROVIDER as an environment variable.
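For clarity, the way I understand the Redis configuration is sketched below; the initializer contents are an illustration rather than my exact file. As I understand it, REDIS_PROVIDER holds the name of the variable that contains the URL (REDISTOGO_URL in my case):
# config/initializers/sidekiq.rb (sketch)
# REDIS_PROVIDER points at the env var holding the URL; fall back to localhost for local runs.
redis_url = ENV[ENV['REDIS_PROVIDER'] || 'REDIS_URL'] || 'redis://localhost:6379'

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url }
end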
Here is the issue: this works fine locally, but after I push to Heroku I get this error in the logs:
[x] Received {"pos_items_cache_id":816,"process_state_id":320,"location_id":604}
14 Sep 2018 16:57:57.439106 <190>1 2018-09-14T11:27:56.885838+00:00 app worker.1 - - [x] Done
14 Sep 2018 16:57:57.509635 <190>1 2018-09-14T11:27:56.885844+00:00 app worker.1 - - E, [2018-09-14T11:27:56.885527 4] ERROR --
<Bunny::Session:0x29beb80 vmeksylf#chimpanzee.rmq.cloudamqp.com:5672, vhost=vmeksylf, addresses=[chimpanzee.rmq.cloudamqp.com:5672]>: Uncaught exception from consumer
<Bunny::Consumer:21834760 #channel_id=1 #queue=order-queue> #consumer_tag=bunny-1536910387000-84532836151>:
<Redis::CannotConnectError: Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)> # /app/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:345:in `rescue in establish_connection'
Am I doing any configuration wrong on Heroku? Also, I think this rake task is not always listening.
I'm using a before_restart.rb hook in OpsWorks and I have a problem when it runs "rake i18n:js:export". I don't know why Sidekiq is running with this rake task. It fails only in the setup stage of OpsWorks. When I deploy, the error disappears.
[2015-01-09T18:52:17+00:00] INFO: deploy[/srv/www/XXX] queueing checkdeploy hook /srv/www/XXX/releases/20150109185157/deploy/before_restart.rb
[2015-01-09T18:52:17+00:00] INFO: Processing execute[rake i18n:js:export] action run (/srv/www/XXXX/releases/20150109185157/deploy/before_restart.rb line 3)
Error executing action `run` on resource 'execute[rake i18n:js:export]'
Mixlib::ShellOut::ShellCommandFailed
Expected process to exit with [0], but received '1'
---- Begin output of bundle exec rake i18n:js:export ----
STDOUT: 2015-01-09T18:52:30Z 1808 TID-92c6g INFO: Sidekiq client with redis options {}
STDERR: /home/deploy/.bundler/XXXX/ruby/2.1.0/gems/redis-3.1.0/lib/redis/client.rb:309:in `rescue in establish_connection': Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED) (Redis::CannotConnectError)
The Sidekiq client (NOT the Sidekiq server) is running because it is configured in an initializer. When rake runs, it loads the entire Rails app environment. So either allow an environment variable to disable the Sidekiq client in config/initializers/sidekiq.rb, or make sure redis-server is properly configured on the instance you're running this on.
unless ENV['DISABLE_SIDEKIQ']
  # Sidekiq.configure...
end
DISABLE_SIDEKIQ=true bundle exec rake do:stuff
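A fuller version of that initializer might look roughly like this (a sketch; the Redis URL handling below is a placeholder, not the asker's actual configuration):
# config/initializers/sidekiq.rb (sketch)
unless ENV['DISABLE_SIDEKIQ']
  Sidekiq.configure_client do |config|
    # Placeholder: use whatever Redis options the app already has here
    config.redis = { url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0') }
  end

  Sidekiq.configure_server do |config|
    config.redis = { url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0') }
  end
end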
Here is the command I'm giving to deploy my code to a server.
$ cap production deploy:migrations
* executing `production'
triggering start callbacks for `deploy:migrations'
* executing `multistage:ensure'
* executing `deploy:migrations'
* executing `deploy:update_code'
triggering before callbacks for `deploy:update_code'
* executing `dj:stop'
* executing "RAILS_ENV=production god stop dj"
servers: ["xyz.com"]
connection failed for: xyz.com (ArgumentError: non-absolute home)
I am able to ssh into xyz.com. My capistrano version is
$ cap --version
Capistrano v2.5.19
And it depends on net-ssh 2.1.3. The Ruby version is 1.9.2p290.
I read through similar questions on Stack Overflow; all of them seem to suggest checking the /etc/passwd file on the server. I checked the file, and ENV['HOME'] is correctly set for the SSH user.
HOME variable on the server:
$ echo $HOME
/home/deploy
HOME has always been like this from the start. Why would it fail all of a sudden?
Anyone facing the same issue?
I am trying to deploy a Rails application to a Rackspace server via Capistrano. I have deployed many Rails applications to Rackspace and Linode servers and never encountered such a weird issue. Capistrano is not deploying the application; below is the log:
executing `deploy:assets:precompile'
* executing "cd /home/deployer/apps/latty39/releases/20121023165957 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile"
servers: ["50.56.183.16"]
[50.56.183.16] executing command
** [out :: 50.56.183.16] rake aborted!
** [out :: 50.56.183.16] cannot load such file -- Date
** [out :: 50.56.183.16]
** [out :: 50.56.183.16] (See full trace by running task with --trace)
command finished in 7454ms
*** [deploy:update_code] rolling back
* executing "rm -rf /home/deployer/apps/latty39/releases/20121023165957; true"
servers: ["50.56.183.16"]
[50.56.183.16] executing command
command finished in 2001ms
failed: "sh -c 'cd /home/deployer/apps/latty39/releases/20121023165957 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'" on 50.56.183.16
I have no idea why this is failing. I have spent almost 3 hours on this with no success so far. I have searched Stack Overflow and other resources, but found no help.
Any help to resolve the issue will be highly appreciated
Thanks
I figured it out myself. I had a custom rake task that was requiring Ruby's Date class like this:
require 'Date'
Removing it fixed everything, but I still need to figure out why requiring the Date class throws an error in production.
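Most likely this is a filename-case issue: the standard library file is date.rb, and require resolves file names case-sensitively on Linux servers, while a case-insensitive local filesystem (macOS, for example) lets 'Date' slip through. If the require is actually needed, the lowercase form works:
# Date and DateTime live in the standard library file date.rb
require 'date'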
When SSHing into the web server, I can restart delayed_job all day without a problem. It brings down the existing worker, starts a new one, and writes tmp/pids/delayed_job.pid with its process ID. (I am also restarting Passenger to mimic what I'm about to do with Capistrano.)
app#StagingServer:/app/current$ touch tmp/restart.txt; RAILS_ENV=staging script/delayed_job restart
delayed_job: trying to stop process with pid 22170...
delayed_job: process with pid 22170 successfully stopped.
delayed_job: process with pid 22284 started.
app#StagingServer:/app/current$ touch tmp/restart.txt; RAILS_ENV=staging script/delayed_job restart
delayed_job: trying to stop process with pid 22284...
delayed_job: process with pid 22284 successfully stopped.
delayed_job: process with pid 22355 started.
app#StagingServer:/app/current$ touch tmp/restart.txt; RAILS_ENV=staging script/delayed_job restart
delayed_job: trying to stop process with pid 22355...
delayed_job: process with pid 22355 successfully stopped.
delayed_job: process with pid 22427 started.
app#StagingServer:/app/current$
However, when I deploy using capistrano
dev#ubuntu:~/app-site$ cap passenger:restart
triggering start callbacks for `passenger:restart'
* executing `multistage:ensure'
*** Defaulting to `staging'
* executing `staging'
* executing `passenger:restart'
* executing "touch /app/current/tmp/restart.txt"
servers: ["staging.app.com"]
[staging.app.com] executing command
command finished in 242ms
* executing "cd /app/current;RAILS_ENV=staging script/delayed_job restart"
servers: ["staging.app.com"]
[staging.app.com] executing command
** [out :: staging.app.com] delayed_job: trying to stop process with pid 21646...
** [out :: staging.app.com] delayed_job: process with pid 21646 successfully stopped.
command finished in 11889ms
Seems fine? While the last line from delayed_job isn't printed (I think because it doesn't end in a newline), this does successfully create a new process. However, it doesn't create the .pid file, so when I try to restart again:
dev#ubuntu:~/app-site$ cap passenger:restart
triggering start callbacks for `passenger:restart'
* executing `multistage:ensure'
*** Defaulting to `staging'
* executing `staging'
* executing `passenger:restart'
* executing "touch /app/current/tmp/restart.txt"
servers: ["staging.app.com"]
[staging.app.com] executing command
command finished in 398ms
* executing "cd /app/current;RAILS_ENV=staging script/delayed_job restart"
servers: ["staging.app.com"]
[staging.app.com] executing command
** [out :: staging.app.com] Warning: no instances running. Starting...
** [out :: staging.app.com] delayed_job: process with pid 21950 started.
command finished in 6758ms
It doesn't stop the existing process. Strangely, this time it does create a new process and its .pid file.
This leaves me with two delayed_job processes running and only one recorded in the .pid file. Every two cap deploys adds another delayed_job process. The previous processes are still running the old version of the app, essentially breaking it.
config/deploy.rb:
namespace :passenger do
  desc "Restart Application"
  task :restart do
    run "touch #{current_path}/tmp/restart.txt"
    run "cd #{current_path};RAILS_ENV=#{deploy_env} script/delayed_job restart"
  end
end
after :deploy, "passenger:restart"
Ubuntu, nginx, passenger
daemons (1.1.4)
delayed_job (2.1.4)
rails (3.0.9)
And locally
capistrano (2.9.0)
capistrano-ext (1.2.1)
Update:
Reading around, it seems it could be due to a race condition inside the daemons gem. However, I'm a bit confused as to why it shows up only (and consistently) when using Capistrano. I will try changing the command to a stop, a sleep, then a start.
Solved by using stop and start instead of restart. The restart failure appears to be due to a race condition, probably caused by the daemons gem.
I would still like to know if anyone else has a better solution.
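For reference, the stop/sleep/start version of the task above looks roughly like this (a sketch; the 5-second sleep is an arbitrary grace period for the daemons gem, not a tuned value):
namespace :passenger do
  desc "Restart Application"
  task :restart do
    run "touch #{current_path}/tmp/restart.txt"
    # Stop the old worker, give daemons a moment to clean up, then start fresh
    run "cd #{current_path};RAILS_ENV=#{deploy_env} script/delayed_job stop"
    run "sleep 5"
    run "cd #{current_path};RAILS_ENV=#{deploy_env} script/delayed_job start"
  end
end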
I had this problem and stopping and starting didn't work for me, so I came up with the following:
namespace :delayed_job do
  desc 'Restart delayed_job worker'
  task :restart do
    on roles(:delayed_job) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :pkill, "-f", "'delayed_job'"
          execute :bundle, :exec, "bin/delayed_job", :start
        end
      end
    end
  end
end
...
after :restart, "delayed_job:restart"
I'm not a huge fan of the pkill, but this works consistently for me. Hope this helps.
Capistrano shuffles the current directory symlink during each deployment, which was orphaning the PID file for me. We use the capistrano3-delayed-job gem, which suggests applying one of these fixes in deploy.rb:
set :linked_dirs, %w(tmp/pids)
# or
set :delayed_job_pid_dir, '/tmp'
The first of these resolved the issue for us. It moves current/tmp/pids to shared/tmp/pids and symlinks it into the current tree so that it persists across deployments.
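One note: if deploy.rb already defines :linked_dirs (the stock Capistrano 3 template sets several entries), appending rather than overwriting keeps the existing ones. This assumes Capistrano 3.5+, where append is available:
# Add tmp/pids to whatever linked_dirs are already configured
append :linked_dirs, 'tmp/pids'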