Sidekiq deployment fails with 'key not found: "MY_APP_DATABASE_PASSWORD"' - ruby-on-rails

This is my first (rookie) question to the community.
Background:
I am trying to deploy Sidekiq on my own Debian Jessie server for a Rails 5.0.6 app served by Phusion Passenger under a "deploy" user. I have Redis 3.2.6 installed and tested OK. I've opted for a systemd unit to start Sidekiq as a system service.
Here is the configuration:
[Unit]
Description=sidekiq
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/var/www/my_app/code
ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
User=deploy
Group=deploy
UMask=0002
# if we crash, restart
RestartSec=4
#Restart=on-failure
Restart=always
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
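(A sketch of the usual install-and-enable steps, assuming the unit above is saved as /etc/systemd/system/sidekiq.service:)
sudo systemctl daemon-reload          # pick up the new/changed unit file
sudo systemctl enable --now sidekiq   # start it now and enable it at boot
sudo journalctl -u sidekiq -f         # follow its output (it also goes to /var/log/syslog)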
Here is sidekiq.yml:
---
:verbose: true
:concurrency: 4
:pidfile: tmp/pids/sidekiq.pid
:queues:
  - [critical, 2]
  - default
  - low
production:
  :concurrency: 15
And finally config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end
How it fails
I've been trying to solve the following error found in /var/log/syslog:
Dec 18 00:13:39 jjflo systemd[1]: Started sidekiq.
Dec 18 00:13:48 jjflo sidekiq[8159]: Cannot load `Rails.application.database_configuration`:
Dec 18 00:13:48 jjflo sidekiq[8159]: key not found: "MY_APP_DATABASE_PASSWORD"
which ends up in a loop of Sidekiq failures and restarts...
Yet another try
I have tried the following, and it works:
cd /var/www/my_app/code
su - deploy
/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
Could someone help me connect the dots, please ?

Environment variables were obviously the problem. Since I was using
ExecStart=/bin/bash -lc 'bundle...
where -l starts bash as a login shell (not an interactive one), I had to edit the deploy user's .bashrc and move the export lines from the bottom of the file to the top, or at least above this line:
case $- in
When I had tested manually, su - deploy gave me an interactive login shell that ran all of .bashrc, so the exports were set; the shell systemd spawns is non-interactive, and Debian's default .bashrc returns early for non-interactive shells at that case statement, so any exports placed below it were never reached.
This post helped me a lot.
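For reference, the ordering that fixes it, as a minimal sketch of the deploy user's ~/.bashrc (the variable values here are hypothetical placeholders, not from the post):
# Exports must come BEFORE the interactive-shell guard below, because
# the `bash -l` that systemd spawns is a login shell but NOT interactive,
# and the guard returns early for non-interactive shells.
export SITE="redis.example.com"            # hypothetical value
export REDIS_PWD="secret"                  # hypothetical value
export MY_APP_DATABASE_PASSWORD="secret"   # hypothetical value

# Debian's stock guard: stop here if not running interactively
case $- in
    *i*) ;;
      *) return ;;
esac
You can verify what the service user actually sees with sudo -u deploy /bin/bash -lc 'env | grep MY_APP'. An alternative that avoids shell startup files altogether is to set the variables in the unit itself with systemd's Environment= or EnvironmentFile= directives.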

Related

Unable to run Puma as daemon OptionParser::AmbiguousOption: ambiguous option: -d

I upgraded to Puma 5.0.2 and started my rails app as usual with:
bundle exec puma -d -e production -b unix:///home/user/app/tmp/puma.sock
Now I get the error:
OptionParser::AmbiguousOption: ambiguous option: -d
What is the proper way to run puma as a daemon?
Context, with quick links:
The daemonize option was removed without replacement as of Puma 5.0.0 (source: https://github.com/puma/puma/blob/master/History.md).
You may refer to the daemonization section of their deployment docs: https://github.com/puma/puma/blob/master/docs/deployment.md#should-i-daemonize
Solution:
Create a systemd service for puma depending on your OS distro.
Configure your environment in config/puma.rb in your app directory.
Add a service file named puma.service in /etc/systemd/system (the path works for me on SLES15).
Here is a sample that works for me (replace text within <> as per your needs):
[Unit]
Description=Puma HTTP Server
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
User=<UserForPuma>
WorkingDirectory=<YourAppDir>
Environment=RAILS_MASTER_KEY=<EncryptionKeyIfUsedByRailsApp>
ExecStart=/usr/bin/rails s puma -b 'ssl://127.0.0.1:3000?key=<path_to_privatekey.key>&cert=<path_to_certificate.crt>' -e production
Restart=always
RestartSec=2
KillMode=process
[Install]
WantedBy=multi-user.target
Save the above content as a file named puma.service in the directory path mentioned above.
After this just enable and start the service:
# systemctl daemon-reload
# systemctl --now enable puma.service
Created symlink /etc/systemd/system/multi-user.target.wants/puma.service → /etc/systemd/system/puma.service.
# systemctl status puma
● puma.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/puma.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-10-09 12:59:28 CEST; 7s ago
Main PID: 2854 (ruby.ruby2.5)
Tasks: 21
CGroup: /system.slice/puma.service
├─2854 puma 5.0.2 (ssl://127.0.0.1:3000?key=<your_key_path.key>&cert=<your_cert_path.crt>) [rails-app-dir]
├─2865 puma: cluster worker 0: 2854 [rails-app-dir]
└─2871 puma: cluster worker 1: 2854 [rails-app-dir]
Check puma status:
ps -ef | grep puma
This should now show running puma processes (main process and worker processes).
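Since the unit above binds an ssl:// listener, a quick end-to-end check could be the following (my addition, not from the answer; -k skips certificate verification, which is handy with self-signed certs):
curl -kI https://127.0.0.1:3000   # expect an HTTP status line back from puma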
Here is a link for beginners on how to create a systemd service:
https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
Systemd docs:
https://www.freedesktop.org/software/systemd/man/systemd.unit.html
https://www.freedesktop.org/software/systemd/man/systemd.service.html
Sorry, I am not a Windows person, but I believe the idea is the same: anyone working on Windows could try creating a .bat file and running it in the background as a Windows service.
Hope that helps.
This gem on Github looks like a good source to start from:
https://github.com/kigster/puma-daemon

Sidekiq consumes too much memory

I am using Sidekiq with God in my Rails app. I am using Passenger and Nginx.
I see many (30-50) Sidekiq processes running, consuming about 1000 MB of RAM.
Processes like:
sidekiq 3.4.1 my_app_name [0 of 1 busy] - about 30 processes.
ruby /home/myuser/.rvm/ruby-2.1.5/bin/sidekiq --environment ... - about 20 processes.
How can I tell Sidekiq not to run so many processes?
my config for sidekiq (config/sidekiq.yml):
---
:concurrency: 1
:queues:
- default
- mailer
and the God config for Sidekiq:
num_workers = 1
num_workers.times do |num|
  God.watch do |w|
    ...
    w.start = "bundle exec sidekiq --environment #{rails_env} --config #{rails_root}/config/sidekiq.yml --daemon --logfile #{w.log}"
The problem is the "--daemon" (or "-d") flag, which makes Sidekiq detach and run as a daemon. God expects to supervise a foreground process; when the process it started forks away and exits, God assumes it died and spawns another copy, which is why the processes pile up. There is no need to run it as a daemon here, so just remove that flag.
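With the flag removed, the command God supervises in w.start would look roughly like this (a sketch based on the question's own command, with example values in place of the interpolations):
bundle exec sidekiq --environment production --config config/sidekiq.yml --logfile log/sidekiq.log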

Sidekiq - Prevent worker from being executed in specific machine

I'm working on a Rails project that uses Sidekiq. Our Sidekiq implementation has two workers (WorkerA, which reads queue_a, and WorkerB, which reads queue_b). One of them has to run on the same server as the Rails app, and the other on a different server (or servers). How can I prevent WorkerB from running on the first server, and vice versa? Can a Sidekiq process be configured to run only specific workers?
EDIT:
The Redis server is on the same machine as the Rails app.
Use a hostname-specific queue. config/sidekiq.yml:
---
:verbose: false
:concurrency: 25
:queues:
- default
- <%= `hostname`.strip %>
In your worker:
class ImageUploadProcessor
  include Sidekiq::Worker
  sidekiq_options queue: `hostname`.strip
  def perform(filename)
    # process image
  end
end
More detail on my blog:
http://www.mikeperham.com/2013/11/13/advanced-sidekiq-host-specific-queues/
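To illustrate how this resolves (hypothetical hostnames, not from the answer): the ERB line in sidekiq.yml expands to each machine's own hostname, so every Sidekiq instance listens on its own host-specific queue:
# on host app-a:
hostname                                    # => app-a
bundle exec sidekiq -C config/sidekiq.yml   # listens on: default, app-a
# jobs enqueued by the app running on app-a land in queue "app-a",
# so only the Sidekiq instance on app-a will pick them up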
Well, here is the way to start Sidekiq with options (note that the -q flag takes one queue each, so repeat it per queue):
nohup bundle exec sidekiq -q queue_a -q queue_b -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &
You can start Sidekiq with specific queues only. For example, run
nohup bundle exec sidekiq -q queue_a -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &
to execute only WorkerA.
To run different workers on different servers, do something like this:
system "nohup bundle exec sidekiq #{workers_string} -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &"

def workers_string
  if <on server A> # using ENV or the server IP to distinguish
    "-q queue_a"
  elsif <on server B>
    "-q queue_b" # add more "-q <queue>" flags as needed
  end
end
# or you can put the per-server queue list in a config file
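For instance, on server B the system call above would expand to roughly this (environment and pidfile values are hypothetical):
nohup bundle exec sidekiq -q queue_b -c 5 -e production -P tmp/pids/sidekiq.pid 2>&1 &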

Sidekiq not running at startup of passenger server in Rails 4.1.6 app

I need Sidekiq to run once I start the server on our staging application. We moved to a different server instance on Rackspace to better mirror our production conditions.
The application is started with
passenger start --nginx-config-template nginx.conf.erb --address 127.0.0.1 -p 3002 --daemonize
The sidekiq files are as follows:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# change to match your deployment user
setuid root
setgid root
respawn
respawn limit 3 30
# TERM is sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM
instance $index
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv
exec /bin/bash <<EOT
# use syslog for logging
exec &> /dev/kmsg
# pull in system rbenv
export HOME=/root
source /etc/profile.d/rbenv.sh
cd /srv/monolith
exec bin/sidekiq -i ${index} -e staging
EOT
end script
and workers.conf
# /etc/init/workers.conf - manage a set of Sidekiqs
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See sidekiq.conf for how to manage a single Sidekiq instance.
#
# Use "stop workers" to stop all Sidekiq instances.
# Use "start workers" to start all instances.
# Use "restart workers" to restart all instances.
# Crazy, right?
#
description "manages the set of sidekiq processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Sidekiq processes you want
# to run on this machine
env NUM_WORKERS=2
pre-start script
for i in `seq 0 $((${NUM_WORKERS} - 1))`
do
start sidekiq index=$i
done
end script
When I go into the server and try service sidekiq start index=0 or service sidekiq status index=0, it can't find the service, but if I run bundle exec sidekiq -e staging, sidekiq starts up and runs through the job queue without a problem. Unfortunately, as soon as I close the SSH session, sidekiq finds a way to kill itself.
How can I ensure Sidekiq runs when the server starts, and that it restarts itself if something goes wrong, as the Upstart setup is supposed to provide?
Thanks.
In order to run Sidekiq as a service with the service command, you should put a script called "sidekiq" in /etc/init.d, not in /etc/init.
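A quick way to check which init system the machine actually boots with, since /etc/init (Upstart) configs are ignored on systemd or plain SysV boxes (my addition, not from the answer):
ps -p 1 -o comm=                # prints "systemd" on systemd boxes, "init" otherwise
initctl version 2>/dev/null     # prints the Upstart version if Upstart is present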

How to start thin process at system boot

I am using a Debian-flavored Linux system, and I use the Thin web server to provide live call status in my application. The process starts fine when I run /etc/init.d/thin start. I ran update-rc.d -f thin defaults to have Thin start at system boot, but after rebooting the system, the Thin process does not start. I checked Apache2 and it starts properly at boot. My thin script in init.d is as follows:
DAEMON=/usr/local/lib/ruby/gems/1.9.1/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
stop)
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
My configuration file in /etc/thin is as follows.
user_status.yml
---
chdir: /FMS/src/FMS-Frontend
environment: production
address: localhost
port: 5000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
rackup: user_status.ru
threaded: true
daemonize: false
You need a wrapper for 'thin'.
See https://rvm.io/integration/init-d.
The wrapper path then needs to be substituted for DAEMON in the init.d script.
I keep forgetting this, and it has cost me a good few hours!
Now that I've checked it out: as root, enter these two commands
rvm wrapper current bootup thin
which bootup_thin
The first creates the wrapper, and the second gives the path to it.
Edit the DAEMON line in /etc/init.d/thin to use this path, and finish off with
systemctl daemon-reload
service thin restart
I have assumed a multi-user installation of RVM; also, you have to become root with
su -
to get the RVM environment right.
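To sanity-check before rebooting, something like this should work (the wrapper path is whatever `which bootup_thin` printed; the one below is hypothetical):
/usr/local/rvm/bin/bootup_thin --version   # should print thin's version without needing a login shell
sudo /etc/init.d/thin start                # then exercise the init script directly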
