Run multiple delayed_job instances per RAILS_ENV

I'm working on a Rails app with multiple RAILS_ENVs:
env_name1:
  adapter: mysql
  username: root
  password:
  host: localhost
  database: db_name_1
env_name2:
  adapter: mysql
  username: root
  password:
  host: localhost
  database: db_name_2
...
And I'm using the delayed_job (2.0.5) plugin to manage asynchronous background work.
I would like to start one delayed_job instance per RAILS_ENV:
RAILS_ENV=env_name1 script/delayed_job start
RAILS_ENV=env_name2 script/delayed_job start
..
I noticed that I can run only one delayed_job instance:
for the 2nd, I get the error "ERROR: there is already one or more instance(s) of the program running".
My question: is it possible to run multiple delayed_job instances, one per RAILS_ENV?
Thanks

You can have multiple instances of delayed_job running as long as they have different process names. As Slim mentioned in his comment, you can use the -i flag to add a unique numeric identifier to the process name. So the commands would look like:
RAILS_ENV=env_name1 script/delayed_job -i 1 start
RAILS_ENV=env_name2 script/delayed_job -i 2 start
This would create two separate delayed_job instances, naming them delayed_job.1 and delayed_job.2.
A gotcha is that when you do this, you also have to use the same flag when stopping them. Omitting the -i 1 or -i 2 when calling stop won't stop them, because delayed_job won't be able to find the corresponding process.
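For symmetry, here are the matching stop commands, mirroring the start commands above with the same identifiers:
RAILS_ENV=env_name1 script/delayed_job -i 1 stop
RAILS_ENV=env_name2 script/delayed_job -i 2 stop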

Not sure if it'll solve your problem but... I often need to run multiple instances of script/server - and those don't play nicely with each other either. The way to get them running is to use different ports, e.g.:
RAILS_ENV=env_name1 script/server -p 3000
RAILS_ENV=env_name2 script/server -p 3002
Perhaps this'll work for delayed_job too?
(though I'd avoid port 3000 as it's the standard Rails port) :)

Related

cloudcontrol.com tcp port in use & procfile multiple commands & push hooks

I'm trying to get Redmine running on cloudcontrol.com. I've got four questions:
1. I need to do more than start a webserver, for example I need to run rake tasks each time I deploy. Can I put those in a one-liner? I've got the following in my Procfile for testing:
web: touch foobar; echo "barbarz"; bundle exec rails s -p $PORT -e production
but I neither see a file foobar nor do I get barbarz in the log files :(
2. When I log in to the server and want to start the application, it tells me tcp $PORT is already in use:
u24293@depvk7jw2mk-24293:~/www$ fuser $PORT/tcp # netstat and lsof are not available
24293/tcp: 10 13
u24293@depvk7jw2mk-24293:~/www$ ps axu | grep 13
u24293 13 0.0 0.0 52036 3268 ? SNs 15:22 0:00 sshd: u24293@pts/0
By sshd??? Why would that be?
3. I need to change this default behaviour during push:
-----> Rails plugin injection
Injecting rails_log_stdout
Injecting rails3_serve_static_assets
or run something after it, as easyredmine doesn't like plugins in vendor/plugins (or I'd have to change the code of easyredmine). How would I do that (not change the code, but run an after hook, like with capistrano)?
4. We have our own gitlab on a dedicated server, and bundle needs to pull gems from it. How can I get the public key of the user running the app before the first deployment, so I can add it to gitlab?
thanks in advance :)
The web command is only executed in the web containers. Using run bash connects you to a special ssh container of your app. See https://www.cloudcontrol.com/dev-center/Platform%20Documentation#secure-shell-ssh
Generally, you cannot put multiple commands in one Procfile line. Wrap them in a sh -c '<cmd1>; <cmd2>' call or use a shell script explicitly.
Keep in mind that this script will be executed in each container being started. That includes every container your app is deployed with and any redeploys triggered by the platform during operation (in case of node failures, addon changes, etc.).
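As a concrete sketch of that suggestion, a deploy task plus the server could be wrapped like this (the db:migrate task is only an illustrative assumption, not from the question):
web: sh -c 'bundle exec rake db:migrate; bundle exec rails s -p $PORT -e production'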
In the ssh container, $PORT is used by the ssh server you are connected to.
If it is a problem with redmine at runtime, you could remove the plugins in the mentioned startup script. If it's a problem during the gem install, there is currently no way to circumvent this behavior.
Dependencies requiring special ssh keys are not supported right now. If your server supports basic auth over https, you can use the https://<username>:<password>@hostname syntax.
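For example, a Gemfile entry using that syntax could look like this (hostname, credentials, and gem name are placeholders):
gem 'somegem', :git => 'https://myuser:mypassword@gitlab.example.com/group/somegem.git'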

Using foreman with more than one .env file

With foreman, I'm trying to start my rails app using two .env files.
One is the regular .env file, but then a second one has extra variables to load. I'm trying to use the method mentioned in this guide to keep config variables out of code: https://devcenter.heroku.com/articles/config-vars
Here is my Procfile:
web: bundle exec rails server thin -p $PORT --env $RACK_ENV,var.env
My problem is that foreman doesn't seem to want to take the two arguments for --env, even though the docs say that it should be possible:
-e, --env
Specify an alternate environment file. You can specify more than one file by using: --env file1,file2.
When I try to run it with "foreman start," I get the error:
06:22:46 web.1 | You did not specify how you would like Rails to report deprecation notices for your development,var.env environment, please set config.active_support.deprecation to :log, :notify or :stderr at config/environments/development,var.env.rb
It just doesn't seem to want to split "development,var.env" and process them separately. When I looked at the foreman code, it looks like it should just do a simple split on the comma, so I can't tell if I'm doing something wrong or it's a bug.
I haven't used that feature yet, but from the docs I would say you are trying to specify the command line parameter in the wrong place. Instead of adding it to the process in the Procfile, you should add it to the foreman command itself:
foreman start --env .env,.env.local
That way the environment is set for all processes started by foreman.
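With the flag moved to the foreman invocation, the Procfile no longer needs --env at all; using the question's file names, this would look like (a sketch based on the Procfile above):
web: bundle exec rails server thin -p $PORT
foreman start --env .env,var.env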

Rails 3 delayed_job start after rebooting

I have my rails site deployed under apache, and apache runs as a service. Now I have added delayed_job there and things work fine.
Now I want to start the workers together with apache, i.e., after rebooting the server, my site and workers are up and ready so I don't have to log in and type "sudo RAILS_ENV=production script/delayed_job -n 2 start".
Another issue is that whenever I want to start delayed_job I have to use "sudo"...
Any idea how to avoid those 2 issues?
Thanks for your help.
Use the whenever gem and its 'every :reboot' functionality. In schedule.rb:
environment = ENV['RAILS_ENV'] || 'production'
every :reboot do
  command "cd #{path} && RAILS_ENV=#{environment} bin/delayed_job --pool=queue1:2 --pool=queue2,queue3:1 restart"
end
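To install the resulting cron entry you'd then run whenever's updater (this is whenever's standard CLI, not something from the original answer):
whenever --update-crontab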
Could you just create a shell script to execute the commands you need?
#!/bin/sh
# stop delayed job (the stop command is the standard counterpart to start)
sudo RAILS_ENV=production script/delayed_job stop
# restart apache
apachectl restart
# start delayed job
sudo RAILS_ENV=production script/delayed_job -n 2 start
It sounds like you want to have delayed_job automatically start after apache starts when you boot up the hardware. If that's the case you need to write an init script in /etc/init.d or /etc/rc.d/init.d (depending on your system). This page gives a decent primer on this:
http://www.philchen.com/2007/06/04/quick-and-dirty-how-to-write-and-init-script
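A minimal init script along those lines might look like the sketch below (the app path and worker count are assumptions; adapt them and hook the script into your runlevels, e.g. with update-rc.d or chkconfig):
#!/bin/sh
# /etc/init.d/delayed_job -- start/stop the delayed_job workers for the app
APP_DIR=/var/rails-apps/myapp
case "$1" in
  start)
    cd $APP_DIR && RAILS_ENV=production script/delayed_job -n 2 start
    ;;
  stop)
    cd $APP_DIR && RAILS_ENV=production script/delayed_job stop
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac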

Rails + Postgres drop error: database is being accessed by other users

I have a rails application running over Postgres.
I have two servers: one for testing and the other for production.
Very often I need to clone the production DB on the test server.
The command I'm running via Vlad is:
rake RAILS_ENV='test_server' db:drop db:create
The problem I'm having is that I receive the following error:
ActiveRecord::StatementInvalid: PGError: ERROR: database <database_name> is being accessed by other users DROP DATABASE IF EXISTS <database_name>
This happens if someone has accessed the application via the web recently (postgres keeps a "session" open).
Is there any way that I can terminate the sessions on the postgres DB?
Thank you.
Edit
I can delete the database using phppgadmin's interface but not with the rake task.
How can I replicate phppgadmin's drop with a rake task?
If you kill the running postgresql connections for your application, you can then run db:drop just fine. So how to kill those connections? I use the following rake task:
# lib/tasks/kill_postgres_connections.rake
task :kill_postgres_connections => :environment do
  db_name = "#{File.basename(Rails.root)}_#{Rails.env}"
  sh = <<EOF
ps xa \
  | grep postgres: \
  | grep #{db_name} \
  | grep -v grep \
  | awk '{print $1}' \
  | xargs kill
EOF
  puts `#{sh}`
end

task "db:drop" => :kill_postgres_connections
Killing the connections out from under rails will sometimes cause it to barf the next time you try to load a page, but reloading it again re-establishes the connection.
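With the prerequisite hooked up, usage is unchanged; connections are killed automatically before the drop, and the kill task can also be run on its own:
rake db:drop
rake kill_postgres_connections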
An easier and more up-to-date way is:
1. Use ps -ef | grep postgres to find the connection's PID
2. sudo kill -9 <PID of the connection>
Note: several lines may list an identical PID; killing it once kills them all.
Here's a quick way to kill all the connections to your postgres database.
sudo kill -9 `ps -u postgres -o pid=`
(note the pid= format, which suppresses the PID column header so kill only receives numbers)
Warning: this will kill any running processes that the postgres user has open, so make sure you want to do this first.
I use the following rake task to override the Rails drop_database method.
lib/database.rake
require 'active_record/connection_adapters/postgresql_adapter'
module ActiveRecord
  module ConnectionAdapters
    class PostgreSQLAdapter < AbstractAdapter
      def drop_database(name)
        raise "Nah, I won't drop the production database" if Rails.env.production?
        execute <<-SQL
          UPDATE pg_catalog.pg_database
          SET datallowconn=false WHERE datname='#{name}'
        SQL
        execute <<-SQL
          SELECT pg_terminate_backend(pg_stat_activity.pid)
          FROM pg_stat_activity
          WHERE pg_stat_activity.datname = '#{name}';
        SQL
        execute "DROP DATABASE IF EXISTS #{quote_table_name(name)}"
      end
    end
  end
end
When we used the "kill processes" method from above, db:drop failed (with :kill_postgres_connections as a prerequisite). I believe that was because the connection that rake command itself was using got killed. Instead, we use a SQL command to drop the connections. This works as a prerequisite for db:drop, avoids the risk of killing processes via a rather complex command, and should work on any OS (gentoo required a different syntax for kill).
cmd = %(psql -c "SELECT pg_terminate_backend(procpid) FROM pg_stat_activity WHERE procpid <> pg_backend_pid();" -d '#{db_name}')
Here is a rake task that reads the database name from database.yml and runs an improved (IMHO) command. It also adds db:kill_postgres_connections as a prerequisite to db:drop. It includes a warning that yells after you upgrade rails, indicating that this patch may no longer be needed.
see: https://gist.github.com/4455341, references included
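A minimal sketch of such a prerequisite task using the SQL approach (the linked gist is the authoritative version; note that PostgreSQL 9.2 renamed procpid to pid, and connection_config is the same lookup used elsewhere on this page; psql is assumed to connect as the current user):
# lib/tasks/kill_postgres_connections.rake
task :kill_postgres_connections => :environment do
  db_name = ActiveRecord::Base.connection_config[:database]
  # terminate every other session connected to this database
  sh %(psql -d '#{db_name}' -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '#{db_name}' AND pid <> pg_backend_pid();")
end
task "db:drop" => :kill_postgres_connections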
Step 1
Get a list of all postgres connections with ps -ef | grep postgres
The list will look like this:
502 560 553 0 Thu08am ?? 0:00.69 postgres: checkpointer process
502 565 553 0 Thu08am ?? 0:00.06 postgres: bgworker: logical replication launcher
502 45605 553 0 2:23am ?? 0:00.01 postgres: st myapp_development [local] idle
Step 2
Stop whatever connection you want with sudo kill -9 <pid>, where pid is the value in the second column. In my case, I wanted to stop the last row, with pid 45605, so I use:
sudo kill -9 45605
Please check whether your rails console or server is running in another tab, and if so,
stop the rails server and console.
Then run
rake db:drop
Let your application close the connection when it's done. PostgreSQL doesn't keep connections open; it's the application that keeps the connection.
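For instance, the application side can release its pooled connections explicitly (this is standard ActiveRecord API, not something from the original answer):
ActiveRecord::Base.connection_pool.disconnect!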
I wrote a gem called pgreset that will automatically kill connections to the database in question when you run rake db:drop (or db:reset, etc). All you have to do is add it to your Gemfile and this issue should go away. At the time of this writing it works with Rails 4 and up and has been tested on Postgres 9.x. Source code is available on github for anyone interested.
gem 'pgreset'
Rails is likely connecting to the database in order to drop it, whereas phppgadmin logs in via the template1 or postgres database, so phppgadmin is not affected by the open connections.
This worked for me (Rails 6):
rake db:drop:_unsafe
I think we had something in our codebase that initiated a db connection before the rake task attempted to drop it.
Restart the server or computer, then try again.
It could be the simplest solution.
You can simply monkeypatch the ActiveRecord code that does the dropping.
For Rails 3.x:
# lib/tasks/databases.rake
def drop_database(config)
  raise 'Only for Postgres...' unless config['adapter'] == 'postgresql'
  Rake::Task['environment'].invoke
  ActiveRecord::Base.connection.select_all "select pg_terminate_backend(pg_stat_activity.pid) from pg_stat_activity where datname='#{config['database']}' AND state='idle';"
  ActiveRecord::Base.establish_connection config.merge('database' => 'postgres', 'schema_search_path' => 'public')
  ActiveRecord::Base.connection.drop_database config['database']
end
For Rails 4.x:
# config/initializers/postgresql_database_tasks.rb
module ActiveRecord
  module Tasks
    class PostgreSQLDatabaseTasks
      def drop
        establish_master_connection
        connection.select_all "select pg_terminate_backend(pg_stat_activity.pid) from pg_stat_activity where datname='#{configuration['database']}' AND state='idle';"
        connection.drop_database configuration['database']
      end
    end
  end
end
(from: http://www.krautcomputing.com/blog/2014/01/10/how-to-drop-your-postgres-database-with-rails-4/)
I had this same issue when working with a Rails 5.2 application and PostgreSQL database in production.
Here's how I solved it:
First, log out every connection to the database server on the PGAdmin Client if any.
Stop every session using the database from the terminal.
sudo kill -9 `ps -u postgres -o pid=`
Start the PostgreSQL server, since the kill operation above stopped the PostgreSQL server.
sudo systemctl start postgresql
Drop the database in the production environment appending the production arguments.
rails db:drop RAILS_ENV=production DISABLE_DATABASE_ENVIRONMENT_CHECK=1
That's all.
I hope this helps
[OSX][Development/Test] Sometimes it is hard to determine the proper PID when you have a lot of PostgreSQL processes like checkpointer, autovacuum launcher, etc. In this case, you can simply run:
brew services restart postgresql@12
If you dockerized your app, then restart your db service
sudo docker-compose restart db
Just make sure that you have exited the rails console in any open terminal window and exited the rails server... this is one of the most common mistakes people make.
I had a similar error saying 1 user was using the database. I realized it was ME! I shut down my rails server and then ran the rake db:drop command, and it worked!
Solution
Bash script
ENV=development
# restart postgresql
brew services restart postgresql
# get the name of the db from the rails app
RAILS_CONSOLE_COMMAND="bundle exec rails c -e $ENV"
DB_NAME=$(echo 'ActiveRecord::Base.connection_config[:database]' | $RAILS_CONSOLE_COMMAND | tail -2 | tr -d '\"')
# delete all connections to $DB_NAME
for pid in $(ps -ef | grep $DB_NAME | awk '{print $2}')
do
  kill -9 $pid
done
# drop db
DISABLE_DATABASE_ENVIRONMENT_CHECK=1 RAILS_ENV=$ENV bundle exec rails db:drop:_unsafe
ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR: database "database_enviroment" is being accessed by other users
DETAIL: There is 1 other session using the database.
For Rails 7, you can use: rails db:purge

Problems with starling/workling in production mode

I have a rails app that has asynchronous processing, and I'm having trouble getting it to work in production mode. I start starling from the root of the application like so:
starling -d -P tmp/pids/starling.pid -q log/
then I start workling like this:
./script/workling_client start -t
The first time I ran this, it complained because there was no development database, so I created a development database, and that error went away when I restarted workling.
But when I try to actually run an asynchronous process, I get this message in log/production.log:
Workling::QueueserverNotFoundError (config/workling.yml configured to connect to queue server on localhost:15151 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it.
So, I run
sudo killall starling
then restart starling from the root of the application like this:
starling -d -P tmp/pids/starling.pid -q log/ -p 15151
which seems to work fine, but then when I try to start workling again with script/workling_client start -t, I get this message in the console:
/var/rails-apps/daisi/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:68:in `raise_unless_connected!': config/workling.yml configured to connect to queue server on localhost:22122 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it. If you don't want to use Starling, then explicitly set Workling::Remote.dispatcher (see README for an example) (Workling::QueueserverNotFoundError)
So, I tried changing the config/workling.yml file inside the workling plugin to make both production and development listen on 15151. That didn't work, then I tried both of them on 22122, still no dice, so I tried a random port, but it gives the exact same behaviour no matter what I put in the workling.yml file.
The answer is that workling (not starling) has to be started with the Rails environment set, as such:
RAILS_ENV=production ./script/workling_client start -t
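Starling, correspondingly, should listen on the port configured for the production environment in config/workling.yml (22122, per the error above); a matching start command would be:
starling -d -P tmp/pids/starling.pid -q log/ -p 22122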
