Mac bash script to start multiple Passenger standalone instances? - ruby-on-rails

I have some Rails projects on Ruby 1.9.x and some still on 1.8.7. I'm using RVM, and I'm using Phusion's preferred method of defaulting to 1.9 for my main Passenger and using the 1.8.7 (REE)-based projects in standalone mode.
I didn't feel like setting up vhosts for these, so I just bookmarked my dev sites with the localhost and port.
So, to restart, I created this bash script (answering my own question here to help any others) ...

Quick and dirty shell script.
In ~/start_rails.sh:
#!/bin/bash
# Loop through directories of Passenger standalone sites
# and start each one, incrementing the port each time
sites=( rails_site_1 rails_site_2 rails_site_3 )
port=3001
for dir in "${sites[@]}"
do
  echo "Switching to ${dir}"
  cd ~/Sites/"$dir" || exit 1
  echo "Starting Passenger on port ${port}"
  passenger start -a 127.0.0.1 -p ${port} -d
  echo ""
  port=$((port+1))
done
Make sure the sites array lists the apps in the same order you bookmarked their ports. Make it executable (chmod +x ~/start_rails.sh) and call it with ~/start_rails.sh.
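For completeness, a matching stop script is nearly identical; this is a sketch under the same assumptions about directory layout and port numbering (passenger stop accepts the same -p flag):

#!/bin/bash
# Stop the same Passenger standalone instances, matching ports to sites
sites=( rails_site_1 rails_site_2 rails_site_3 )
port=3001
for dir in "${sites[@]}"
do
  cd ~/Sites/"$dir" || continue
  passenger stop -p ${port}
  port=$((port+1))
done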

Related

Rails server started via remote SDK exits, while it starts fine in an SSH CLI session

I'm working on a headless VM and configured my host's RubyMine to use the remote SDK through the NAT interface of VirtualBox.
After configuring the remote SDK, my Run/Debug configuration looks fine. When I start the server, it returns the following output:
/usr/bin/ruby -e '$stdout.sync=true;$stderr.sync=true;load($0=ARGV.shift)' /home/user/project/bin/rails server -b 0.0.0.0 -p 3000 -e development
git://github.com/rails/sprockets-rails.git (at master#06852de) is not yet checked out. Run `bundle install` first.
Process finished with exit code 11
When I fire up the same command while SSH'ed into the VM, the server starts fine.
The SDK seems to work; it invokes /usr/bin/ruby, which is the proper path (no RVM)
The installed gems should be alright, since everything works from the SSH command line
The relevant lines in the Gemfile are:
gem 'sprockets-rails', github: 'rails/sprockets-rails'
ruby '2.3.1'
gem 'rails', '~> 4.2'
The gems are installed in ~/.gem/ruby/2.3.0/bin/
The output's advice to run bundle install is useless, since I already did, and everything works anyway in the CLI. It seems like RubyMine needs adjusting. Does anyone have an idea what I could do?
Found the answer!
The reason is the non-interactive shell session, which sources fewer of your dotfiles (see man bash and search for "non-interactive").
I knew it had nothing to do with RubyMine when I tried this on my host system:
ssh user@127.0.0.1 "/usr/bin/ruby /home/user/project/bin/rails server -b 0.0.0.0 -p 3000 -e development"
which produces the same output as RubyMine did.
Also check ssh user@127.0.0.1 "env" to see the variables of your shell environment.
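To see exactly what's missing, you can compare the non-interactive environment with a forced login shell, which does source /etc/profile and ~/.bash_profile (the host is the same VM as above):

# Environment of a non-interactive ssh command
ssh user@127.0.0.1 "env | sort" > noninteractive.env
# Environment of a login shell on the same machine
ssh user@127.0.0.1 "bash -lc 'env | sort'" > login.env
diff noninteractive.env login.env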
man 8 sshd explains how to set the environment, using ~/.ssh/environment (a file on the server, since the man page is sshd's) to store key/value pairs like KEY=value:
This file is read into the environment at login (if it exists). It can
only contain empty lines, comment lines (that start with ‘#’), and
assignment lines of the form name=value. The file should be writable
only by the user; it need not be readable by anyone else. Environment
processing is disabled by default and is controlled via the
PermitUserEnvironment option.
Note the PermitUserEnvironment option; it goes into your sshd_config (usually /etc/ssh/sshd_config) on the server.
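Putting it together, a minimal sketch of the two files (the PATH value is a hypothetical example built from the gem path mentioned above):

# ~/.ssh/environment on the server
PATH=/home/user/.gem/ruby/2.3.0/bin:/usr/local/bin:/usr/bin:/bin

# /etc/ssh/sshd_config on the server, followed by an sshd restart
PermitUserEnvironment yes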

Capistrano foreman cannot export upstart scripts because it's trying to run commands as root

I'm using the capistrano3-foreman gem to deploy my app to production on a CentOS server, but Capistrano is trying to run the foreman export command as root. Since I installed RVM and everything else under a user that has passwordless sudo privileges in the sudoers file, foreman export cannot be completed.
I'm getting the following error.
sh: /root/.rvm/bin/rvm: No such file or directory
How can I prevent capistrano-foreman from trying to run the command as root, and make it use my user's home path?
Thanks in advance
OK: since RHEL and CentOS 7 migrated to systemd, my first mistake was trying to export foreman to upstart.
But when I exported foreman to systemd, systemd did not recognise the foreman export scripts as a service, so that didn't work either.
After many hours of work and research I decided to take my chances with supervisord on CentOS 7, and now it works like a charm.
http://supervisord.org/installing.html
And please note that Debian & Ubuntu are also getting rid of upstart...
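For reference, a minimal supervisord program entry for a Rails/Puma process under RVM might look like this; the app name, user, and paths are assumptions for illustration:

[program:myapp]
command=/home/deploy/.rvm/bin/rvm default do bundle exec puma -C config/puma.rb
directory=/home/deploy/apps/myapp/current
user=deploy
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp.stdout.log
stderr_logfile=/var/log/myapp.stderr.log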

EC2 : Rails deployment gives blank pages

I have deployed my Rails application on EC2. It runs on two servers: one for the Rails application and a second for the DB.
When I start the application using "rails s -e production &" and stay connected via SSH,
I can see the web pages.
As soon as I disconnect SSH, I can no longer see the pages.
There are no errors thrown. One weird thing: production.log does not contain anything;
everything is spit out on the console.
You are running Rails in the current SSH session. Any programs you have running during that session will stop when you disconnect. You need to set up your Rails app to run as a daemon, using something like Phusion Passenger.
You are basically running the built-in WEBrick server, which is not really meant for production, so it's likely that the process is getting killed when the parent process (your SSH session) terminates.
You can probably tweak the configuration to make WEBrick not quit, or you can simply run your session using screen or tmux:
Screen:
$ screen
$ rails s -e production &
$ screen -d
When you want to reattach:
$ screen -r
Tmux:
$ tmux
$ rails s -e production &
$ # Hit <ctrl-b><ctrl-d> to detach
When you want to reattach:
$ tmux attach -t 0
Or, as @datasage mentioned, you can run your Rails app with an actual production web server like Phusion Passenger or Unicorn.

How to keep server running on EC2 after ssh is terminated

I have an EC2 instance on which I installed a Rails server. The server runs fine when I do
rails server
But after I close the SSH connection, the server also stops. How can I keep the server running even after closing the SSH connection?
screen rails s
did the trick
After that, Ctrl-A D detached the session; I left, and the server is running fine.
Try this. We have to start the Rails server as a daemon:
rails s -d &
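Note that -d already daemonizes the server, so the trailing & is redundant. To stop a daemonized server later, you can kill it via its pidfile (the path below is the Rails default; yours may differ):

kill $(cat tmp/pids/server.pid)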
Running it as a server means putting it behind nginx or Apache (or whatever you prefer); the development server is not meant to be run as a production server.
If you need more info, see https://www.digitalocean.com/community/articles/how-to-install-rails-and-nginx-with-passenger-on-ubuntu
Also, if you want a more advanced solution, use rubber: https://github.com/rubber/rubber
I needed everything running in the background, not just Rails. Install screen, which makes a sub-terminal that isn't affected by your SSH connection: sudo apt-get install screen. Open screen with screen, then start Rails with rails server &.
Press Ctrl-A then D to escape, and type screen -r to get back into the screen terminal.
I recommend using Apache or something else instead of the regular Rails server, but you can probably add & at the end and feel free to leave:
rails server &
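A bare & alone may not survive the disconnect, since the shell can still send SIGHUP to its children when the SSH session closes; the classic workaround is nohup:

nohup rails server -e production &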
These steps worked for me. My OS is Ubuntu 16.04.4 LTS.
sudo apt-get install screen
screen rails s
Ctrl-A D from the terminal to detach the existing session and let it run.
Here's a production-proof version using RVM and systemd. It will keep the server alive if it gets terminated for any reason.
[Unit]
Description=Puma Control
After=network.target
[Service]
Type=forking
User=user
WorkingDirectory=/var/www/your_project_name
PIDFile=/var/www/your_project_name/shared/tmp/pids/puma.pid
ExecStart=/home/user/.rvm/bin/rvm default do bundle exec puma -C /var/www/your_project_name/shared/puma.rb --daemon
ExecStop=/home/user/.rvm/bin/rvm default do bundle exec pumactl -S /var/www/your_project_name/shared/tmp/pids/puma.state -F /var/www/your_project_name/shared/puma.rb stop
Restart=always
# RestartSec=10
[Install]
WantedBy=default.target
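Assuming the unit is saved as /etc/systemd/system/puma.service, it can be enabled and started like this:

sudo systemctl daemon-reload
sudo systemctl enable puma
sudo systemctl start puma
sudo systemctl status puma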

phusion passenger not seeing environment variables?

We are running Ubuntu servers with nginx + Phusion Passenger for our Rails 3.0.x apps.
I have an environment variable set in /etc/environment on the test machines:
MC_TEST=true
If I run a console (bundle exec rails c) and output ENV["MC_TEST"], I see 'true'.
But if I put that same code on a page ( <%= ENV["MC_TEST"] %> ), it doesn't see anything. That variable does not exist.
Which leads me to two questions:
1 - What is the proper way to get environment variables into passenger with nginx (not apache SetEnv)?
2 - Why does Passenger not have a proper environment?
Phusion Passenger v4+ can read environment variables directly from the bashrc file. Make sure that bashrc lives in the home folder of the user under which the Passenger process is executed (in my case it was ubuntu, for EC2 Linux and nginx).
Here is the documentation, which goes into the details of bashrc.
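For example, in the ~/.bashrc of that user (the value is hypothetical):

# ~/.bashrc of the user running the Passenger process
export MC_TEST=true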
I had the same problem as you, using Passenger with nginx and the nginx init script on Ubuntu.
The reason is that I used sudo service nginx restart (installed by the init script) to start nginx, so
it was run by root, and root does not get your login user's environment variables.
There are two solutions for this.
One is running nginx manually:
sudo service nginx stop
sudo -E /path/to/your/nginx
The other is to add the env to your nginx init script:
export MC_TEST=true
The latter solution is somewhat ugly, but it works. I think the better way would be to find a configuration that tells the init script to preserve the login user's env.
I got another ugly solution:
# Load key/value pairs from /etc/environment into the Rails process's ENV
env_file = '/etc/environment'
if File.exist?(env_file)
  File.readlines(env_file).each do |line|
    key, val = line.split('=', 2)
    next if val.nil?          # skip blank or malformed lines
    ENV[key.strip] = val.strip
  end
end
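If you go this route, the snippet needs to run early in boot, e.g. near the top of config/application.rb, before any code that reads ENV.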
With nginx you can use the passenger_env_var directive. See the example below:
passenger_env_var GEM_HOME /home/foo/.rbenv/rubygems;
passenger_env_var GEM_PATH /home/foo/.rbenv/rubygems/gems;
So for your case
passenger_env_var MC_TEST true;
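In context, the directive sits inside the server (or http) block of your nginx config alongside the other Passenger settings; the server name and paths here are hypothetical:

server {
    listen 80;
    server_name example.com;
    root /var/www/myapp/public;
    passenger_enabled on;
    # One directive per variable; values are examples
    passenger_env_var MC_TEST true;
    passenger_env_var RAILS_ENV production;
}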
