I've been struggling with this for a week now and really can't seem to find an answer. I've deployed my Rails app with Capistrano, and I use Puma as the server.
When I deploy, everything works OK. The problem is getting Puma to start at reboot and/or restart when it crashes.
To get the deployment set up, I've used this tutorial. I'm also using RVM. The problem seems to be getting the service to start Puma. Here's the service file I've used:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
#User=my-user
WorkingDirectory=/home/my-user/apps/MyApp/current
ExecStart=/home/my-user/apps/MyApp/current/sbin/puma -C /home/my-user/apps/MyApp/shared/puma.rb
Restart=always
[Install]
WantedBy=multi-user.target
That doesn't work. I was starting to think the problem was Ruby not being installed for all users, so I've installed RVM for all users and I still get the same problem. My server has only root and my-user.
Looking at how Capistrano deploys, the command it runs is: cd /home/my-user/apps/MyApp/current && ( RACK_ENV=production /home/my-user/.rvm/bin/rvm default do bundle exec puma -C /home/my-user/apps/MyApp/shared/puma.rb --daemon ). If I use the aforementioned command, I get an error from systemd complaining about missing parameters. So I've written a script with it and got the service file to call this script to start the app.
That doesn't work either. Note that if I call the script from anywhere on the server it does start the app, so it's an issue with the systemd configuration, but I can't figure out what's wrong and I'm not sure how to debug it. I've seen the debug page on systemd's website, but it didn't help me. If I run systemctl status puma.service, all it tells me is that the service is in a failed state, but it doesn't tell me how or why.
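Side note for anyone debugging the same thing: the unit's own stdout/stderr end up in the systemd journal, so (assuming the unit file is installed as puma.service) the following should show the actual error rather than just the failed state:
# show the unit's log output, jumping to the end
journalctl -u puma.service -e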
Also worth noting: if I run bundle exec puma -C /home/my-user/apps/MyApp/shared/puma.rb from my app folder it works fine, so how could I duplicate that command in a systemd service?
In the end the problem was twofold: 1) RVM wasn't installed properly for all users, which meant the deployer user didn't have ruby/bundle/etc. available, and 2) the service file was also wrong. For reference, below is the revised service file that worked for me:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=deployer
WorkingDirectory=/var/www/apps/MRCbe/current
ExecStart=/bin/bash -lc 'bundle exec puma -C /var/www/apps/MRCbe/shared/puma.rb'
Restart=always
[Install]
WantedBy=multi-user.target
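Assuming the unit file is installed as /etc/systemd/system/puma.service, don't forget to reload systemd and enable the service after editing it, so it also comes up at boot:
# pick up the edited unit file
sudo systemctl daemon-reload
# start now and at every boot
sudo systemctl enable puma.service
sudo systemctl start puma.service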
Have you looked into Foreman?
Foreman makes it easy to start and stop your application if it has multiple processes.
Incidentally it also provides an export function that can generate some systemd or upstart scripts for you to (re)start and stop your application.
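For example, a rough sketch of that export (the app name, user, and target directory below are just placeholders, and you will likely need root to write to /etc/systemd/system):
# generate systemd unit files from the Procfile in the current directory
foreman export systemd /etc/systemd/system -a myapp -u deployer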
As you are already using Capistrano, you can use capistrano-foreman to integrate all this nicely with Capistrano.
I hope you find some use in these resources.
Related
I'm using the capistrano3-foreman gem to deploy my app into production on a CentOS server, but Capistrano is trying to run the foreman export command as root. Since I have installed RVM and everything else under a user which has a no-password privilege in the sudoers file, the foreman export cannot be completed.
I'm getting the following error.
sh: /root/.rvm/bin/rvm: No such file or directory
How can I prevent capistrano-foreman from trying to run the command as root and make it use my user's home path instead?
Thanks in advance
OK, since RHEL & CentOS 7 migrated to systemd, my first mistake was trying to export foreman to upstart.
But when I exported foreman to systemd, systemd did not recognise the foreman export scripts as a service, so that didn't work either.
After many hours of work and research I decided to take my chances with supervisord on CentOS 7, and now it works like a charm.
http://supervisord.org/installing.html
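For reference, a minimal program entry for a Puma-style web process looks something like this (the command, paths, app name, and user are placeholders for your own setup; the file usually goes in supervisord's include directory, e.g. /etc/supervisord.d/ on CentOS):
[program:myapp]
; run the server through a login shell so RVM/bundler are on the PATH
command=/bin/bash -lc 'bundle exec puma -C /var/www/myapp/shared/puma.rb'
directory=/var/www/myapp/current
user=deployer
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp.log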
And please note that Debian & Ubuntu are also getting rid of upstart...
I have an EC2 instance on which I installed a Rails server. The server runs fine when I do
rails server
But after I close the SSH connection the server also stops. How can I keep the server running even after closing the SSH connection?
screen rails s
did the trick
After that, Ctrl + A then D, and I left; the server is running fine.
Try this. We have to start the Rails server as a daemon.
rails s -d &
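To stop it again later, note that the daemonized server writes its PID to tmp/pids/server.pid by default, so from the app directory:
# kill the background server using the PID file Rails writes
kill $(cat tmp/pids/server.pid)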
Running it as a server means running it through nginx or Apache or whatever; the development server is not meant to be run as a production server. If you need more info, see https://www.digitalocean.com/community/articles/how-to-install-rails-and-nginx-with-passenger-on-ubuntu
Also, if you want a more advanced solution, use Rubber: https://github.com/rubber/rubber
I needed mine running everything, not just Rails, in the background. Install Screen, which makes a sub-terminal that isn't affected by your SSH connection: sudo apt-get install screen. Open screen with screen, then start Rails with rails server &.
Press Ctrl + A, then D to detach, and type screen -r to get back into the screen terminal.
I would recommend using Apache or something else instead of the regular Rails server, but you can probably just add & at the end and feel free to leave:
rails server &
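If the process still dies when you log out, wrapping it in nohup (so it ignores the hangup signal when the terminal closes) usually does the trick:
# keep the server alive after the SSH session ends
nohup rails server &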
These steps worked for me. My OS is Ubuntu 16.04.4 LTS:
sudo apt-get install screen
screen rails s
Ctrl + A then D from the terminal to detach the existing process and let it run.
Here's a production-proof version using RVM and systemd. It will keep the server alive if it gets terminated for any reason.
[Unit]
Description=Puma Control
After=network.target
[Service]
Type=forking
User=user
WorkingDirectory=/var/www/your_project_name
PIDFile=/var/www/your_project_name/shared/tmp/pids/puma.pid
ExecStart=/home/user/.rvm/bin/rvm default do bundle exec puma -C /var/www/your_project_name/shared/puma.rb --daemon
ExecStop=/home/user/.rvm/bin/rvm default do bundle exec pumactl -S /var/www/your_project_name/shared/tmp/pids/puma.state -F /var/www/your_project_name/shared/puma.rb stop
Restart=always
# RestartSec=10
[Install]
WantedBy=default.target
I'm trying to get Redmine running on cloudcontrol.com. I've got four questions:
I need to do more than start a webserver; for example, I need to run rake tasks each time I deploy. Can I put those in a one-liner? I've got the following in my Procfile for testing:
web: touch foobar; echo "barbarz"; bundle exec rails s -p $PORT -e production
but I neither see a file foobar nor do I get barbarz in the log files :(
When I log in to the server and want to start the application, it tells me TCP $PORT is already in use:
u24293@depvk7jw2mk-24293:~/www$ fuser $PORT/tcp # netstat and lsof is not available
24293/tcp: 10 13
u24293@depvk7jw2mk-24293:~/www$ ps axu | grep 13
u24293 13 0.0 0.0 52036 3268 ? SNs 15:22 0:00 sshd: u24293@pts/0
By sshd??? Why would that be?
I need to change this default behaviour during push:
-----> Rails plugin injection
Injecting rails_log_stdout
Injecting rails3_serve_static_assets
or run something after it, as easyredmine doesn't like plugins in vendor/plugins (or I change the code of easyredmine quickly). How would I do that (not change the code, but run an after hook for it, like with Capistrano)?
We have our own GitLab on a dedicated server, and for bundling I need to pull those gems. How can I get the public key of the user running the app before the first deployment, so I can add it to GitLab?
Thanks in advance :)
The web command is only executed in the web containers. Using run bash connects you to a special ssh container of your app. See https://www.cloudcontrol.com/dev-center/Platform%20Documentation#secure-shell-ssh
Generally, you cannot put multiple commands in one Procfile line. Wrap them in a sh -c '<cmd1>; <cmd2>' call or use a shell script explicitly.
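Applied to your test Procfile entry, that would look something like this:
web: sh -c 'touch foobar; echo "barbarz"; bundle exec rails s -p $PORT -e production'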
Keep in mind that this script will be executed in each container being started. This includes every container you deploy your app with and any redeploys triggered by the platform during operation (in case of node failures, addon changes, etc.).
In the ssh container the $PORT is used by the ssh server you are connected to.
If it is a problem with Redmine at runtime, you could remove the plugins in the mentioned startup script. If it's a problem during the gem install, you currently cannot circumvent this behavior.
Dependencies requiring special SSH keys are not supported right now. If your server supports basic auth over HTTPS, you can use the https://<username>:<password>@<hostname> syntax.
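In the Gemfile that would look roughly like this (the gem name, host, and repository path are placeholders for your own GitLab setup):
gem 'some_private_gem', git: 'https://<username>:<password>@<your-gitlab-host>/<group>/some_private_gem.git'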
How do I start the Thin server automatically when the server reboots?
I have a Rails 3 project which uses the Thin server. I can manually control the Thin server from the terminal. Is it possible to start the Thin server as a background process when the system reboots?
Thanks in advance.
You can use Scheduled Tasks. There's a specific trigger option for starting a task when the computer starts.
To start the process in background mode you can use the -d option of the rails command.
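For example (adding the environment flag here just as an illustration):
# -d runs the server in the background, -e picks the environment
rails server -d -e production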
I suppose you need to do this:
sudo thin install # to create an init.d entry for thin
sudo /usr/sbin/update-rc.d -f thin defaults # to set it up
sudo thin config -C /etc/thin/<appname>.yml -c /var/rails/<appdir> --servers 4 -e production # to generate a config file for it. If you already have a config file, you can just copy it to /etc/thin/ instead of creating one.
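The generated /etc/thin/<appname>.yml should contain roughly the following (the exact keys and values depend on the options you pass and your thin version):
---
chdir: /var/rails/<appdir>
environment: production
port: 3000
pid: tmp/pids/thin.pid
log: log/thin.log
daemonize: true
servers: 4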
If you use RVM on your server, browse this: RVM and thin, root vs. local user.
You can also take a look at: https://github.com/opscode-cookbooks/runit
I am working on a God script to monitor my Unicorns. I started with GitHub's example script and have been modifying it to match my server configuration. Once God is running, commands such as god stop unicorn and god restart unicorn work just fine.
However, god start unicorn results in WARN: unicorn start command exited with non-zero code = 1. The weird part is that if I copy the start script directly from the config file, it starts right up like a brand new mustang.
This is my start command:
/usr/local/bin/unicorn_rails -c /home/my-linux-user/my-rails-app/config/unicorn.rb -E production -D
I have declared all paths as absolute in the config file. Any ideas what might be preventing this script from working?
I haven't used unicorn as an app server, but I've used god for monitoring before.
If I remember rightly, when you start god and give it your config file, it automatically starts whatever you've told it to watch. Unicorn is probably already running, which is why it's throwing the error.
Check this by running god status once you've started god. If that's not the case, you can check the command's exit status on the command line:
/usr/local/bin/unicorn_rails -c /home/my-linux-user/my-rails-app/config/unicorn.rb -E production -D;
echo $?;
That echo will print the exit status of the last command. If it's zero, the last command reported no errors. Try starting unicorn twice in a row; I expect the second time it'll return 1, because it's already running.
EDIT:
Including the actual solution from the comments, as this seems to be a popular response:
You can set an explicit user and group if your process needs to be run as a specific user.
God.watch do |w|
w.uid = 'root'
w.gid = 'root'
# remainder of config
end
My problem was that I never bundled as root. Here is what I did:
sudo bash
cd RAILS_ROOT
bundle
You get a warning telling you to never do this:
Don't run Bundler as root. Bundler can ask for sudo if it is needed,
and installing your bundle as root will break this application for all
non-root users on this machine.
But it was the only way I could get Resque or Unicorn to run with god. This was on an EC2 instance, if that helps anyone.
Adding the log option has helped me greatly in debugging.
God.watch do |w|
w.log = "#{RAILS_ROOT}/log/god.log"
# remainder of config
end
In the end, my bug turned out to be that the start_script in God was being executed in the development environment. I fixed this by adding RAILS_ENV to the start script.
start_script = "RAILS_ENV=#{ENV['RACK_ENV']} bundle exec sidekiq -P #{pid_file} -C #{config_file} -L #{log_file} -d"