I need to run my Rails server on system startup. I am running my Ruby on Rails app on an Ubuntu server, and I need the Rails server to be running at all times. If the Ubuntu server shuts down for some reason, the Rails server should start automatically as soon as the machine boots again. For this I made the script below:
cd /home/subhrajyoti/spa
rails server -p 8888 -b 10.25.25.100 -d
On Windows I put this kind of file in the Startup folder and it runs automatically each time Windows starts. Now I need the same behaviour on Ubuntu.
Check this out, it might help with your issue:
Link
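In case the link goes stale: on a systemd-based Ubuntu (15.04 and later), the usual approach is a unit file. A minimal sketch, assuming the path and user from the question, and that rails is reachable from a login shell (the unit name spa.service is my assumption):

sudo tee /etc/systemd/system/spa.service >/dev/null <<'EOF'
[Unit]
Description=Rails server for the spa app
After=network.target

[Service]
User=subhrajyoti
WorkingDirectory=/home/subhrajyoti/spa
# Run in the foreground (no -d) so systemd can supervise and restart it;
# bash -l picks up rbenv/rvm PATH settings if you use them
ExecStart=/bin/bash -lc 'rails server -p 8888 -b 10.25.25.100'
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable spa.service
sudo systemctl start spa.service

On releases without systemd, an @reboot crontab entry (@reboot cd /home/subhrajyoti/spa && rails server -p 8888 -b 10.25.25.100 -d) achieves the same effect.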
I use the following Dockerfile to create an image for my Rails 4.2 webapp:
FROM ruby:2.3.4
ENV LANG C.UTF-8
WORKDIR /usr/src/page
# Copy the Gemfiles first so the bundle install layer is cached between builds
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install
COPY . .
EXPOSE 3000
# Bind string: serve HTTPS straight from the app server using the given cert/key
ENV CERT_PATH ssl://0.0.0.0:3000?key=certificate.key&cert=certificate.crt
CMD rails server -b $CERT_PATH -e production
The Rails app uses the puma (version 3.9.1) webserver.
I create a Docker image and run this image as a container on a webserver, which works fine. I can access the webapp via a domain test.example.com or directly via the IP of the server. I use https to access the webapp.
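For context, the build and run steps look something like this (the image name, container name, and host port mapping are my assumptions, not from the original setup):

docker build -t page .                        # build the image from the Dockerfile above
docker run -d --name webapp -p 443:3000 page  # publish the container's HTTPS port on the host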
Now the problem:
After visiting a few pages the webapp stops responding (timeout error in the browser). I think this is only triggered when visiting the site via the IP address (https://255.255.255.255/login etc.), but once it happens, access via test.example.com stops working as well.
I should add that the SSL cert is self-signed.
In the Rails logs inside the container I can see that the last request (the one that did not work) was never even logged, so my guess is that it could be a Docker problem?
However sudo journalctl -fu docker.service does not seem to show errors.
So my questions: Are there errors in my Dockerfile? Are there known bugs in any of the software used? Does anybody know what my problem is? And what are some Docker commands to find error logs?
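On the last question, these standard Docker commands are the usual starting points (the container name webapp is an assumption):

docker ps -a                 # container status; look for unexpected exits or restarts
docker logs -f webapp        # stdout/stderr of the container, i.e. the Rails/Puma output
docker inspect webapp        # full config and state, including OOMKilled and RestartCount
docker stats webapp          # live CPU/memory usage, to spot memory exhaustion
docker events                # daemon-level events (die, oom, kill) as they happen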
I did the following to solve the problem:
1) Updated to the newest Ruby version
2) Updated to the newest Puma version
3) Enabled Puma workers
4) Added a config file for Puma (a sketch follows below)
For the time being, these changes seem to have solved the problem.
It appears that Docker was never the problem at all.
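For reference, a minimal config/puma.rb in the spirit of steps 3 and 4 might look like this; the worker and thread counts are illustrative assumptions, not the values actually used:

cat > config/puma.rb <<'EOF'
# Illustrative Puma config; tune the numbers to the container's resources
workers 2                     # forked worker processes (step 3)
threads 1, 5                  # min..max threads per worker
environment ENV.fetch("RAILS_ENV") { "production" }
preload_app!                  # load the app once, then fork workers
EOF

The Dockerfile CMD can then become bundle exec puma -C config/puma.rb, with the ssl:// binding moved into the config file via Puma's bind directive.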
We are starting to integrate our build and automated testing process into a Jenkins pipeline, and I have an issue with starting the Rails server.
First of all, this is our pipeline layout: a Servers Configuration stage with steps Config 0, 1 and 2, followed by a QA stage.
In each "Config" step (0, 1, 2), I start a different Rails app on a specific port using rails s -p XXXX -d, and right after that command I run lsof -i:XXXX and I DO see the server running.
But in the QA stage, when I try to use the servers I started in the Servers Configuration stage, our tests get connection refused; and when I check the machines the apps ran on, the servers are no longer running, even though I used -d to daemonize them.
Any ideas? It seems like the Rails servers ran only for the duration of the Servers Configuration stage and then shut down. Is that possible, and if so, how do I handle it?
Thanks!
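One plausible cause, offered as an assumption rather than a confirmed diagnosis: Jenkins' ProcessTreeKiller reaps every process spawned by a build step once that step finishes, daemonized or not, which matches these symptoms. The documented workaround is to override the environment cookie Jenkins uses to track spawned processes:

# in the sh step that starts each server (the port is illustrative):
JENKINS_NODE_COOKIE=dontKillMe rails s -p 3001 -d
# for freestyle jobs the equivalent is BUILD_ID=dontKillMe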
I have hosted a Rails application on AWS. Every time I want to access my website, I have to go through the same repetitive steps:
1. ssh -i <my-keypair.pem> ec2-user@<AWS-IPv4-public-IP>
2. rails s -p <port> -b 0.0.0.0
After some time, I also get this error
'packet_write_wait: Connection to <AWS-IPv4-public-IP> port 22: Broken pipe'
I did some research and can't seem to find a way to keep my application running 24/7 without repeating these steps every time before accessing it.
My AWS instance is on 24/7, so the website should run 24/7 as well.
Would assigning an Elastic IP to my instance help?
Appreciate any guidance.
EDIT: I followed this tutorial initially https://www.youtube.com/watch?v=jFBbcleSPoY and that is where I found the steps mentioned above.
There are many ways to run the Rails server as a daemon. If you google "rails server as daemon", you will find many guides. I have not added any links, as many of the good ones are by hosting service providers.
If you still want to run the Rails server from a shell for some reason, tmux is the way to go. The following excerpt is shamelessly copied from the tmux wiki:
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal.
You can open a tmux session and start the Rails server. You can then detach from tmux and quit your SSH session. Whenever you wish, you can SSH back into your server and re-attach to the tmux session; your Rails server will still be running as you left it. This is a great way to run a development server in the foreground for debugging, for example:
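tmux new -s rails            # open a new session named "rails" (the name is arbitrary)
rails s -p 3000              # inside the session: run the server in the foreground
# press Ctrl-b then d to detach; the session keeps running on the server
tmux attach -t rails         # after the next ssh login: re-attach to the session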
Resolved the issue with https://mosh.org/, for anyone who stumbles on this post in the future.
Download and install mosh (mobile shell)
Run the modified version of the command mentioned in my original question
mosh --ssh="ssh -i <your-keypair.pem>" ec2-user@<AWS-Instance-IP>
This resolved my packet_write_wait issues and I don't have to keep restarting the rails server.
I recently acquired an AWS subscription and I'm trying to set up a production environment to host my Rails app. I'm using an EC2 instance with CentOS 7, with MariaDB as the SQL server and Thin as the Rails server. Everything went fine until I stopped the EC2 instance yesterday. I did not change any configuration, and yet I can't get Thin to start; I've tried many solutions without success. I'm starting Thin from a config file: if I use the plain command thin start -e production it starts successfully, but I have no luck with the config file, and the pid and socket files are not created. I don't think the config file itself is the problem, because it worked fine yesterday. Can you give me some hints about what the problem may be? Could it be something related to OS configuration, like permissions? Thanks in advance!
It was a permissions issue after all. I wasn't executing the command as root, so it failed for lack of permissions. sudo su and it worked like a charm. Thanks anyway!
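For anyone hitting the same wall, the config-file invocation in question looks roughly like this (the paths are my assumptions; thin config can generate the file for you):

# generate a config file once (values are illustrative)
thin config -C /etc/thin/myapp.yml -c /var/www/myapp -e production -s 1
# start from the config file; run as root if the pid/socket paths require it
sudo thin start -C /etc/thin/myapp.yml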
I configured my Rails 3 production app about 6 months ago on Ubuntu running nginx/passenger, using git and Capistrano for deployment.
Fast forward to last week: the data center I was using (DigitalOcean NYC) had a complete power failure (and the battery backup didn't work), resulting in my server shutting down completely.
I did not set passenger or mysql to start on reboot, so when the hardware server restarted, my app was still down.
I really did not know much about what I was doing at the time when I launched it (since it was my first production server that I have worked with), and I followed a guide to get it up and running.
When I attempted to get the app running again, I managed to start mysqld no problem - but for the life of me couldn't remember how to get nginx/passenger running again.
Since time was of the essence (my client needed the app up and running ASAP), I ended up getting the app back up and running by navigating to my app directory (/current) and using the command:
passenger start -p 80 -e production
This did the trick but actually started Passenger Standalone. It seems to work fine (it is not a big or complicated app at all, maybe a few users at a time). I can navigate back to my directory and start and stop it using the above command (and passenger stop -p 80).
However, now my Capistrano deploy (cap deploy) no longer restarts the server on deploy (it tries to run touch tmp/restart.txt), which does nothing even when run manually, since the server is running Passenger Standalone.
I can't remember how I got the server up and running in the first place because it was so long ago. I'm not using RVM - just the version of Ruby running directly on the server.
Does anyone know the correct command to start nginx/passenger (not standalone) on Ubuntu?
And going a step further: how can I get mysqld and nginx/passenger to load automatically after a hard server restart?
Capistrano does not restart the server because it actually creates a new app directory (/u/apps/.../releases/xxx), while Passenger Standalone is still running in the old app directory (/u/apps/.../releases/yyy). Therefore touching restart.txt doesn't work. Instead, you have to restart Passenger Standalone like this:
cd /path-to-previous-release && passenger stop -p 80
cd /path-to-current-release && passenger start -p 80 -e production
You mentioned you want to start nginx/passenger. I assume that you mean the Nginx mode. Here's what you need to do:
Install Phusion Passenger using the official Passenger APT repository.
There is no step 2. If you did step 1, then the Ubuntu package will automatically configure Nginx to start at system boot, which will automatically start Passenger as well.
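The install steps were roughly as follows; treat the signing key, repository codename, and package names as assumptions and defer to the current Passenger documentation, since they change between releases:

sudo apt-get install -y dirmngr gnupg apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 561F9B9CAC40B2F7
sudo sh -c 'echo deb https://oss-binaries.phusionpassenger.com/apt/passenger bionic main > /etc/apt/sources.list.d/passenger.list'
sudo apt-get update
sudo apt-get install -y nginx-extras passenger
# then enable the Passenger include in /etc/nginx/nginx.conf and restart Nginx
sudo service nginx restart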
I don't understand why you ask how you can get mysqld to automatically start on a hard server restart. Mysqld is always started during system boot. You don't have to do anything.
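To double-check that both services come up at boot, the standard commands are (the names assume Ubuntu's default nginx and mysql services):

# sysvinit/upstart-era Ubuntu releases:
sudo update-rc.d nginx defaults
sudo update-rc.d mysql defaults
# systemd-based releases (15.04 and later):
systemctl is-enabled nginx mysql
sudo systemctl enable nginx mysql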