I recently acquired an AWS subscription and I'm trying to set up a production environment to host my Rails app. I'm using an EC2 instance with CentOS 7, MariaDB as the SQL server, and Thin as my Rails server. Everything went fine until I stopped the EC2 instance yesterday. I did not change any configuration, and yet I can't get Thin to start. I've tried many solutions without success. I'm using a config file to run Thin. If I use the command thin start -e production, it starts successfully, but I've had no luck using the config file. The pid and socket files are not created. I don't think it's a problem with my config file, because it worked fine yesterday. Can you give me some hints about what the problem may be? Could it be something related to OS configuration, like permissions? Thanks in advance!
After all, it was a permissions issue. I wasn't executing the command as root, so it would not run for lack of permissions. sudo su and it worked like a charm. Thanks anyway!
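For reference, a minimal sketch of the two ways to run it with elevated permissions (the config file path here is just a placeholder, not my actual path):

# switch to root, then start Thin from its config file
sudo su -
thin start -C /etc/thin/production.yml

# or in one step
sudo thin start -C /etc/thin/production.yml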
We're working with GitLab 8.16.4 and I want to upgrade it, but since backups are not compatible between versions, I want to make sure everything is OK first.
I've been trying to follow the recovery process in a VM (VirtualBox) and in a few Docker containers; it doesn't give any error, but it doesn't work either (I opened an issue there too).
I don't know what to check, what I'm doing wrong, or whether I need to do anything specific on the GitLab server (the backup job runs: gitlab-rake gitlab:backup:create SKIP=db,uploads).
Any ideas?
Note regarding the VM: it was created in the past to hold a backup of GitLab, and the restore did work months ago; I'm not sure what's going on now.
Edit, days later: I installed the whole thing on a physical server with Ubuntu 16.04 and it still doesn't work. What am I doing wrong?
Managed to make it work!!
Looks like the problem was in the way I was making the backup: the original job used SKIP=db,uploads, so the database was never included in the backup in the first place.
After doing it with:
gitlab-rake gitlab:backup:create RAILS_ENV=production
The restore worked fine with:
gitlab-rake gitlab:backup:restore BACKUP=<filename>
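For completeness, roughly the full sequence on an Omnibus install (a sketch; the timestamp is a placeholder and paths may differ on your setup):

# place the backup file where GitLab expects it (Omnibus default)
cp <timestamp>_gitlab_backup.tar /var/opt/gitlab/backups/

# stop the services that write to the database
gitlab-ctl stop unicorn
gitlab-ctl stop sidekiq

# restore, then bring everything back up and sanity-check
gitlab-rake gitlab:backup:restore BACKUP=<timestamp>
gitlab-ctl start
gitlab-rake gitlab:check SANITIZE=true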
I have hosted a Rails application on AWS. Every time I want to access my website, I have to go through some steps that are quite repetitive.
1. ssh -i <my-keypair.pem> ec2-user@<AWS-IPv4-public-IP>
2. rails s -p <port> -b 0.0.0.0
After some time, I also get this error
'packet_write_wait: Connection to <AWS-IPv4-public-IP> port 22: Broken pipe'
I did some research and can't seem to find a way to keep my application running 24/7 without having to repeat these steps every time before accessing it.
My AWS instance is on 24/7, so the website should run 24/7 as well.
Would assign an elastic IP to my instance help?
Appreciate any guidance.
EDIT: I followed this tutorial initially https://www.youtube.com/watch?v=jFBbcleSPoY and that is where I found the steps mentioned above.
There are many ways to run the Rails server as a daemon. If you google "rails server as daemon", you will find many guides; I have not added any links here because most of the good ones are from hosting service providers.
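As a quick illustration, the simplest option is the server's built-in daemon flag (just a sketch; the port is an example, and you would still want an init script or systemd unit to start it on boot):

# -d detaches the server; the PID is written to tmp/pids/server.pid
rails s -d -e production -p 3000 -b 0.0.0.0

# to stop it later
kill $(cat tmp/pids/server.pid)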
If you still want to run the Rails server through the shell for some reason, tmux is the way to go. The following excerpt is shamelessly copied from the tmux wiki.
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal.
You can open a tmux terminal and start the Rails server. You can then detach from the tmux session and quit your SSH session. Whenever you wish, you can SSH back into your server and re-attach to the tmux session. Your Rails server will still be running as you left it. This is a great way to run a development server in the foreground for debugging.
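A typical round trip looks roughly like this (the session name and port are just examples):

ssh -i <my-keypair.pem> ec2-user@<AWS-IPv4-public-IP>
tmux new -s rails              # start a tmux session named "rails"
rails s -p 3000 -b 0.0.0.0     # run the server inside it
# press Ctrl-b then d to detach; the server keeps running
exit                           # the SSH session can now be closed safely

# later, from your machine
ssh -i <my-keypair.pem> ec2-user@<AWS-IPv4-public-IP>
tmux attach -t rails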
Resolved the issue with https://mosh.org/, for anyone who stumbles on this post in the future.
Download and install mosh (mobile shell)
Run the modified version of the command mentioned in my original question
mosh --ssh="ssh -i <your-keypair.pem>" ec2-user@<AWS-Instance-IP>
This resolved my packet_write_wait issues and I don't have to keep restarting the rails server.
The Integrity app is working fine for me in my OS X dev environment. I've deployed an instance to an Ubuntu server for my production setup, and I'm able to set up a new project. But when I trigger a manual build to test things, the build record is created and the build never runs.
I've added a bunch of logging to my application and have been able to track the point of failure to where the build job is added in ThreadPool#add. It appears everything runs fine up to the point where the job is added to the build pool, but the pool isn't actually running anything, despite being spawned and no exceptions being raised.
The environment I'm running is Ubuntu 11.04, RVM & Ruby 1.9.2-p290, Passenger / Apache, and running Integrity from master w/Sqlite3 and ThreadedBuilder.
UPDATE:
I found an article indicating this may be an issue with Apache & Passenger not loading the Ruby environment properly. This appears to be the case, since in dev I'm just running bundle exec rackup, while in production I was trying to use Passenger. So on the production machine I started an instance of Integrity using bundle exec rackup, which does indeed start running the builds, except that it didn't properly find the bundler gem as it should have. I'm sure I can track down a fix for that somehow.
So essentially the issue I'm having is with running Integrity under Passenger rather than rackup. The article that pointed me in this direction suggested getting Ruby into the Apache environment, but that solution didn't work for me. Can anyone help me figure out how to properly run Integrity with Passenger?
The issue was in the way Passenger handles threading. By switching from the ThreadedBuilder to the DelayedBuilder, which uses DelayedJob for builds, I was able to use Passenger as the web server.
I am building an application that will only be run on a local network and am looking for the best way to restart my server from within the application itself. For the time being this is only running on Windows using WEBrick.
Look at Capistrano, as others have suggested; it's fantastic :)
$ cap deploy
That's all you have to do. It'll grab the latest source from your git/SVN repo (lots more are supported, of course), deploy it, and restart your app server.
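If you haven't set it up yet, the bootstrap is roughly this (a sketch; the exact commands depend on which Capistrano version you install):

gem install capistrano

# Capistrano 2: generate Capfile and config/deploy.rb
capify .

# Capistrano 3: the equivalent generator
cap install

# edit config/deploy.rb with your repository and server details, then
cap deploy              # Capistrano 2
cap production deploy   # Capistrano 3 (deploys the "production" stage)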
I'm trying to deploy a Redmine application in production. I heard that Thin is the fastest Ruby on Rails web server, so I installed it. Now I have a really simple problem: I must start it via cmd every time I reboot the machine, because there isn't a prebuilt Windows service or anything similar that would let me autostart it. How can I fix this? I saw that there is a bat file, so I tried to make a C# Windows service like this, and it starts correctly, but if I stop it the service stops while the web server stays active and never shuts down. The only way to stop Thin is to reboot the machine. Maybe I'm doing it wrong; could someone post an example of how I should run Thin as a Windows service?
I wrote a blog post about this a while ago, but most of it should still be applicable. Hope it helps.
But to be honest, I always deploy on Windows using the mongrel_service gem and configure Apache in front to load-balance between three Mongrels. Much easier.
Also, the big advantage for me was that if something went wrong with the Thin service, it didn't restart automatically, while mongrel_service guards your Mongrel process and, if it goes down for whatever reason, restarts it. For me that was something I could not do without.
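If it helps, the basic mongrel_service setup looks roughly like this (the service name, path and port are placeholders, not a definitive configuration):

gem install mongrel_service

# register the Rails app as a Windows service
mongrel_rails service::install -N myapp -c c:\rails\myapp -p 4000 -e production

# manage it like any other Windows service
net start myapp
net stop myapp

# remove the service if you no longer need it
mongrel_rails service::remove -N myapp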