Many problems with deploying a Rails application to Elastic Beanstalk

I'm at a breaking point trying to get a Rails application deployed to EB. I cannot use Heroku for dependency reasons, so I'm trying out AWS. The Ruby/Rails tutorials online are all very clear about setting up an environment, but I keep hitting challenges, and at this point I'm starting to think it's Amazon's service and not my configuration.
Let's say I start with something very simple. I run
rails new
I start off with a barebones Rails app, and I add the necessary routes and an index.html.erb file to say hello world. I run
git init && git add . && git commit -m "hello world"
eb init
I run through the necessary steps. I've tried creating both 32- and 64-bit Ubuntu instances with Ruby 1.9.3 (which is what my local environment is set up with), I don't set up RDS, and then I run
eb start
which prompts me to deploy my latest git commit. I say yes, and it deploys!
Good news? Not so much. Yes, the URL EB gives me does say Hello World, but if at ANY point I try to deploy new code, let's say a change to the Gemfile, and there is an error building the Gemfile, the environment completely locks me out, to the point where:
I cannot access any logs, and if I try, the environment goes into a grey state and reboots
I cannot redeploy any previous git commit; the environment just spends 10 minutes and times out
I'm getting frustrated having to rebuild an ENTIRE environment every time there is a slight error in the code.
In general, I'm looking for an alternative to Heroku from which I can deploy changes from the command line. I don't think my question is phrased well enough for anyone to spot exactly what I'm doing wrong, if I'm even doing something wrong. If there are any best practices for deploying to EB, please let me know. Thanks!

What size instance are you using? I've found that trying to use a micro instance doesn't work as it runs out of memory when building any native extensions.
Try using a t2.small instance at a minimum.
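If you want the instance type pinned in version control rather than set by hand in the console, an .ebextensions config file is one way to do it. This is only a sketch; the file name is arbitrary, but the namespace and option name are the standard Elastic Beanstalk ones for the launch configuration:

# .ebextensions/instance.config  (file name is just an example)
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: InstanceType
    value: t2.small

Commit that file alongside the app, and subsequent deploys should launch on the larger instance type.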

Related

Digital Ocean clone droplet with Rails 5 app deployed with Capistrano

I lost access to a droplet where I am running a Rails 5 app that I deployed with Capistrano. The stack of technologies I am using is:
- Ruby 2.3.0 (RVM 2.9.1)
- Rails 5.0.1
- Puma
- Capistrano 3.7.2 (the first time I deployed the app I used 3.6.0, but I was getting an error and decided to upgrade)
- PostgreSQL
- Nginx
I was able to take a snapshot, recreate the server, and get access again. However, I am not able to make the app work again.
First things first, these are the steps I took:
Take a snapshot of the server
Create a droplet based on the snapshot I took before
Set up access to the server (the user I used for deployment is there and I didn't need to do anything)
While I was trying to deploy with Capistrano I was getting an error that I didn't have permission to access a folder, or that the folder didn't exist (it turned out the underlying problem was not enough memory), and I solved it by adding swap memory.
Then I was getting an error that there was another puma.sock instance (or something like that), which I solved by deleting the files from /apps/myapp/shared/tmp
Now it seems that when I try to deploy the app, it does not have access to the database (the database is there with all the data)
Has anyone done something similar? Is there a more magical/easy way?
Finally I was able to solve it. The problem was that I needed to add this line of config to my deploy.rb file:
set :linked_dirs, %w{tmp/pids tmp/sockets log}
The lack of that line meant Puma could not start on deploy. The message was something like this:
Socket 'already in use'
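Since the symptom in the question was also that the redeployed app could not reach the database, it may be worth linking shared config files as well, not just directories. A sketch of the relevant deploy.rb lines, assuming the credentials live in shared/config/database.yml on the server (the secrets.yml entry is only an example and may not apply to your setup):

# config/deploy.rb (sketch)
# directories that persist across releases (Puma pids/sockets, logs)
set :linked_dirs, %w{tmp/pids tmp/sockets log}
# files that persist across releases; they must already exist under shared/ on the server
set :linked_files, %w{config/database.yml config/secrets.yml}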
As for "is there a more magical/easy way?": I would suggest using Heroku over DigitalOcean if you don't have a lot of experience with Linux server configuration. It's a lot more "magical and easy", and it's free for basic use.
If you choose to keep using DigitalOcean, I would make sure Rails is bound to the correct IP address/socket; take a look at this guide: https://www.digitalocean.com/community/tutorials/deploying-a-rails-app-on-ubuntu-14-04-with-capistrano-nginx-and-puma
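For reference, the setup in that guide has Puma bind to a Unix socket that Nginx proxies to, so "the correct IP address" usually means the socket path matching the Nginx upstream. A minimal sketch of a config/puma.rb along those lines (the paths are placeholders, not taken from the question):

# config/puma.rb (sketch)
app_dir = "/home/deploy/apps/myapp"   # placeholder path
shared_dir = "#{app_dir}/shared"

# bind to a Unix socket; the Nginx upstream must point at the same path
bind "unix://#{shared_dir}/tmp/sockets/puma.sock"
pidfile "#{shared_dir}/tmp/pids/puma.pid"
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true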

Difference between "Redmine on Heroku" and "Installing Redmine" documentation

My intention is to install Redmine on Heroku.
On redmine.org, there are two docs that I came across:
http://www.redmine.org/projects/redmine/wiki/RedmineInstall
http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_on_Heroku
I know the second doc is self-explanatory by its title, but I want to know whether, if I follow the first doc's instructions, I would still be able to deploy Redmine to Heroku, or whether it is better to follow the second doc's instructions.
I'm a noob at this; any feedback would be appreciated. Thanks in advance.
Edit: Following the Heroku-specific instructions (step 5), I tried to run rake generate_secret_token from the Ruby command prompt, but I get back 'Please configure your config/database.yml first'. There are two related files in two different locations: C:\Users\\redmine\config and C:\Users\\Desktop\\redmine-2.5.1\config. Which database.yml do I use? The config folder on the second path only has database.yml.example. Do I make the change there and save it as 'database.yml'? Or do I make the change in the first location, cd to it, and run rake generate_secret_token there?
Follow the Heroku-specific instructions. The first document explains how to install Redmine on a server that you have full (root) control over. It entails logging into the machine and running various commands to install software.
Heroku does not give you a full server instance in the way the first document requires. Instead, you work on your application on your own machine and push it from there to Heroku. Things like database setup are configured through Heroku addons. You do not get access to the filesystem, which is in fact read-only.
The first document wouldn’t work for installing on Heroku.

Not able to push my code into AWS EB

I have been facing an issue pushing my Ruby on Rails code to an AWS Elastic Beanstalk server. The first time, I was able to initialize EB, commit and push the code, and run the EB server. Everything was fine, but after a few commits it suddenly started raising the following exception:
remote: error: Unable to create application version: You cannot have more than 500 Application Versions. Either remove some Application Versions or request a limit increase.
I am not able to figure out what to do about it. Can anybody help me get to a solution?
Thanks in advance.
The error suggests you've pushed a very large number of builds to the Elastic Beanstalk application. Try going into your AWS Console, go to Elastic Beanstalk, and from the Actions button for your application select View Application Versions.
Most probably you'll find 500 different versions of your application there. Select as many old ones as you wish and Delete them. Then you should be able to continue.
(Of course, if my hunch is correct, the more interesting question is how on earth you've managed to upload 500 different versions of your application. I'm not running Ruby on Rails, so I'm not too familiar with that environment...)
Good luck!
Use eb labs cleanup-versions --num-to-leave=some_value to keep only the last "some_value" application versions, and eb labs cleanup-versions --help to get a full list of available options. Note that eb labs is an experimental branch and its syntax may differ between eb CLI versions.
After deleting the older application versions, git aws.push started working again. It would be nice if the version-limit error were returned by git aws.push, as that would have saved a lot of time.
So follow the steps below to resolve this issue:
1. Log in to the console
2. Go to Elastic Beanstalk and select your application and environment
3. Find the "Upload and deploy" button below the text "Running version" and click it
4. To deploy a previous version, go to the Application Versions page
5. Select your last commit and delete that version label
6. Try to deploy the new version again
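If you would rather clear out old versions from the command line than click through the console, the AWS CLI can do it one version at a time. A sketch (the application name and version label are placeholders):

# list the application versions currently stored for the app
aws elasticbeanstalk describe-application-versions --application-name my-app
# delete one old version, including its source bundle in S3
aws elasticbeanstalk delete-application-version --application-name my-app --version-label old-version-label --delete-source-bundle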

Capistrano 3 not updating the releases

I am using Capistrano 3 to deploy my app to the production server.
My server has a system-wide install of RVM. There is nothing extraordinary about the deploy script.
However, when I run cap production deploy, the deploy script prints successful messages and it seems the deploy went through without a problem.
But when I check the server, the latest release folder is not updated and only the repo folder is updated.
This was much easier with Capistrano 2. The respective commands to create symlinks etc. are all shown as passing in the console log while deploying, while on the server nothing is actually done.
Am I missing something about the Capistrano 3 changes?
Ask if you need more information.
Capistrano 3 changed the symlink task. If you overrode it or called it explicitly, e.g. as deploy:create_symlink, you may want to audit your code.
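For reference, the Capistrano 2 task deploy:create_symlink was replaced in Capistrano 3 by the deploy:symlink:* tasks, so anything hooked to the old name silently stops running. A sketch of how a custom hook might be re-attached under Capistrano 3 (the task name and body here are hypothetical):

# config/deploy.rb (sketch)
namespace :deploy do
  # hypothetical custom task that used to hook into deploy:create_symlink in Capistrano 2
  task :notify_symlinked do
    on roles(:app) do
      info "current/ now points at #{release_path}"
    end
  end
end

# in Capistrano 3, hook it onto the new symlink task instead
after "deploy:symlink:release", "deploy:notify_symlinked"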

Want to develop rails site offline then move to server

Is there an issue with developing my site on my MacBook and then moving it to a server when done? Are there issues I need to plan ahead for, maybe DB- or Ruby-related? Dependencies, or something a server could have that differs from my dev environment and could cause a nightmare later? I'd rather develop it offline, since it'd be faster and wouldn't require an internet connection, but in the past I've always done everything on live sites, so this would be a first, and I'm new to Ruby on Rails.
Developing locally and then deploying to your server(s) via something like Capistrano is standard practice.
It's a good idea to keep your development environment as close as possible to your production environment (Ruby versions, database versions, etc.). Bundler makes keeping your gems in sync easy.
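One concrete, hedged example of that: Bundler lets you pin the Ruby version alongside your gems in the Gemfile, so a mismatch between your laptop and the server fails loudly at bundle install time (the versions below are placeholders):

# Gemfile (sketch)
source "https://rubygems.org"

ruby "1.9.3"              # placeholder: pin whatever you run locally
gem "rails", "~> 3.2.0"   # placeholder version
gem "pg"                  # use the same database adapter in development and production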
I used Heroku for some projects. The deployment was as easy as it could be: I just did a git push and it worked without problems... I really like Bundler and rake :-)
Your question embodies THE way to develop in Rails. Your development environment is an offline representation of what your production site will be.
A quick workflow analysis for you could be:
rails new ~/my_app -d postgresql; cd ~/my_app; rm public/index.html
Next, create the database:
bundle exec rake db:create:all
Now that you have the db and app set up, let's set up your main pages:
bundle exec rails generate controller Site index about_us contact_us
Now you'll have something to see on the site, so run:
bundle exec rails server
This server acts as your offline connection and will handle the rendering of any text, images, HTML, etc. that you want to serve in your Rails app. Now you can join in the debates over TDD, TATFT or JITT, RSpec vs Test::Unit. Welcome.
Developing locally is definitely the way to go. However, I would look into getting it on production as soon as possible and pushing often. This way you can see changes happen as you make them and are aware of any possible breaking changes.
I use Heroku a lot, and when I start a new project I push it to Heroku almost immediately. While developing, I can publish new changes simply with git push heroku master. Everyone has to find their own workflow, but this has always worked well for me.
If you are interested in Heroku here is a good link to get you started:
https://devcenter.heroku.com/articles/rails3
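A sketch of what that push-early-and-often Heroku workflow looks like from the command line (the app name is whatever heroku create generates or you choose):

# one-time setup: create the Heroku app and add it as a git remote
heroku create
# deploy the current master branch
git push heroku master
# run migrations against the Heroku database, then open the app in a browser
heroku run rake db:migrate
heroku open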
