Speeding up notifications for autotest on Ubuntu - ruby-on-rails

I've figured out how to run Guard on my Ubuntu desktop to use while doing Ruby on Rails development. The notifications are quite useful, but they lag far behind Guard, so after running a test I can spend two minutes waiting for the notifications to catch up to the end.
It's not absolutely necessary, but it is rather annoying, and the notifications would be even more useful if they displayed only the final test results (for example, "20 examples, 2 failures") instead of the result of each individual test.
Is there any way to accomplish this?

I had the same issue. Here is the solution:
https://askubuntu.com/questions/128474/how-to-customize-on-screen-notifications
They say there is no official way to configure the notifications, but there is an unofficial patch that allows fairly extensive configuration, including reducing how long each notification stays on screen.
Also worth noting: I had an issue where the same Guard notifications appeared several times. It turned out to be caused by editor backup files ending in ~. I solved it by adding
ignore /~$/
to the top of the Guardfile. I found the information about it here:
https://github.com/guard/listen/issues/153
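For reference, a Guardfile with the fix in place might look roughly like this (a sketch assuming the guard-rspec plugin; the watch patterns are just the usual defaults):

# Guardfile -- a minimal sketch; the ignore pattern keeps editor backup
# files (e.g. foo.rb~) from triggering duplicate events and notifications.
ignore /~$/

guard 'rspec' do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
  watch('spec/spec_helper.rb') { 'spec' }
end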

Related

How to push to Heroku without putting the app down

Just wondering how everyone pushes updates to their production server on Heroku without bringing the app down for a few seconds.
Pushing to Heroku (especially when using something like Unicorn) takes a while for the web app to load, and end-users trying to access the site in the meantime end up with 503 pages. It can take anywhere from 30 seconds to a minute for the Unicorn processes to load.
There are TWO things you need to accomplish this, and it's not trivial.
1) Migrations need to be backwards compatible (i.e., run while the app is live). See this article about that: http://pedro.herokuapp.com/past/2011/7/13/rails_migrations_with_no_downtime/
2) Deploy using TWO heroku apps. I opened a ticket with Heroku on this topic and this was their reply:
We're currently working on a solution to provide a zero-downtime
deploy but do not have an ETA on when this might be available.
In the meantime a workaround that might be possible would be to deploy
to two separate apps. You can push new code to the second app, spin it
up, then move the domain names over to the second app. Wash and repeat
on the next deploy. This is less than ideal but might get you the
desired result in the interim.
If you do this, you will want to automate as much as possible since there's a lot of ways to mess that up. Here's an article on that topic: http://casperfabricius.com/site/2009/09/20/manage-and-rollback-heroku-deployments-capistrano-style/
Why?
Both parts are necessary because the database migrations must work with BOTH versions of the code (the live one and the to-be-live one). Once you have that working, THEN you can solve the second problem of making the application itself not appear to go down. There is no supported way to spin individual dynos up and down once a push has started.
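To make the migration point concrete, here is a hedged sketch of how a column rename is usually split so that both the live and the to-be-live code keep working (the table and column names are made up):

# Deploy 1: add the new column and backfill it; the old code keeps using
# :login while the new code reads :username.
class AddUsernameToUsers < ActiveRecord::Migration
  def self.up
    add_column :users, :username, :string
    execute "UPDATE users SET username = login"
  end

  def self.down
    remove_column :users, :username
  end
end
# Deploy 2 (only after the new code is live everywhere): a second migration
# that drops the old :login column.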
UPDATE:
There is now a beta feature available on Heroku. To use it, run the following before pushing:
heroku labs:enable -a APP_NAME preboot
This changes the behavior of the app during pushes: Heroku spins up a parallel instance that warms up, and almost exactly two minutes after the push all traffic is routed to the new app. Be careful with migrations, as mentioned above, since they are still an issue.
Heroku is currently testing their new preboot feature in beta. You might want to check it out. Unfortunately it only works with two or more web dynos, and it also doesn't seem to work with heroku scale web=…, which would be important to make it work with HireFireApp.com.

Is spork worth the hassle?

I have spent hours and hours trying to configure spork so that it works for RSpec, works for Cucumber, reloads models so that it doesn't have to be restarted all the time and doesn't throw errors.
I've spent so much time researching solutions to its quirks that I might as well have just waited for the regular tests to load. On top of that, it has the annoying characteristic that when I'm debugging, I type commands into the terminal window I called RSpec from, but the output gets displayed in the terminal window Spork is running in. Eesh.
I'm hugely appreciative of any piece of software produced to help others, and of the Spork project, but I just can't figure out whether it's worth labouring through further.
EDIT
YES - SPORK IS DEFINITELY WORTH THE EFFORT. After 4 days of setup I finally managed to sort out all of the issues, and it has sped up my testing incredibly. I thoroughly recommend it.
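For anyone setting it up, the basic shape of a Spork-aware spec/spec_helper.rb is roughly the following (a sketch; the exact contents depend on your Rails and RSpec versions):

require 'rubygems'
require 'spork'

Spork.prefork do
  # Loaded once when Spork boots: keep the slow, rarely-changing setup here.
  ENV['RAILS_ENV'] ||= 'test'
  require File.expand_path('../../config/environment', __FILE__)
  require 'rspec/rails'

  RSpec.configure do |config|
    config.use_transactional_fixtures = true
  end
end

Spork.each_run do
  # Runs before every test run: anything that must pick up code changes
  # (factories, support files, etc.) belongs here.
end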
I found out that Spork seems to work mostly OK if you follow the TDD/BDD pattern - that is, you write your test first, let it fail, and only then write the code. However, I don't always work this way - there are many situations where I need to write the code before writing the tests.
Fortunately, I found a nearly ideal solution to my testing needs - the Spin gem. It doesn't force you into any particular workflow, and it just works.
Give my CoreApp a go - it's a complete config of RSpec/Spork/Guard/Cucumber.
I find it worthwhile, considering it speeds up most tests; the disadvantage is that my tests then aren't engineered to be 'efficient' on their own. Some believe it's better to wait for the environment to load each time, but on my MBP it takes 10-15 seconds or more for the environment to reload.
https://github.com/bsodmike/CoreApp

Background processing in rails

This might seem like a FAQ on Stack Overflow, but my requirements are a little different. While I have previously used BackgroundRB and DJ for running background processes in Ruby, my requirement this time is to run some heavy analytics and mathematical computations on a huge set of data, and I only need to do this during roughly the first 15 days of the month. Given that, I am tempted to use cron and run a Ruby script to accomplish this goal.
What I would like to know / understand is:
1 - Is using cron a good idea? (I'm not a system admin, so while I have a basic idea of cron, I'm not overly confident of getting it exactly right.)
2 - Can we somehow modify DJ to run only on the first 15 days of the month (with or without cron), and then just stop and exit once all the jobs in the queue for the day are done? (I don't want it polling the DB for new jobs; whatever jobs are in the queue when DJ starts, that will be all.)
I'm not sure if I have put the question in the right manner, but any help in this direction will be much appreciated.
Thanks
With cron's "minute hour day month dayofweek" time specification, 3:33am on the 1st through the 15th of every month would be "33 3 1-15 * *".
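If you pair that schedule with Delayed Job just for queuing, the cron entry can call a small one-shot runner along these lines (a sketch; the file name and the 15-day guard are illustrative, and it assumes delayed_job's Delayed::Worker#work_off, which works off a batch of pending jobs and returns success/failure counts):

# script/drain_jobs.rb
require File.expand_path('../../config/environment', __FILE__)

if Date.today.day <= 15
  worker = Delayed::Worker.new
  loop do
    successes, failures = worker.work_off
    break if successes.zero? && failures.zero?  # nothing left in the queue
  end
end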
Using cron would be really easy; there are plenty of examples out there and it is reliable.
Anyway, here are a few screencasts from Railscasts you may want to look at:
Starling and Workling
Custom Daemon
Yeah, why not? Go with cron. It's really well tested in the wild, well suited to running periodic tasks, and incredibly easy to use. You don't even need to learn the crontab syntax (although it's very easy) - just drop your script into /etc/cron.daily (this option might be available only on some Linux distros).
I'm not sure about the "only the first fifteen days of the month" part, but you can easily handle that condition inside your task, right?
EDIT:
Check out par's answer to see how to run the task only at a certain range of days.
I also had this requirement. I followed the "Automatic Periodic Tasks" recipe (number 75) in the Advanced Rails Recipes book, written by David Bock. It has some code snippets and guidelines on how this can be achieved using cron and Capistrano. However, there is an unsolved (but mentioned) issue regarding users/permissions on the target machine. It is not really difficult to get right; you just have to remember to do it and put it in your Capistrano deployment scripts.
It seems that David Bock has continued to work on this and has now created a gem for use with cron: see his blog, and follow crondonkulous on GitHub. Crondonkulous may very well take care of the user/permission issue and more, but I haven't tried it.
Jarl

Automated testing with Ruby on Rails - best practices

Curious what you folks are doing as far as automating your unit tests with Ruby on Rails. Do you create a script that runs a rake job in cron and have it mail you the results? A pre-commit hook in git? Just manual invocation? I understand tests completely, but I'm wondering what the best practices are for catching errors before they happen. Let's take for granted that the tests themselves are flawless and work as they should. What's the next step to make sure any potentially detrimental results reach you at the right time?
Not sure exactly what you want to hear, but there are a couple of levels of automated codebase control:
While working on a feature, you can use something like autotest to get instant feedback on what's working and what's not.
To make sure that your commits really don't break anything, use a continuous integration server like cruisecontrolrb or Integrity (you can bind these to post-commit hooks in your SCM system; a minimal pre-commit hook that runs the suite locally is sketched after this list).
Use some kind of exception notification system to catch all the unexpected errors that might pop up in production.
To get a more general view of what happened (what the user was doing when the exception occurred) you can use something like Rackamole.
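Since the question mentions pre-commit hooks: a bare-bones hook that runs the suite before every commit could look like this (a sketch; drop it in .git/hooks/pre-commit, make it executable, and adjust the test command to whatever your project uses):

#!/usr/bin/env ruby
# .git/hooks/pre-commit -- aborts the commit if the spec suite fails.
unless system('bundle exec rspec')
  warn 'Specs failed; commit aborted.'
  exit 1
end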
Hope that helps.
If you are developing with a team, the best practice is to set up a continuous integration server. To start, you can run this on any developer's machine, but in general it's nice to have a dedicated box so that it's always up, is fast, and doesn't disturb a developer. You can usually start out with someone's old desktop, but at some point you may want it to be one of the faster machines so that you get immediate feedback from the tests.
I've used CruiseControl, Bamboo and TeamCity and they all work fine. In general, the less you pay, the more time you'll spend setting it up. I got lucky and did a full Bamboo setup in less than an hour (once) -- expect to spend at least a couple of hours the first time through.
Most of these tools will notify you in some way. The baseline is an email, but many offer IM, IRC, RSS, SMS (among others).

Rails app stuck

I am running a rails app on Dreamhost.
Today, a strange thing happened.
A page almost loaded (it seemed to be fully loaded, but the status was not 'Done'), and after that the app didn't respond on any page.
I checked the log, and even the log was not complete.
How do I know that?
There are 3 missing images on the problem page, and the log showed only 2 of them and then stopped.
So I guess something happened between the 2nd and the 3rd missing image.
I couldn't even start 'script/console production'.
After 14 minutes, it began to behave normally.
I asked the hosting company and they said that the process was killed due to over-use of memory.
Probably something was running heavily during the period.
The same thing happened one more time.
I had to kill the process to unlock the stuck app.
Passenger version is 2.2.4 and rails version is 2.3.2.
I am afraid that I can't give more specific info.
What do you guess caused such a problem?
Thanks.
Sam
As theIV stated, look at the last action called. Start the app up locally and try to go through what was happening on the server to see if it's reproducible, or if you just get general hiccups. I've run Rails apps on Dreamhost for a while and have not experienced this before, so I would guess that it's not Dreamhost's fault, but there is no 100% on that.
Good luck!
This sounds pretty app-specific. I would start by looking at what action was last hit before the process started hoggin', and then work backwards from there to see if there are any calls that might be doing something you weren't expecting. Other than that, no clue. :(
Try using NewRelic RPM or TuneUp Lite to see what is consuming most of your memory. You can run them locally, but it would be better to test it on production.
