My problem is that we're receiving mysterious 504 errors in SolanoCI, and I'm trying to find a clean way to handle them only in the test environment without adding test-specific code to our ApplicationController.
I've tried stubbing a default handler via the test setup like so:
allow(ApplicationController).to receive(:timeout) {
  # do a bunch of stuff to try to pull out request information
}
but I'm finding that it lacks some important information that should be accessible, like the original request URL and any stack trace information.
Any suggestions are welcome.
I think the easiest way for you in this situation would be to ask Solano to open a debugging session on their box for you (they give you SSH access to their server, so you can run the tests yourself).
This is a real time saver in black-box situations like this one. You can run the tests multiple times and use all sorts of debugging tools there. You may have to install those tools first, since Solano tries not to install extras from your Gemfile (such as pry) to minimize test run time.
Just write to them via the contact form and they will give you access. Don't forget to stop the session once you're done with it, since the time it runs is deducted from the worker hours in your plan.
Hope it helps.
EDIT: It seems RSpec's anonymous controller feature is a perfect fit for your case:
Use the controller method to define an anonymous controller that will
inherit from the described class. This is useful for specifying
behavior like global error handling.
So, you can do something like this:
RSpec.describe ApplicationController, :type => :controller do
  controller(ApplicationController) do
    rescue_from Timeout::Error, :with => :log_timeout

    # a dummy action that simulates the timeout
    def index
      raise Timeout::Error
    end

    private

    def log_timeout(error)
      # log the error as you wish, then respond with 504
      head :gateway_timeout
    end
  end

  describe "handling timeout exceptions" do
    it "writes some logs" do
      get :index
      expect(response).to have_http_status(504)
    end
  end
end
I was wondering if there is a script that can take an existing codebase and generate unit tests for each method in the controllers. By default they would all pass, since they would be empty, and I could remove the tests for methods I don't feel are important.
This would save a huge amount of time and increase test coverage, since I'd only have to define what each method should output, not the boilerplate that needs to be written.
You really shouldn't be doing this. Creating pointless tests is technical debt that you don't want. Take some time, go through each controller and write a test (or preferably a few) for each method. You'll thank yourself in the long run.
You can then also use test coverage tools to see which bits still need testing.
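For example, SimpleCov is a common choice; a minimal sketch (the 'rails' profile is SimpleCov's built-in Rails preset):

# Gemfile
group :test do
  gem 'simplecov', :require => false
end

# At the very top of test/test_helper.rb or spec/spec_helper.rb,
# before any application code is loaded:
require 'simplecov'
SimpleCov.start 'rails'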
You can use shared tests to avoid repetition. For example, with RSpec you could add the following to your spec_helper/rails_helper:
def should_be_ok(action)
  it "should respond with ok" do
    get action.to_sym
    expect(response).to be_success
  end
end
Then in your controller spec:
describe UserController do
  should_be_ok(:index)
  should_be_ok(:new)
end
I have been trying to use the Rails profiling tools. I am using a very simple example taken from the docs at http://guides.rubyonrails.org/performance_testing.html that looks like this
require 'test_helper'
require 'rails/performance_test_help'

# Profiling results for each test method are written to tmp/performance.
class BrowsingTest < ActionDispatch::PerformanceTest
  def test_homepage
    get '/'
  end
end
I then run the test using
rake test:profile
but it crashes with the following error
Error during failsafe response: undefined method `controller_name' for nil:NilClass
I suspect that the problem is that the app serves multiple domains, so simply using get '/' is not enough information to resolve the URL back to a controller/action - it needs a host as well. However, the usual ways of specifying a host (@host, @request.host, default_url_options[:host]) either don't work or cause another error (e.g. @request is nil).
I have also tried entering the full URL. I have the different hosts defined as constants in the test environment, so this looked something like:
get "http://#{HOST_1}/"
In this case the rake task completed successfully but no profiling information appeared on the command line and no files were generated.
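I also wondered whether the host! helper that integration tests provide would apply here - something like the sketch below - but I haven't confirmed that PerformanceTest picks it up:

require 'test_helper'
require 'rails/performance_test_help'

class BrowsingTest < ActionDispatch::PerformanceTest
  # HOST_1 is one of the host constants defined in my test environment
  setup { host! HOST_1 }

  def test_homepage
    get '/'
  end
end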
I haven't really used the profiling tools in Rails much so I am hoping I am missing something obvious. Any pointers would be much appreciated.
Cheers
I'd like to initialize the database once per test run, rather than before every test.
I know RSpec has before(:all), but I haven't been able to get that working. I was wondering if Rails had something similar.
Firstly: there used to be a before(:all) equivalent in Test::Unit, but it was removed (I don't know why).
Secondly: there are very good reasons not to do what you are trying to do - tests are meant to run independently of one another, not to rely on state already in the db. That way you can guarantee each test is testing exactly what you expect it to test.
If you have one test that changes the state of the db, and you move it so it runs after another test that expects the db to be in a different state - you run into problems. Thus, all tests must be independent.
Thus: the db is rolled back to its pristine state and re-seeded every time.
If you really want some state that the db is always in - then set it up in the fixtures... and just realise that the db will be re-loaded for each test.
If you are having trouble with load times... then consider some other way around the problem - e.g. don't use huge numbers of fixtures; instead use factories to create only the data that each individual test needs.
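For instance, with the classic factory_girl syntax (the same style as the Factory(:user) calls further down), each test builds only what it needs; the attribute values here are made up:

# test/factories.rb
Factory.define :user do |u|
  u.name  "Alice"
  u.email "alice@example.com"
end

# An individual test creates one user, just for itself:
test "greets the user by name" do
  user = Factory(:user)
  assert_equal "Alice", user.name
end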
If there's some other reason... let us know - we may have a solution for it.
Edit: if you really need it, I actually wrote a monkey patch for this a long while back:
"faking startup and shutdown"
Anything that should run once, before everything else, just goes at the top of the class:
require 'test_helper'

class ObjectTest < ActiveSupport::TestCase
  call_rake("db:bootstrap RAILS_ENV=test")

  # set up our user for doing all our tests (this person is very busy)
  @user = Factory(:user)
  @account = Factory(:account)
  @user.account = @account
  @user.save

  # make sure our user and account got created
  puts "||||||||||||||||||||||||||||||||||||||||||||||"
  puts "| proposal_test.rb"
  puts "| #{@user.name}"
  puts "| #{@user.account.name}"
  puts "||||||||||||||||||||||||||||||||||||||||||||||"
end
I have generated some scaffolding for my Rails app.
I am running the generated tests and they are failing.
For example:
test "should create area" do
assert_difference('Area.count') do
post :create, :area => { :name => 'area1' }
end
assert_redirected_to area_path(assigns(:area))
end
The test fails, saying:
1) Failure:
test_should_create_area(AreasControllerTest)
[/test/functional/areas_controller_test.rb:16]:
"Area.count" didn't change by 1.
<3> expected but was <2>.
There is only one field in the model: name. I am populating it, so the failure can't be because I'm leaving the only field blank.
I can run the site and create an area with the name 'area1'. So reality succeeds, but the test fails.
I can't ask why it's failing, because I'm sure there isn't enough information here for anyone to know why. I'm just stuck on what avenues to go down to work out why the test is failing. Even puts statements I add to the code don't print anything...
What steps can I take to track this down?
Per the update above, and matching what I expected you'd find when you dug into your logs, you have an authorization requirement that isn't being met in your test.
@request and @response are also useful objects to look at (e.g. puts @response inside your test). I don't know what authentication you are using, but check RAILS_ROOT/lib for authenticated_test_helper, or the /lib or /test directory of your authentication gem. You'll find methods for performing a login.
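For example, restful_authentication-era helpers typically look something like this (names are illustrative - check your own helper for the exact session key):

# test/test_helper.rb - typical login helper (illustrative)
def login_as(user_fixture)
  @request.session[:user_id] = user_fixture ? users(user_fixture).id : nil
end

# Then, in the failing test:
test "should create area" do
  login_as(:one)  # :one is a users fixture name, hypothetical here
  assert_difference('Area.count') do
    post :create, :area => { :name => 'area1' }
  end
end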
OK, I am writing performance tests and am having trouble getting my session to persist the way it does in integration tests. As I understand it, PerformanceTest is a child of IntegrationTest, so any integration test should also work as a performance test. However, when I take an integration test, copy it over to performance, change ActionController::IntegrationTest to ActionController::PerformanceTest and run the test, it fails.
I am using Authlogic and have not had a problem with integration test sessions sticking around. With the performance tests, though, it looks like the session gets created properly, but when I visit the "/reports" page (which is protected) it redirects me to the login page as if there were no user session at all.
require 'performance_test_help'

class SimpleTest < ActionController::PerformanceTest
  setup :activate_authlogic

  test "login" do
    assert user_session = UserSession.create!(User.find_by_login("admin"))
    get "/reports"
    assert_response :success
  end
end
What's going on here? I've tried multiple ways to get a user session (create, post, etc.) and nothing seems to work. This is the first time I've written performance tests, so I'm probably doing something stupid...
BTW: I am running Ruby 1.8.7, Rails 2.2.2 on Debian Squeeze.
You have to set up your performance tests like your integration tests.
Try logging in using post:
post "user_session", :user_session => {:login => "user", :password => "password"}
Not sure what is in your setup there, but you are missing require 'test_helper' as well. If activate_authlogic is defined there, or in an Authlogic test helper, you may have to make sure it's included.
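Putting that together, a version of your test that logs in through the HTTP stack might look like this (assuming a user_sessions resource routed at /user_session and valid credentials for the admin user):

require 'test_helper'
require 'performance_test_help'

class SimpleTest < ActionController::PerformanceTest
  setup :activate_authlogic

  test "login" do
    # log in through the stack so the session cookie is actually set
    post "/user_session", :user_session => { :login => "admin",
                                             :password => "password" }
    get "/reports"
    assert_response :success
  end
end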