STS IDE underlines domain fields - Grails

I have the following situation:
I have a domain class from which I had to remove a field, but the field is referenced in several different places and Grails does not catch this error during compilation.
The STS IDE does underline these references, but it would be totally impractical to sweep the entire application looking for the flagged errors.
Is there another way I can catch these errors?

1. Good test coverage is going to be your best way of making sure you've caught everything when you make changes like this.
2. Make sure you're using your IDE's refactoring functionality. It won't always catch everything, but it will help. Also, read #1.
3. Do a search in STS for the field and clean up; this is better than doing it manually. Also, read #1.
4. Read #1.

Load testing on ASP.NET web application using CapCal

I'm using CapCal to perform load testing on an ASP.NET WebForms web application.
When a new build is uploaded to the test environment, we need to record a new set of tests (I'm using Fiddler to record them), because otherwise VIEWSTATE errors are thrown.
The builds are not very different, and the same tests are run on each; we want to see whether we have performance improvements from one build to the next. We would like to use the same tests to assess performance under the same conditions on every build, and besides, the recording process is very time-consuming.
Is there a way in CapCal to set the VIEWSTATE as a variable (extract the viewstate from the page source, assign the extracted value to a variable) instead of a hard-coded value?
Unrelated problem: when a new set of tests is uploaded, the "+" sign in the URL is replaced with " " (whitespace),
i.e. "/index.aspx?WebSiteRedirect=true&host=DateTime=2013-01-15+05%3a43%3a01" becomes "/index.aspx?WebSiteRedirect=true&host=DateTime=2013-01-15 05%3a43%3a01". Is there an option in CapCal to avoid this problem?
What you're looking for is commonly referred to as automatic test configuration or automatic variable correlation. I'm not familiar with CapCal; perhaps searching its help for "correlation" or "dynamic" will turn something up. If CapCal can't do that for you, then you may want to look for a tool that can, because manual configuration of fields like __VIEWSTATE can be very time-consuming. Many .NET apps have other fields that need this treatment as well; I don't recall them all at the moment, but __EVENTARGUMENT, __EVENTTARGET and __EVENTVALIDATION come to mind.
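I don't know CapCal's scripting surface, but mechanically, correlation amounts to the following (a Ruby sketch against a hypothetical test URL; the form field and button names are illustrative, not from CapCal):

require 'net/http'
require 'uri'

uri = URI('http://test-env.example.com/index.aspx')    # hypothetical test URL
page = Net::HTTP.get(uri)

# Pull the current token out of the hidden field instead of hard-coding it.
viewstate = page[/id="__VIEWSTATE" value="([^"]*)"/, 1]

# Re-submit the extracted value along with the recorded form data.
Net::HTTP.post_form(uri, '__VIEWSTATE' => viewstate, 'ctl00$Button1' => 'Go')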
The other issue is related to turning on the correct "URL Encoding" scheme, but I don't know how to do that in CapCal :(
Well, I can help you with the second issue:
Replace + with %2b and that will work.
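The reason, illustrated with Ruby's CGI module: in a URL query string a bare "+" is the encoding of a space, so only "%2b" round-trips as a literal plus.

require 'cgi'

CGI.unescape("2013-01-15+05%3a43%3a01")    # => "2013-01-15 05:43:01"  ('+' decodes to a space)
CGI.unescape("2013-01-15%2b05%3a43%3a01")  # => "2013-01-15+05:43:01"  ('%2b' decodes to a literal '+')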
As far as the viewstate correlation is concerned, I'm still looking into it.
I'll keep you posted if you are still interested.

Find unused code in a Rails app

How do I find what code is and isn't being run in production?
The app is well-tested, but there are a lot of tests that test unused code, so that code still gets coverage when the tests run. I'd like to refactor and clean up this mess; it keeps wasting my time.
I have a lot of background jobs, which is why I'd like the production environment to guide me. Running on Heroku, I can spin up extra dynos to compensate for any performance impact from the profiling.
The related question How can I find unused methods in a Ruby app? was not helpful.
Bonus: metrics to show how often a line of code is run. Don't know why I want it, but I do! :)
Under normal circumstances the approach would be to use your test data for code coverage, but since you say you have parts of your code that are tested but not used in the production app, you can do something slightly different.
Just for clarity first: Don't trust automatic tools. They will only show you results for things you actively test, nothing more.
With the disclaimer behind us, I propose you use a code coverage tool (like rcov for Ruby 1.8 or simplecov for Ruby 1.9) on your production app and measure the code paths that are actually used by your users. While these tools were originally designed for measuring test coverage, you can also use them for production coverage.
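As a minimal sketch, Ruby 1.9's built-in Coverage API can do this without any gem; the dump path is an assumption, and Coverage.start has to run before your application code is loaded:

# e.g. at the very top of config/boot.rb
require 'coverage'
require 'json'
Coverage.start

at_exit do
  # Coverage.result maps each loaded file to an array of per-line hit
  # counts (nil for lines that are not executable).
  File.write('/tmp/production_coverage.json', JSON.dump(Coverage.result))
end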
Under the assumption that during the test time-frame all relevant code paths are visited, you can remove the rest. Unfortunately, this assumption will most probably not fully hold. So you will still have to apply your knowledge of the app and its inner workings when removing parts. This is even more important when removing declarative parts (like model references) as those are often not directly run but only used for configuring other parts of the system.
Another approach, which could be combined with the above, is to try to refactor your app into distinct features that you can turn on and off. Then you can turn off features that are suspected to be unused and check if nobody complains :)
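To make that concrete, here is a minimal ENV-driven toggle; the module and flag names are hypothetical:

module Features
  # A feature is on unless explicitly switched off via the environment,
  # e.g. FEATURE_LEGACY_EXPORT=off.
  def self.enabled?(name)
    ENV.fetch("FEATURE_#{name.to_s.upcase}", "on") != "off"
  end
end

# Guarding a suspect code path:
if Features.enabled?(:legacy_export)
  puts "running legacy export"    # stand-in for the real work
else
  puts "legacy export requested while switched off"
end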
And as a final note: you won't find a magic tool to do your full analysis. That's because no tool can know whether a certain piece of code is used by actual users or not. The only thing tools can do is create (more or less) static reachability graphs, telling you if your code is somehow called from a certain point. With a dynamic language like Ruby, even this is rather hard to achieve, as static analysis doesn't bring much insight in the face of the meta-programming and dynamic calls that are heavily used in a Rails context. So some tools actually run your code or try to get insight from test coverage. But there is definitely no magic spell.
So given the high internal (mostly hidden) complexity of a Rails application, you will not get around doing most of the analysis by hand. The best advice would probably be to try to modularize your app and turn off certain modules to test if they are not used. This can be supported by proper integration tests.
Check out the coverband gem; it does exactly what you are searching for.
Maybe you can try rails_best_practices to check for unused methods and classes.
Here it is on GitHub: https://github.com/railsbp/rails_best_practices .
Put gem "rails_best_practices" in your Gemfile, then run rails_best_practices . from the application root (rails_best_practices -g generates a configuration file you can tweak).
I had the same problem, and after exploring some alternatives I realized that I have all the info available out of the box: log files. Our log format is as follows:
Dec 18 03:10:41 ip-xx-xx-xx-xx appname-p[7776]: Processing by MyController#show as HTML
So I created a simple script to parse this info:
zfgrep Processing production.log*.gz | awk '{print $8}' > ~/tmp/action
sort ~/tmp/action | uniq -c | sort -g -r > ~/tmp/histogram
This produced counts of how often a given controller#action was accessed:
4394886 MyController#index
3237203 MyController#show
1644765 MyController#edit
The next step is to compare this to the list of all controller#action pairs in the app (using the rake routes output, or by running the same script against the test suite's logs).
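A rough sketch of that comparison in Ruby, assuming Rails 3-style rake routes output and the histogram file from above; run it from the app root, and note that the normalization from route format to log format is illustrative:

seen = File.readlines(File.expand_path('~/tmp/histogram'))
           .map { |line| line.split.last }             # e.g. "MyController#show"

routed = `rake routes`.lines
  .map { |line| line.split.last }                      # e.g. "admin/my#show"
  .grep(/\A[\w\/]+#\w+\z/)                             # keep controller#action tokens only
  .map do |route|
    controller, action = route.split('#')
    # Normalize "admin/my#show" to the log format "Admin::MyController#show"
    klass = controller.split('/')
                      .map { |part| part.split('_').map(&:capitalize).join }
                      .join('::')
    "#{klass}Controller##{action}"
  end

puts 'Routed but never seen in the production logs:'
puts (routed - seen).sort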
You already got the idea of marking suspicious methods as private (which may break your application).
A small variation I used in the past: add a small piece of code to every suspicious method to log its use. In my case it was a user popup: "You called an obsolete function; if you really need it, please contact IT".
After one year we had a good overview of what was really used (it was a business application, and there were functions needed only once a year).
In your case you should only log the usage. Everything that is not logged after a reasonable period is unused.
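In a Ruby app, the logging variant might look like this; the module, class, and log path are hypothetical:

require 'logger'

module ObsoleteLogger
  LOG = Logger.new('/tmp/obsolete_calls.log')   # swap for Rails.logger in a Rails app

  def log_obsolete(method_name)
    LOG.warn("[obsolete?] #{self.class}##{method_name} called")
  end
end

class ReportService
  include ObsoleteLogger

  def quarterly_report           # suspected dead code
    log_obsolete(__method__)     # shows up in the log if anyone still calls it
    # ... original body ...
  end
end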
I'm not very familiar with Ruby and RoR, but here is a crazy guess:
add an :after_filter which logs the name of the action that was just called (grab it from the call stack, or simply from the controller) to a file; see the sketch below
deploy this to production
wait for a while
remove all methods that are not in the log.
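A minimal sketch of that filter in a Rails 3 application controller; the log path is an assumption, and action_name spares you from walking the call stack:

class ApplicationController < ActionController::Base
  after_filter :log_action_usage

  private

  def log_action_usage
    # Append the action that just ran, e.g. "UsersController#show"
    File.open(Rails.root.join('log', 'action_usage.log'), 'a') do |f|
      f.puts "#{self.class}##{action_name}"
    end
  end
end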
P.S. Probably the Alt+F7 solution in NetBeans or RubyMine is much better :)
Metaprogramming
Object#method_missing
Override Object#method_missing. Inside, log the called class and method, asynchronously, to a data store, then manually call the original method with the proper arguments, based on the arguments passed to method_missing.
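A rough sketch of the idea applied to a single class rather than all of Object; the class, method, and log path are hypothetical, and the performance disclaimer below applies in full:

# A toy class standing in for real app code whose #generate we suspect is dead.
class Report
  def generate(kind)
    "generated #{kind}"
  end
end

# Re-open it: push calls through method_missing so each use gets logged.
class Report
  alias_method :generate_original, :generate   # keep the real implementation
  undef_method :generate                       # force calls through method_missing

  def method_missing(name, *args, &block)
    return super unless name == :generate
    # Log the call (ideally asynchronously), then delegate to the original.
    File.open('/tmp/called_methods.log', 'a') { |f| f.puts "#{self.class}##{name}" }
    generate_original(*args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    name == :generate || super
  end
end

puts Report.new.generate(:pdf)   # logs "Report#generate", then runs the original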
Object tree
Then compare the data in the data store to the contents of the application's object tree.
Disclaimer: this will surely require significant performance and resource consideration. It will also take a little tinkering to get working, but theoretically it should work. I'll leave the full implementation as an exercise for the original poster. ;)
Have you tried creating a test suite using something like Sahi? You could then record all your user journeys with it and tie those tests to rcov or something similar.
You do have to ensure you cover all user journeys, but after that you can look at what rcov spits out and at least start to prune out stuff that is obviously never covered.
This isn't a very proactive approach, but I've often used results gathered from New Relic to see if something I suspected of being unused had been called in production any time in the past month or so. The apps I've used this on have been pretty small, though, and it's decently expensive for larger applications.
I've never used it myself, but this post about the laser gem seems to talk about solving your exact problem.
Mark suspicious methods as private. If that does not break the code, check whether the methods are used inside the class; then you can delete things.
It is not a perfect solution, but in NetBeans, for example, you can find usages of a method by right-clicking on it (or pressing Alt+F7).
So if a method is unused, you will see it.

Why should we use coded ui when we have Specflow?

We have used SpecFlow and WatiN for acceptance tests on my current project. The customer wants us to use Microsoft Coded UI instead. I have never tried Coded UI, but from what I've seen so far it looks cumbersome. I want to specify my acceptance tests up front, before I have a UI, not as the result of some record/playback stuff. Anyway, can someone please tell me why we should throw away the SpecFlow/WatiN combo and replace it with Coded UI?
I've also read that you can combine SpecFlow with Coded UI, but it looks like a lot of overhead for something I am already doing fine in SpecFlow.
I wrote a blog post on how to do this that you might find useful:
http://rburnham.wordpress.com/2011/03/15/bdd-ui-automation-with-specflow-and-coded-ui-tests/
The pros and cons of Coded UI Tests, as I see them: you're testing the application exactly how the user will be using it, which is good for acceptance tests but also has its limitations. It's also really good for end-to-end testing. In the past, UI tests have been known to be fragile; for example, when MS created the VS2010 UI, almost all of their UI tests broke, the main reason being the technology change. Coded UI Tests help limit this by the way they match a control: they use more of a probability-based match, meaning they will try to find the best match based on the information they have, such as the control name. For us, Coded UI Tests were the choice because of technology limitations: our legacy app is VB, and although CUIT does not work great with it (I'm in the process of writing an extension to get better control information), it was still our only choice.
Also keep in mind that CUIT is new and has its own limitations. You should be prepared to be very structured in the way you lay out your project, as maintaining your UIMaps can be a bit of manual work due to the current end-to-end behaviour in VS2010; for example, creating a CUIT from an existing action recording always places the test in a UIMap called UIMap.uitest, and there is no way to change that or transfer it to another UIMap. If you use multiple UIMaps, this means you will need to record your steps first and then use them in your test. However, being in .NET, it is still very flexible.
By far the best thing about SpecFlow is its Gherkin syntax, for readability and living documentation. Normally you're testing features or behaviours of your app, which is where the value comes from, and it generally aims the test just below the UI, so there is a little less chance of the test breaking when the UI changes here and there. SpecFlow to me is great when your application is under constant change and you want to ensure existing features keep working. It fits well in a Scrum environment, too, where you can write your scenarios as a description of how the feature should work. One limitation of SpecFlow I can see is that it's open to interpretation; because of this, it can be easy to write a test that is not very reusable and is hard to maintain. I like to use more generic terms to describe my steps, like "Log in as User1" instead of "Go to login page, enter username and password, click login". Describing steps at that granular level makes them harder to reuse and tightly couples them to the UI; how the login actually works should be up to the code behind the step, not the SpecFlow feature.
Combining the two, however, seems more beneficial to us than just using Coded UI Tests. If we decide to completely change the UI, we would at least have the expected behaviours stored in our SpecFlow features in a form anyone can understand. In the end, you need to consider how the application will evolve and what type of application it is.

Fluent mapping verification for Entity Framework 4

Note: This is a follow-up question for this previous question of mine.
Inspired by this blog post, I'm trying to construct a fluent way to test my EF4 Code-Only mappings. However, I'm stuck almost instantly...
To be able to implement this, I also need to implement the CheckProperty method, and I'm quite unsure how to save the parameters in the PersistenceSpecification class and how to use them in VerifyTheMappings.
Also, I'd like to write tests for this class, but I'm not at all sure on how to accomplish that. What do I test? And how?
Any help is appreciated.
Update: I've taken a look at the implementation in Fluent NHibernate's source code, and it seems like it would be quite easy to just take the source and adapt it to Entity Framework. However, I can't find anything about modifying and using parts of the source in the BSD license. Would copy-pasting their code into my project, and changing whatever I want to suit my needs, be legal for non-commercial private or open source projects? Would it be for commercial projects?
I was going to suggest looking at how FluentNH does this, until I got to your update. Anyway, you're already investigating that approach.
As to the portion of your question regarding the BSD license, I'd say the relevant part of the license is this: Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: [conditions follow].
From my reading of that line, you can modify (which would include the removal of any code not relevant to your use cases) the code however you wish, and redistribute it as long as you meet the author's conditions.
Since there are no qualifications on how you may use or redistribute the code or binaries, then you are free to do that however you wish, for any and all applications.
Here and here are descriptions of the license in layman's terms.
I always write a simple set of integration tests for each entity. The tests persist, select, update, and delete the entity. I think there is no better or easier way to test your mapping and other features of the model (like cascade deletes).

TFS Check in rules

Hi, I'm in the process of setting up a TFS server and I want to set some check-in rules.
For example, I want to be able to set rules about method length, complexity and so on. I found NDepend very convenient; can I somehow use it to run rules on the files being checked in?
I also want to be able to bypass the rules sometimes.
Are there any blogs or discussions around this? If it won't work with NDepend, are there any other tools or approaches I can use?
I would be very careful about this. I worked at a place once that had strict method length rules. If Calculate(a,b,c) ended up 1.5 times the limit length, the devs would just move the last third of the function into Calculate2() and call it from Calculate(). All the active locals would become parameters, of course; sometimes there would be a dozen of them. The resulting mess passed the automated checks for method length but was definitely not better or more maintainable than the long method would have been.
Would it have been nice if the devs had spotted something refactorable in the middle of the method, pulled it out and given it a good name? Yes it would. But systems are all game-able, and the sorts of "dammit I just want to check in and go home" changes that are made to comply with method length rules (among others) make the code worse. A lot worse.
Also, to bypass the rules, there's a way on check-in to say that you are bypassing them and why.
