I am unable to find a way that will give me a code coverage report for JSR 303 validation.
I've scoured the web and I cannot find a single reference to any attempted solution to this.
I'm frankly at a complete and utter loss as to how to approach it.
Would anyone know of a way to attempt this?
I'm okay with even changing my coverage tool if necessary. I use Cobertura and JaCoCo.
Thanks for the help.
I think the question is about verifying that the test cases cover all of the declared validation rules. I'm using the XML configuration, and I have the same question.
My best idea, so far, would be to use an aspect on the
<V extends ConstraintValidator>.isValid() method (pardon the syntax), and use the ConstraintValidatorContext to figure out which rule is being processed. I haven't tried this; I'm not even sure it would work.
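For what it's worth, here is a minimal sketch of that idea in annotation-style AspectJ (untested, and the class and method names are my own): it simply records which ConstraintValidator implementations were actually exercised while the suite ran, which you could then diff against the constraints declared in your mapping to approximate rule coverage.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ConstraintCoverageAspect {

    // Validator classes that were actually exercised while the tests ran.
    private static final Set<String> EXERCISED = ConcurrentHashMap.newKeySet();

    // Intercept every ConstraintValidator implementation's isValid() call.
    @Around("execution(boolean javax.validation.ConstraintValidator+.isValid(..))")
    public Object recordValidation(ProceedingJoinPoint pjp) throws Throwable {
        EXERCISED.add(pjp.getTarget().getClass().getName());
        return pjp.proceed();
    }

    // After the suite, compare this set against the constraints you declared
    // (e.g. parsed from the XML mapping) to get a rough "rule coverage" report.
    public static Set<String> exercisedValidators() {
        return EXERCISED;
    }
}
```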
Compiler error messages usually include lots of human-parseable information about the underlying error. I have custom rules for which I would like to additionally expose this information in a machine-parseable manner. This would allow things like integration with my editor, showing me the locations that need to be fixed.
What is the recommended way of doing this? The best thing I can come up with is to have a fairly simple structure that meshes well with the human-readable part, include it in stdout/stderr, and parse that. But this seems much more error-prone than emitting properly machine-parseable output. And since actions fail in a binary fashion, there can't be any output files available, and I can't think of any other mechanism for getting data out.
Take a look at the Build Event Protocol. Consuming "Progress" messages could be useful here.
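As a rough sketch of what consuming that could look like (the bep.json filename, the Gson dependency, and the exact JSON field names are my assumptions; check them against build_event_stream.proto for your Bazel version): run the build with --build_event_json_file and scan the newline-delimited JSON for progress events.

```java
// Run the build with: bazel build //... --build_event_json_file=bep.json
// Then scan the newline-delimited JSON for progress events.
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BepProgressReader {
    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get("bep.json"))) {
            if (line.isBlank()) continue;
            JsonObject event = JsonParser.parseString(line).getAsJsonObject();
            // "progress" is the payload field name I expect for progress events.
            JsonObject progress = event.getAsJsonObject("progress");
            if (progress != null && progress.has("stderr")) {
                // Your rule's diagnostics end up here; parse out whatever
                // machine-readable structure you chose to embed in them.
                System.out.println(progress.get("stderr").getAsString());
            }
        }
    }
}
```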
I have the following situation:
I have a domain class and I had to remove a field from it, but the field is referenced in several different places, and Grails does not catch this error at compile time.
The STS IDE does underline these references, but it would be totally impractical to sweep the entire application looking for the flagged errors.
Is there another way I can catch these errors?
1. Good test coverage is going to be your best way of making sure you've caught everything when you make changes like this.
2. Make sure you're using your IDE's refactoring functionality. It won't always catch everything, but it will help. Also, read #1.
3. Do a search in STS for the field and clean up the references. That's better than a purely manual sweep. Also, read #1.
4. Read #1.
I'm trying to learn RSpec, but I don't understand what it actually is. Let me explain. I have read many articles and blog posts and I was able to understand a few things (basic terms, how to install it, how to use it, and so on). But I don't understand the main idea. What is behavior? The question may seem absurd, but I really don't understand this.
For example, I have a simple Rails app: a blog, where you can create articles, comments, etc. What is the behavior there?
Maybe this isn't a good example.
I can't grasp the essence of behavior. What does this word mean for an object (articles, comments)?
Can someone explain this to me, maybe with some examples? What behavior needs to be tested? And what is behavior?
The simplest explanation of behavior I can offer is the following.
In OOP, objects send and receive messages. On receiving a message, an object behaves, i.e. it changes its state or sends messages to other objects.
When testing behavior, you check whether the object responds to a received message in the expected way.
BDD says: first define the behavior via a spec, then write the code that makes the object behave as intended.
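To make that concrete (this is plain JUnit in Java rather than RSpec, and the Article class is made up, but the shape is the same): send the object a message, then assert that it behaved the way the spec says it should.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical domain object, just to illustrate "behavior".
class Article {
    private boolean published = false;
    void publish() { published = true; }   // receiving the "publish" message
    boolean isPublished() { return published; }
}

class ArticleBehaviorTest {
    @Test
    void publishingAnArticleMarksItAsPublished() {
        Article article = new Article();
        article.publish();                 // the message we send
        assertTrue(article.isPublished()); // the behavior we expect
    }
}
```

In RSpec the same example would be written as an "it" block, but the structure is identical: set up the object, send it the message, and assert on the resulting state.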
One good thing about RSpec is its behaviour-based way of writing specs. Specs can also be made reusable and shared between different spec files; this is normally called shared examples. Follow these links for a tutorial:
http://blog.davidchelimsky.net/2010/11/07/specifying-mixins-with-shared-example-groups-in-rspec-2/
https://www.relishapp.com/rspec/rspec-core/docs/example-groups/shared-examples
Note: This is a follow-up question for this previous question of mine.
Inspired by this blog post, I'm trying to construct a fluent way to test my EF4 Code-Only mappings. However, I'm stuck almost instantly...
To be able to implement this, I also need to implement the CheckProperty method, and I'm quite unsure how to store the parameters in the PersistenceSpecification class and how to use them in VerifyTheMappings.
Also, I'd like to write tests for this class, but I'm not at all sure on how to accomplish that. What do I test? And how?
Any help is appreciated.
Update: I've taken a look at the implementation in Fluent NHibernate's source code, and it seems like it would be quite easy to just take the source and adapt it to Entity Framework. However, I can't find anything about modifying and using parts of the source in the BSD licence. Would copy-pasting their code into my project, and changing whatever I want to suit my needs, be legal for non-commercial private or open source projects? Would it be for commercial projects?
I was going to suggest looking at how FluentNH does this, until I got to your update. Anyway, you're already investigating that approach.
As to the portion of your question regarding the BSD license, I'd say the relevant part of the license is this: Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: [conditions follow].
From my reading of that line, you can modify (which would include the removal of any code not relevant to your use cases) the code however you wish, and redistribute it as long as you meet the author's conditions.
Since there are no qualifications on how you may use or redistribute the code or binaries, then you are free to do that however you wish, for any and all applications.
Here and here are descriptions of the license in layman's terms.
I always write a simple set of integration tests for each entity: persisting, selecting, updating, and deleting it. I think there is no better or easier way to test your mapping and the other features of the model (like cascade deletes).
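The question is about EF Code-Only, but the shape of such a test is framework-agnostic. Here is what I mean, sketched in JPA/Java terms (the "test-pu" persistence unit and the Customer entity are placeholders):

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

@Entity
class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;

    protected Customer() {}                 // JPA needs a no-arg constructor
    Customer(String name) { this.name = name; }

    Long getId() { return id; }
    String getName() { return name; }
    void setName(String name) { this.name = name; }
}

class CustomerMappingIT {

    // "test-pu" is a placeholder persistence unit pointing at a test database.
    private final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("test-pu");

    @Test
    void roundTripsACustomer() {
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();

        Customer c = new Customer("Alice");  // persist
        em.persist(c);
        em.flush();
        em.clear();                          // force a real reload from the DB

        Customer loaded = em.find(Customer.class, c.getId());    // select
        assertEquals("Alice", loaded.getName());                  // mapping works

        loaded.setName("Bob");               // update
        em.flush();

        em.remove(loaded);                   // delete
        em.flush();
        assertNull(em.find(Customer.class, c.getId()));

        em.getTransaction().rollback();
        em.close();
    }
}
```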
Is there any tangible value in unit testing your own HTML helpers? Many of these things just spit out a bunch of HTML markup; there's little if any logic. So do you just compare one big HTML string to another? I mean, some of these things require you to look at the generated markup in a browser to verify it's the output you want.
Seems a little pointless.
Yes.
While there may be little to no logic now, that doesn't mean that there isn't going to be more logic added down the road. When that logic is added, you want to be sure that it doesn't break the existing functionality.
That's one of the reasons that Unit Tests are written.
If you're following Test-Driven Development, you write the test first and then write the code to satisfy the test.
That's another reason.
You also want to make sure you identify and test any possible edge cases with your Helper (like un-escaped HTML literals, un-encoded special characters, etc).
I guess it depends on how many people will be using/modifying it. I typically create a unit test for an html helper if I know a lot of people could get their hands on it, or if the logic is complex. If I'm going to be the only one using it though, I'm not going to waste my time (or my employer's money).
I can understand you not wanting to write the tests though ... it can be rather annoying to write a few lines of HTML generation code that requires 5x that amount of test code.
A helper takes a simple input and produces a simple output, which makes this a good case for TDD. Think of the time you would otherwise spend on build -> start the site -> fix that silly issue -> start again -> oops, missed this other tiny thing -> start again ... done, happy :). Then dev 2 comes along and makes a small change to "fix" it for something that wasn't working for them, goes through the same cycle, and doesn't notice at the time that it broke your other scenarios.
Instead, you very quickly write the very simple test, verifying that the simple input gives you the simple output you were expecting, with all the closing tags and quotes in place.
Having written HTML Helpers for sitemap menus, for example, or buttons for a wizard framework, I can assure you that some Helpers have plenty of logic that needs testing to be reliable, especially if intended to be used by others.
So it depends what you do with them really. And only you know the answer to that.
The general answer is that Html Helpers can be arbitrarily complex (or simple), depending on what you are doing. So the no brainer, as with anything else, is to test when you need to.
Yes, there's value. How much value is to be determined. ;-)
You might start with basic "returns SOMEthing" tests, and not really care WHAT. Basically just quick sanity tests, in case something fundamental breaks. Then as problems crop up, add more details.
Also consider having your tests parse the HTML into DOMs, which are much easier to test against than strings, particularly if you are looking for just some specific bit.
Or... if you have automated tests against the webapp itself, ensure there are tests that look specifically for the output of your helpers.
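To illustrate the DOM suggestion above (using jsoup in Java purely as an example of the approach; in .NET an HTML parser such as HtmlAgilityPack plays the same role): parse the helper's output and assert on elements rather than on the raw string.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
import org.junit.jupiter.api.Test;

class MenuHelperTest {

    @Test
    void rendersOneLinkPerMenuItem() {
        // Pretend this string came from the helper under test.
        String html = "<ul class=\"menu\"><li><a href=\"/home\">Home</a></li>"
                    + "<li><a href=\"/about\">About</a></li></ul>";

        Document doc = Jsoup.parse(html);
        Elements links = doc.select("ul.menu > li > a");

        // Assert on structure, not on the exact markup string.
        assertEquals(2, links.size());
        assertEquals("/home", links.first().attr("href"));
    }
}
```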
Yes, it should be tested. Basic rule of thumb: if it is not worth testing, it is not worth writing.
However, you need to be a bit careful here when you write your tests. There is a danger that they can be very "brittle".
If you write your tests such that you expect back a specific string, and you have helpers that call other helpers, then a change in one of the core helpers could cause a great many tests to fail.
So it may be better to test that you get back a non-null value, or that specific text is contained somewhere in the return value, rather than testing for an exact string.
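Something along these lines, in other words (a toy Java example with a made-up helper result), where the looser assertions survive harmless markup changes that would break an exact comparison:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class HelperBrittlenessTest {

    @Test
    void prefersContainmentOverExactMatch() {
        // Imagine this string came from a helper that delegates to other helpers.
        String html = "<div class=\"alert alert-info\"><span>Saved!</span></div>";

        // Brittle: breaks if any wrapping helper adds a class or an attribute.
        // assertEquals("<div class=\"alert\"><span>Saved!</span></div>", html);

        // More robust: the output exists and contains the text we care about.
        assertNotNull(html);
        assertTrue(html.contains("Saved!"));
    }
}
```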