I have a JUnit test script that creates different, unique IDs. When it finds an existing ID or a wrong ID, I want the test report generated via Ant to show that the test failed for that record but passed for the rest of the records that are correct.
@Test
public void testCreateTrade() throws Exception
driver.findElement(By.id("VIN")).clear();
driver.findElement(By.id("VIN")).sendKeys(vVin);
String str = driver.getCurrentUrl();
if(str.contains("step1")) // for existing ID
{
driver.findElement(By.cssSelector("body > div.bootbox.modal.in > div.modal-footer > a.btn.null")).click();
break;
}
driver.findElement(By.id("mileage")).sendKeys(vMileage);
driver.findElement(By.id("odometerType")).sendKeys(vKm);
driver.findElement(By.id("passengers")).sendKeys(vPassengers);
driver.findElement(By.id("exteriorColor")).sendKeys(vExterior);
driver.findElement(By.id("interiorColor")).sendKeys(vInterior);
driver.findElement(By.id("hasAccident")).sendKeys(vAccident);
driver.findElement(By.id("dealerSalesPerson")).sendKeys(vSalesPerson);
driver.findElement(By.id("step3btn")).click();
Thread.sleep(1000);
String str3 = driver.getCurrentUrl();
if(str3.contains("step2")) // Loop for wrong ID
{
driver.findElement(By.linkText("Create")).click();
driver.findElement(By.xpath("html/body/div[7]/div[2]/a[1]")).click();
//System.out.println("Is a wrong Vin"+vVin);
break;
}
driver.findElement(By.id("step4btn")).click();
driver.findElement(By.id("windshieldCondition")).sendKeys(vWindshield);
driver.findElement(By.id("tireCondition")).sendKeys(vTire);
driver.findElement(By.id("accidentBrand3")).sendKeys(vAcBrand);
driver.findElement(By.id("confirmedParked")).click();
If you want a single test case to continue running after it "fails" and then report its exceptions at the end, use ErrorCollector.
@RunWith(JUnit4.class)
public class YourTestClass {

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    @Test
    public void yourTest() {
        // ... (your setup)
        for (Record record : expectedRecords) {
            if (dataSource.hasRecord(record.getId())) {
                Record fetchedRecord = dataSource.getRecord(record.getId());
                errorCollector.checkThat(fetchedRecord, matchesRecordValuesOf(record));
            } else {
                errorCollector.addError(new IllegalStateException("Missing record: " + record.getId()));
            }
        }
    }
}
Note, however, that it's always preferable to test exactly one thing per unit test. Here it may make sense, but don't overuse ErrorCollector where refactoring and splitting the test makes more sense.
There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job).
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine the current attempt number, but I am struggling to find any documentation on this, either from a Google search or in the Hangfire.io documentation.
Simply add PerformContext to your job method; you'll also be able to access your JobId from this object. For the attempt number, this still relies on magic strings, but it's a little less flaky than the current/only answer:
public void SendEmail(PerformContext context, string emailAddress)
{
string jobId = context.BackgroundJob.Id;
int retryCount = context.GetJobParameter<int>("RetryCount");
// send an email
}
(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IApplyStateFilter
{
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
try
{
var failedState = context.NewState as FailedState;
if (failedState != null)
{
// Job has finally failed (retry attempts exceeded)
// *** DO YOUR CLEANUP HERE ***
}
}
catch (Exception)
{
// Unhandled exceptions can cause an endless loop.
// Therefore, catch and ignore them all.
// See notes below.
}
}
public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
// Must be implemented, but can be empty.
}
}
Add the filter directly to the job function:
[CleanupAfterFailureFilter]
public static void MyJob()
or add it globally:
GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter ());
or like this:
var options = new BackgroundJobServerOptions
{
FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
Or see http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries. A downside to this method is that the state transition to "failed" cannot be interrupted, but this is not needed in this case and in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad, because they can hide bugs and make them hard to debug in production. It is necessary here, so the callback doesn't get called repeatedly forever. You may want to log exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions in a job filter. One possibility is, instead of doing the cleanup work in-place, to schedule a new background job which runs if the original job failed. Be careful to not apply the filter CleanupAfterFailureFilter to it, though. Don't register it globally, or add some extra logic to it...
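A minimal sketch of that approach, assuming a hypothetical CleanupService class with a DeleteRecordForJob method (neither comes from Hangfire or the original question):
// Inside the "DO YOUR CLEANUP HERE" branch of OnStateApplied, enqueue a separate
// cleanup job instead of touching the database in the filter itself.
// CleanupService / DeleteRecordForJob are hypothetical placeholders.
var failedJobId = context.BackgroundJob.Id;
BackgroundJob.Enqueue<CleanupService>(s => s.DeleteRecordForJob(failedJobId));
Just make sure CleanupAfterFailureFilter is not applied to CleanupService, as noted above.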
You can use the OnPerforming or OnPerformed method of IServerFilter if you want to check the attempts, or you can just wait on OnStateElection of IElectStateFilter. I don't know exactly what requirement you have, so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
public void OnStateElection(ElectStateContext context)
{
// all failed jobs end up here after their retry attempts are exceeded
var failedState = context.CandidateState as FailedState;
if (failedState == null) return;
}
public void OnPerforming(PerformingContext filterContext)
{
// do nothing
}
public void OnPerformed(PerformedContext filterContext)
{
// you can move all of this code into OnPerforming if you want.
var api = JobStorage.Current.GetMonitoringApi();
var job = api.JobDetails(filterContext.BackgroundJob.Id);
foreach(var history in job.History)
{
// check the Reason property and you will find a string like
// "Retry attempt 3 of 3: The method or operation is not implemented."
}
}
}
How to add your filter
GlobalJobFilters.Filters.Add(new JobStateFilter());
or
var options = new BackgroundJobServerOptions
{
FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);
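As a rough sketch (assuming the Reason string format shown in the comment above, plus System.Linq), the attempt number could be pulled out of the state history inside OnPerformed like this:
// Find a history entry whose Reason looks like "Retry attempt 3 of 3: ..." and parse the attempt number.
var retryReason = job.History
    .Select(h => h.Reason)
    .FirstOrDefault(r => r != null && r.StartsWith("Retry attempt"));
if (retryReason != null)
{
    // "Retry attempt 3 of 3: ..." -> 3
    int attemptNumber = int.Parse(retryReason.Split(' ')[2]);
}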
I'm trying to figure out the best way to build my unit tests for an MVC app. I created a simple model and interface, which is used by the controller constructors so that the testing framework (NSubstitute) can pass in a mocked version of the repository. This test passes, as expected.
My problem is now I want to take this a step further and test the file I/O operations in the "real" instantiation of IHomeRepository. This implementation should read a value from a file in the App_Data directory.
I've tried building a test without passing a mocked version of IHomeRepository in; however, HttpContext.Current is null when I run my test.
Do I need to mock HttpContext? Am I even going about this in the right way?
//The model
public class VersionModel
{
public String BuildNumber { get; set; }
}
//Interface defining the repository
public interface IHomeRepository
{
VersionModel Version { get; }
}
//define the controller so the unit testing framework can pass in a mocked repository. The default constructor creates a real repository
public class HomeController : Controller
{
public IHomeRepository HomeRepository;
public HomeController()
{
HomeRepository = new HomeRepoRepository();
}
public HomeController(IHomeRepository homeRepository)
{
HomeRepository = homeRepository;
}
.
.
.
}
class HomeRepoRepository : IHomeRepository
{
private VersionModel _version;
VersionModel IHomeRepository.Version
{
get
{
if (_version == null)
{
var absoluteFileLocation = HttpContext.Current.Server.MapPath("~/App_Data/repo.txt");
if (absoluteFileLocation != null)
{
_version = new VersionModel() //read the values from file (not shown here)
{
BuildNumber = "value from file",
};
}
else
{
throw new Exception("path is null");
}
}
return _version;
}
}
}
[Fact]
public void Version()
{
// Arrange
var repo = Substitute.For<IHomeRepository>(); //using Nsubstitute, but could be any mock framework
repo.Version.Returns(new VersionModel
{
BuildNumber = "1.2.3.4",
});
HomeController controller = new HomeController(repo); //pass in the mocked repository
// Act
ViewResult result = controller.Version() as ViewResult;
var m = (VersionModel)result.Model;
// Assert
Assert.True(!string.IsNullOrEmpty(m.BuildNumber));
}
I believe you want to test the real instantiation of IHomeRepository, which connects to a real database. In that case you need an App.config file that specifies the connection string. This is not a unit test; it would be an integration test. Even with HttpContext being null, you can still fake the HttpContext and retrieve real data from the database. See also here.
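A minimal sketch of faking HttpContext.Current in a test (assuming System.Web and System.IO are referenced; this builds a bare HttpContext by hand rather than a full ASP.NET pipeline):
// Fake HttpContext.Current so code that reads it does not hit a null reference.
// Note: Server.MapPath still depends on the ASP.NET hosting environment, so path
// resolution may need to be stubbed or abstracted separately.
HttpContext.Current = new HttpContext(
    new HttpRequest("", "http://localhost/", ""),
    new HttpResponse(new StringWriter()));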
I'm new to Testacular (now Karma), but I've found it really powerful and great for automated cross-browser JS testing. I want to know whether it is possible to use it as part of the TFS build procedure to run automated JS unit tests. If anyone has previous experience, please let us know what to look out for so that we don't go down the wrong path.
Here is my pseudo code to run Karma in TFS using a C# helper class. The basic idea is:
Use a C# unit test to test your JS files using Karma.
Capture the output of Karma to show it in your build log.
Use a separate process to run Karma.
Pack all Karma files into a zip file and extract it into a temporary folder for each build, so that builds with different versions of Karma won't conflict with each other.
Clean up the temp folder after the build.
namespace Test.Javascript.CrossBrowserTests
{
public class KarmaTestRunner : IDisposable
{
private const string KarmaPath = @".\node_modules\karma\bin\karma";
private string NodeBasePath { get; set; }
private string NodeFullPath { get { return NodeBasePath + @"\node\node.exe"; } }
private string NpmFullPath { get { return NodeBasePath + @"\node\npm.cmd"; } }
public KarmaTestRunner()
{
ExtractKarmaZip();
LinkGlobalKarma();
}
public int Execute(params string[] arguments)
{
Process consoleProcess = RunKarma(arguments);
return consoleProcess.ExitCode;
}
public void Dispose()
{
UnlinkGlobalKarma();
RemoveTempKarmaFiles();
}
private void ExtractKarmaZip()
{
NodeBasePath = Path.GetTempPath() + Path.GetRandomFileName();
byte[] resourceBytes = Assembly.GetExecutingAssembly().GetEmbeddedResourceBytes(typeof(KarmaTestRunner).Namespace + "." + "karma0.9.4.zip");
ZipFile file = ZipFile.Read(resourceBytes);
file.ExtractAll(NodeBasePath);
}
private void LinkGlobalKarma()
{
ExecuteConsoleProcess(NpmFullPath, "link", "karma");
}
private Process RunKarma(IEnumerable<string> arguments)
{
return ExecuteConsoleProcess(NodeFullPath, new[] { KarmaPath }.Concat(arguments).ToArray());
}
private static Process ExecuteConsoleProcess(string path, params string[] arguments)
{
    // Create a process to run karma with the given arguments and hook up the OutputDataReceived event handler
    var process = new Process { StartInfo = new ProcessStartInfo(path, string.Join(" ", arguments)) { UseShellExecute = false, RedirectStandardOutput = true } };
    process.OutputDataReceived += (sender, e) => OnOutputLineReceived(e.Data);
    process.Start();
    process.BeginOutputReadLine();
    process.WaitForExit();
    return process;
}
static void OnOutputLineReceived(string message)
{
if (message != null)
Console.WriteLine(message);
}
private void UnlinkGlobalKarma()
{
ExecuteConsoleProcess(NpmFullPath, "uninstall", "karma");
}
private void RemoveTempKarmaFiles()
{
Directory.Delete(NodeBasePath, true);
}
}
}
Then use it like this:
namespace Test.Javascript.CrossBrowserTests
{
[TestClass]
public class CrossBrowserJSUnitTests
{
[TestMethod]
public void JavascriptTestsPassForAllBrowsers()
{
using (KarmaTestRunner karmaRunner = new KarmaTestRunner())
{
int exitCode = karmaRunner.Execute("start", @".\Test.Project\Javascript\Karma\karma.conf.js");
exitCode.ShouldBe(0);
}
}
}
}
A lot has changed since the original question and answer.
However, we've gotten Karma to run in our TFS build by running a Grunt task (I'm sure the same is possible with Gulp/whatever task runner you have). We were using C# before, but recently changed.
Have a Grunt build task run.
Add a Grunt task after that.
Point the file path to your gruntfile.js and run your test task. This task will run karma:single. The grunt-cli location may be node_modules/grunt-cli/bin/grunt.
grunt.registerTask('test', [
'karma:single'
]);
Add a Publish Test Results step. Test Results Files = **/*.trx
More information about publishing Karma Test Results
We're attempting to clean up a big bunch of brownfield code, while at the same time a team is adding new functionality. We'd like to make sure changed and new code is free of any compiler, code analysis, or other warnings, but there are too many of them to begin by cleaning up the current solution.
We're using TFS 2010.
So the following was proposed:
Write/select a build activity which compares the list of warnings in the build against the lines of code that changed with that check-in.
If the warning provides a line number, and that line number was changed, fail the build.
I understand this will not find all new warnings and things introduced in other parts of the code will not be flagged, but it's at least something.
Another option that was proposed:
Compare the list of warnings of the previous known good build against the list of this build. If there are new warnings (track on file name level), fail the build.
Any known Actions out there that might provide said functionality?
Any similar Actions that can act on Code Coverage reports?
The following activity is just a basic approach: it returns false if your current build has fewer than or the same number of warnings as your last build, and true if the count has risen. Another activity that can locate new warnings and/or present their location in the code would clearly be superior, but I thought this might be an interesting starting point:
using System;
using System.Activities;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;
namespace CheckWarnings
{
[BuildActivity(HostEnvironmentOption.Agent)]
public sealed class CheckWarnings : CodeActivity<bool>
{
[RequiredArgument]
public InArgument<IBuildDetail> CurrentBuild { get; set; } //buildDetail
public InArgument<string> Configuration { get; set; } //platformConfiguration.Configuration
public InArgument<string> Platform { get; set; } //platformConfiguration.Platform
protected override bool Execute(CodeActivityContext context)
{
IBuildDetail currentBuildDetail = context.GetValue(CurrentBuild);
string currentConfiguration = context.GetValue(Configuration);
string currentPlatform = context.GetValue(Platform);
Uri lastKnownGoodBuildUri = currentBuildDetail.BuildDefinition.LastGoodBuildUri;
IBuildDetail lastKnownGoodBuild = currentBuildDetail.BuildServer.GetBuild(lastKnownGoodBuildUri);
int numOfCurrentWarnings = GetNumberOfWarnings(currentBuildDetail, currentConfiguration, currentPlatform);
context.TrackBuildMessage("Current compile presents " + numOfCurrentWarnings + " warnings.", BuildMessageImportance.Normal);
int numOfLastGoodBuildWarnings = GetNumberOfWarnings(lastKnownGoodBuild, currentConfiguration,
currentPlatform);
context.TrackBuildMessage("Equivalent last good build compile presents " + numOfLastGoodBuildWarnings + " warnings.", BuildMessageImportance.Normal);
if (numOfLastGoodBuildWarnings < numOfCurrentWarnings)
{
return true;
}
return false;
}
private static int GetNumberOfWarnings(IBuildDetail buildDetail, string configuration, string platform)
{
var buildInformationNodes =
buildDetail.Information.GetNodesByType("ConfigurationSummary");
foreach (var buildInformationNode in buildInformationNodes)
{
string localPlatform, numOfWarnings;
string localConfiguration = localPlatform = numOfWarnings = "";
foreach (var field in buildInformationNode.Fields)
{
if (field.Key == "Flavor")
{
localConfiguration = field.Value;
}
if (field.Key == "Platform")
{
localPlatform = field.Value;
}
if (field.Key == "TotalCompilationWarnings")
{
numOfWarnings = field.Value;
}
}
if(localConfiguration == configuration && localPlatform == platform)
{
return Convert.ToInt32((numOfWarnings));
}
}
return 0;
}
}
}
Note that this activity doesn't provide exception handling and should be refined further in case your build definitions build more than one solution. It takes three input args (buildDetail, platformConfiguration.Configuration and platformConfiguration.Platform) and should be placed directly after the Run MSBuild activity.
I have a grails application that has a service that creates reports. The report is defined as:
class Report {
Date createDate
String reportType
List contents
static constraints = {
}
}
The service generates a report and populates contents as a list that is returned by createCriteria.
My problem is that my service claims to be saving the Report: no errors turn up and logging says it's all there, but when I call show on that report from the controller, it says contents is null.
Another relevant bit: my service is called via an ActiveMQ message queue, with the message originating from my report controller.
Controller:
class ReportController {
def scaffold = Report
def show = {
def rep = Report.get(params.id)
log.info("Report is " + (rep? "not null" : "null")) //says report is not null
log.info("Report content is " + (rep.contents? "not null" : "null")) //always says report.contents is null.
redirect(action: rep.reportType, model: [results: rep.contents, resultsTotal: rep.contents.size()])
}
}
My service that creates the report:
class ReportService {
static transactional = false
static expose = ['jms']
static destination = "Report"
void onMessage(msg)
{
this."$msg.reportType"(msg)
}
void totalQuery(msg)
{
def results = Result.createCriteria().list {
//This returns exactly what i need.
}
Report.withTransaction() {
def rep = new Report(createDate: new Date(), reportType: "totalQuery", contents: results)
log.info("Validation results: ${rep.validate()}")
if( !rep.save(flush: true) ) {
rep.errors.each {
log.error(it)
}
}
}
}
Is there something obvious that I'm missing here? My thought, since all my unit tests work, is that the Hibernate context is not being passed through the message queue. But that would generate exceptions, wouldn't it? I've been beating my head against this problem for days, so a point in the right direction would be great.
You can't define an arbitrary List like that, so it's getting ignored and treated as transient. You'd get the same behavior if you had a def name field, since in both cases Hibernate doesn't know the data type, so it has no idea how to map it to the database.
If you want to refer to a collection of Results, then you need a hasMany:
class Report {
Date createDate
String reportType
static hasMany = [contents: Result]
}
If you need the ordered list, then also add in a List field with the same name, and instead of creating a Set (the default), it will be a List:
class Report {
Date createDate
String reportType
List contents
static hasMany = [contents: Result]
}
Your unit tests work because you're not accessing a database or using Hibernate. I think it's best to always integration test domain classes so you at least use the in-memory database, and mock the domain classes when testing controllers, services, etc.