Running BeforeTestRun/AfterTestRun once on SpecFlow Runner

Has anyone successfully set up a BeforeTestRun/AfterTestRun hook when using the SpecFlow Runner in multithreaded mode with AppDomain isolation?
I saw an answer in this thread: Run BeforeTestRun and AfterTestRun only once using specflow with Selenium
Unfortunately, I'm having difficulty setting this up, and we need it to configure our TestRail integration.

It is possible - you have to use a kernel-based lock (the following constructor is the one to use):
https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphore.-ctor?view=net-5.0#System_Threading_Semaphore__ctor_System_Int32_System_Int32_System_String_System_Boolean__
The flow:
- Create a named instance of Semaphore.
- The thread where createdNew == true is the first one to enter the code - execute your test run initialization code there.
- On the other threads, call WaitOne with a suitable timeout.
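A minimal sketch of that flow as a SpecFlow [BeforeTestRun] hook. The semaphore name, the timeout, and InitializeTestRailIntegration are placeholders for your own setup, and releasing the semaphore after initialization (so waiters are unblocked early) is one way to complete the flow described above:

using System;
using System.Threading;
using TechTalk.SpecFlow;

[Binding]
public class TestRunHooks
{
    // Keep a reference so the kernel object stays alive for the whole run.
    private static Semaphore _initLock;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // initialCount = 0 so that everyone except the creator blocks in WaitOne.
        _initLock = new Semaphore(0, 1, "MyProject.TestRunInit", out bool createdNew);

        if (createdNew)
        {
            InitializeTestRailIntegration(); // placeholder: your run-once init code
            _initLock.Release();             // unblock the first waiter
        }
        else if (_initLock.WaitOne(TimeSpan.FromMinutes(2)))
        {
            _initLock.Release();             // pass the token on to the next waiter
        }
        else
        {
            throw new TimeoutException("Test run initialization did not complete in time.");
        }
    }

    private static void InitializeTestRailIntegration()
    {
        // e.g. create the TestRail test run here
    }
}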
Best regards,
PM

Related

How to close the newly created instance of webdriver in geb

I've been looking into some WebDriver automation scripts using Geb and Spock. In a few specs, I want another instance of WebDriver with different capabilities, so I won't be using the driver instance created by Geb by default. But when I use driver.close() in the cleanup of a test, GebReporting fails, stating NoSuchSessionException. I'm facing this issue after upgrading to Firefox 74. Below is the outline of the script I have now.
Spec
Test
-- reset browser
-- clearCacheAndShutdownDriver
-- browser.driver = create new driver with new capabilities
cleanup : driver.close
Now, after cleanup, the driver is closed and GebReporting tries to use that driver instance, which fails with NoSuchSessionException. Is there any way to handle this the Geb way? For instance, instead of only clearing the cache, could I store the default instance in another variable and restore it to browser.driver after closing the newly created driver instance?
I did find a post with a similar question, but it didn't help me.
Thanks in advance.
You need to call resetBrowser() on your spec instance before calling driver.close() to ensure that the Browser instance which is holding a reference to the WebDriver instance you are closing is detached from the spec and therefore no reporting is attempted at the end of the spec.

How to remote debug a Spring Cloud Data flow Task

We are using Spring XD for executing some batch jobs and are considering moving to Spring Cloud Data Flow. For this I wanted to remote debug the execution of a task, but I was not able to make it work.
I tried to export the following environment variable before the SCDF server is started:
spring.cloud.deployer.local.javaOpts=Xdebug -Xrunjdwp:transport=dt_socket,address=12201,server=y
Also tried to pass as argument in the GUI while invoking the task:
app.<appname>.local.javaOpts=Xdebug -Xrunjdwp:transport=dt_socket,address=12201,server=y
Nothing seems to be working.
I'm able to debug the composed-task-runner launched by SCDF using the debugger's listen mode; this will work for your task as well.
1. Run the debugger in your IDE in listen mode on port 5006 (this project's classpath should have the composed-task-runner sources; put a breakpoint somewhere).
2. Run SCDF with the -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 option and attach the debugger to the SCDF process in your IDE on port 5005 (attach mode).
3. Put a breakpoint at this line:
String javaOptsString = getValue(deploymentProperties, "javaOpts");
in the JavaCommandBuilder class (for spring-cloud-deployer-local v1.3.0.M2 it's line 83).
4. Launch your task - the debugger stops at the breakpoint.
5. Step over once in your IDE; the value of javaOptsString is now null. Using the IDE, set the value of javaOptsString to
-agentlib:jdwp=transport=dt_socket,server=n,address=localhost:5006,suspend=y
6. Press Resume in the IDE.
7. Your breakpoint set in step 1 should be hit in a few seconds.
If you know how to pass javaOpts as a deployment property of your task, you will be able to debug in listen mode without this nightmare ;-). I've not found a way to escape the = and , characters in the -agentlib:jdwp=transport=dt_socket,server=n,address=localhost:5006,suspend=y javaOpts deployment property.
We are working on an improved solution for the local-deployer - you can follow spring-cloud/spring-cloud-dataflow#369 for tracking purposes.
There is, however, the following option that exists to aggregate all the application logs into the server console directly, which might be useful while in active development.
stream deploy --name myStream --properties "deployer.*.local.inheritLogging=true"
Finally, I was able to remote debug a composed task or a regular task. Follow the steps below:
In the SCDF UI, go to Tasks and click on the Definitions section.
Click the play button (invoke) on the task/composed task that you want to launch.
On the launch task page, define your task arguments.
Add the following properties by clicking the 'Add Property' button:
- deployer.composed-task-runner.local.debugPort=12103
- deployer.composed-task-runner.local.debugSuspend=y
Now launch the task.
You can now see in the log that when the composed task's Java process is launched, it is started with the debug parameters.
If you want to control the heap memory or any other Java options, you can do so by adding the following property:
deployer.composed-task-runner.local.javaOpts=Xmx2048M
Note that 'composed-task-runner' is the name of the app (not the name of the task).
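If you use the SCDF shell instead of the UI, the same deployment properties can, as far as I remember, be passed on the launch command; my-composed-task below is a placeholder for your task definition name:
task launch my-composed-task --properties "deployer.composed-task-runner.local.debugPort=12103,deployer.composed-task-runner.local.debugSuspend=y"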

SDN 4.1 - Multithreading Neo4j TestServer for parallel execution of test cases

I'm using org.neo4j.ogm.testutil.TestServer and the HTTP driver for integration testing instead of the embedded driver because I like how the TestServer provides a browser-based interface to see what is happening with each test.
However - my tests take ages! The build is getting up to around 30 minutes on a reasonably quick machine.
What I'd like to do is use the maven surefire plugin to execute my test cases in parallel.
To do this I imagine I'll need to be able to start up several Neo4j TestServer instances, each on a different port.
Where is the best place to do this using SDN 4.1? I assume the @Before and @After (for shutdown) methods of my test cases (possibly extracted into a superclass)?
Also, how would I get the current port for the current test context into each unit test?
Any suggestions on how to go about this would be greatly appreciated :)
Have a look at org.neo4j.ogm.testutil.MultiDriverTestClass which sets up the Driver using the TestServer.
The TestServer should pick an available port anyway, which should solve your problem of setting these up in parallel. In fact, you can just have your test class extend org.neo4j.ogm.testutil.MultiDriverTestClass (most of the tests in org.neo4j.ogm.persistence.examples do this) and provide an ogm.properties file that specifies the HTTP driver as the driver to be used:
driver=org.neo4j.ogm.drivers.http.driver.HttpDriver

Prevent setup project from uninstalling busy service

How do I prevent my setup project from uninstalling my Windows service while it is performing a lengthy work routine?
Ideally, MSI should report that the "Service is currently busy and cannot be uninstalled."
How can I create a condition for the installer to check whether the service is busy and fail the installation?
You could maybe use an Installer class for your application. You could override the OnBeforeUninstall method so that it looks to see if the process is running, and then waits for it to stop before proceeding.
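A rough sketch of that idea, assuming a ServiceController-based check; the service name "MyLongRunningService" and the ServiceIsBusy helper are placeholders for whatever mechanism your service uses to report that it is busy:

using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class MyServiceInstaller : Installer
{
    protected override void OnBeforeUninstall(IDictionary savedState)
    {
        using (var sc = new ServiceController("MyLongRunningService"))
        {
            if (sc.Status == ServiceControllerStatus.Running && ServiceIsBusy())
            {
                // Throwing here aborts the uninstall with a message shown to the user.
                throw new InstallException(
                    "Service is currently busy and cannot be uninstalled.");
            }
        }
        base.OnBeforeUninstall(savedState);
    }

    private static bool ServiceIsBusy()
    {
        // Placeholder: query a status file, a named mutex, a custom service command, etc.
        return false;
    }
}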
A similar solution to the one YWE posted is to create a custom action inside a DLL and run it when the uninstall process starts. In the custom action you could interrogate the service to check its status, and if it's busy, fail the uninstall with a relevant message for the user.
Walkthrough: Creating a custom action

How to tell if process is run by the Service Control Manager

I have a few Windows services written in C# that I have set up to support being run from the command line as a console app if a specific parameter is passed. This works great, but I would love to be able to detect whether the app is being run by the service control manager or from a command line.
Is there any way to tell at runtime if my app was started by the SCM?
Environment.UserInteractive will return false if the process is running under the SCM.
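For example, the same executable can branch on it in Main; MyService and the RunAsConsole/StopAsConsole helpers below are placeholders for your own service class:

using System;
using System.ServiceProcess;

// Hypothetical service; RunAsConsole/StopAsConsole simply reuse the OnStart/OnStop logic.
public class MyService : ServiceBase
{
    protected override void OnStart(string[] args) { /* start your worker here */ }
    protected override void OnStop() { /* stop your worker here */ }

    public void RunAsConsole(string[] args) { OnStart(args); }
    public void StopAsConsole() { OnStop(); }
}

static class Program
{
    static void Main(string[] args)
    {
        if (Environment.UserInteractive)
        {
            // Started from a command prompt (interactive session): run as a console app.
            var service = new MyService();
            service.RunAsConsole(args);
            Console.WriteLine("Running as console. Press any key to stop...");
            Console.ReadKey();
            service.StopAsConsole();
        }
        else
        {
            // Started by the Service Control Manager.
            ServiceBase.Run(new MyService());
        }
    }
}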
The SCM will call your OnStart method, so you could mark that event and make sure when you run from the command line, you don't call OnStart. Or, you could check the startup parameters to see how the application was started.
In C, the function StartServiceCtrlDispatcher() will fail with ERROR_FAILED_SERVICE_CONTROLLER_CONNECT when the program is run as a console application rather than as a service. This is the best way in C; I wonder if C# exposes any of this?
ERROR_FAILED_SERVICE_CONTROLLER_CONNECT
This error is returned if the program is being run as a console application rather than as a service. If the program will be run as a console application for debugging purposes, structure it such that service-specific code is not called when this error is returned.
