I run multiple TestNG 7.4.0 suites in parallel using something like the following (the code is Kotlin, but it is the same in Java; there is no language dependency):
val testNG = TestNG()
with(testNG) {
    setXmlSuites(allMyXmlSuites)
    suiteThreadPoolSize = threadCount
}
LISTENERS.forEach { testNG.addListener(it) }
testNG.run()
In my suites I configure the parent module like this:
suite.parentModule = SuiteParentModule::class.java.name
My problem is that inside SuiteParentModule I have a singleton provider that is invoked multiple times, exactly once per suite. So I guess every suite has an independent instance of the Injector. Here is the provider method that logs multiple times:
@Provides
@Singleton
fun provideEnvironmentUrls(): EnvironmentUrls =
    EnvironmentUrls(
        System.getProperty("url")
    ).also { logger.info("Using default $it") }
Is there any way to make sure the dependency injection container provided by TestNG using Guice remains the same across suites, so that real singletons are provided?
I found that, even when using TestNG's setParentModule(), multiple suites will have multiple injectors.
I managed to get only one injector created by moving my code into a single suite.
Further information can be found in the issue opened in the TestNG GitHub repository.
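If you want to apply that single-suite workaround programmatically, here is a minimal sketch (in Java, since the setup is language independent; allMyXmlSuites and SuiteParentModule are the names from the question). It merges the test entries of all suites into one suite, so TestNG builds only one injector for the parent module:

import java.util.Collections;
import org.testng.TestNG;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

// Merge every <test> into a single suite so the parent module,
// and therefore its singletons, is instantiated only once.
XmlSuite merged = new XmlSuite();
merged.setName("merged-suite");
merged.setParentModule(SuiteParentModule.class.getName());

for (XmlSuite suite : allMyXmlSuites) {
    for (XmlTest test : suite.getTests()) {
        XmlTest copy = new XmlTest(merged); // the constructor registers the test with the merged suite
        copy.setName(test.getName());
        copy.setXmlClasses(test.getXmlClasses());
    }
}

TestNG testNG = new TestNG();
testNG.setXmlSuites(Collections.singletonList(merged));
testNG.run()

Note that suiteThreadPoolSize no longer buys you cross-suite parallelism this way; if you still need parallel execution, you would have to configure the merged suite's parallel mode instead.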
We are trying to use the logback.xml that we use in GCP Cloud Run, which has amazing filtering features. Our logback.xml contains this for Cloud Run:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <layout class="com.orderlyhealth.api.logging.logback.GCPCloudLoggingJSONLayout">
            <pattern>${CONSOLE_PATTERN}</pattern>
        </layout>
    </encoder>
</appender>
Our GCPCloudLoggingJSONLayout does a great job of setting all the things we need, like clientId, customerRequestId, etc., and we can filter across many microservices on one customer or one customer request. We currently lose this in Dataflow, though. We tried adding logback.xml to src/main/resources; after deploying, the project seems to use it in the shell, like so:
{"message":"[main][-][:] o.a.b.r.d.DataflowRunner Template successfully created.\n",
"logger":"org.apache.beam.runners.dataflow.DataflowRunner",
"transactionId":null,"socket":null,"clntSocket":null,
"version":null,
"timestamp":{"seconds":1619694798,"nanos":4000000},
"thread":"main",
"severity":"INFO",
"instanceId":null,
"headers":{},
"messageInfo":{"message":"Message short enough. Displayed top level"}
}
Currently, in Dataflow, we instead see a format that is not nearly as useful for tracing the customer request through systems. Thanks for any ideas on modifying Dataflow logging.
I don't think you can change how Dataflow logs to Cloud Logging.
Instead, you can change how and what you log and let Dataflow pass it through to Cloud Logging. See Logging pipeline messages.
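For that first option, a rough sketch of what it looks like inside a pipeline (the DoFn and the field names are illustrative, not from the question); any context such as a customerRequestId has to be embedded in the message text itself, because Dataflow controls the rest of the entry:

import org.apache.beam.sdk.transforms.DoFn;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative DoFn: details like customerRequestId can only reach
// Cloud Logging here as part of the message string.
public class LoggingDoFn extends DoFn<String, String> {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingDoFn.class);

    @ProcessElement
    public void processElement(ProcessContext c) {
        LOG.info("customerRequestId={} payload={}", "req-123", c.element());
        c.output(c.element());
    }
}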
Or you can use the Cloud Logging client libraries in your pipeline directly: https://cloud.google.com/logging/docs/reference/libraries.
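For that second option, a minimal sketch using the google-cloud-logging Java client (the log name and the payload fields are placeholders); writing the entries yourself gives you a structured payload rather than a flat message:

import java.util.Collections;
import java.util.Map;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.JsonPayload;
import com.google.cloud.logging.Severity;

public class DirectCloudLogging {
    public static void main(String[] args) throws Exception {
        // Uses application-default credentials to write one structured entry.
        try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
            LogEntry entry = LogEntry.newBuilder(JsonPayload.of(Map.of(
                            "message", "Template successfully created",
                            "clientId", "client-123",         // placeholder field
                            "customerRequestId", "req-456"))) // placeholder field
                    .setSeverity(Severity.INFO)
                    .setLogName("my-dataflow-log")            // placeholder log name
                    .build();
            logging.write(Collections.singleton(entry));
        }
    }
}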
Please take a look at How to override Google DataFlow logging with logback? for the latest version of this answer.
I copied the current answer there to make it easier for folks who want to look:
Dataflow relies on java.util.logging (a.k.a. JUL) as the logging backend for SLF4J and adds various bridges ensuring that logs from other libraries are output as well. With this kind of setup, we are limited to adding any additional details to the log message itself.
This also applies to any runner executing a portable job, since the container with the SDK harness has a similar logging configuration, for example Dataflow Runner V2.
To do this we want to create a custom formatter to apply to the root JUL logger. For example:
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

public class CustomFormatter extends SimpleFormatter {
    @Override
    public String formatMessage(LogRecord record) {
        // implement whatever logic is needed to add details to the message portion of the log statement
        return super.formatMessage(record);
    }
}
And then during start-up of the worker we need to update the root logger to use this formatter. We can achieve this using a JvmInitializer and implement the beforeProcessing method like so:
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;

import com.google.auto.service.AutoService;
import org.apache.beam.sdk.harness.JvmInitializer;
import org.apache.beam.sdk.options.PipelineOptions;

@AutoService(JvmInitializer.class)
public class LoggerInitializer implements JvmInitializer {
    @Override
    public void beforeProcessing(PipelineOptions options) {
        LogManager logManager = LogManager.getLogManager();
        Logger rootLogger = logManager.getLogger("");
        for (Handler handler : rootLogger.getHandlers()) {
            handler.setFormatter(new CustomFormatter());
        }
    }
}
Is it possible to define/specify a runner when starting tests from Cucumber's command line (cucumber.api.cli.Main)?
My reason for this is so I can generate XML reports in Jenkins and push the results to ALM Octane.
I kind of inherited this project, and it is using Gradle to do a javaexec and call cucumber.api.cli.Main.
I know it is possible to do this with @RunWith(OctaneCucumber.class) when using the JUnit runner + Maven (or only the JUnit runner); otherwise that tag is ignored. I have the custom runner with that tag, but when I run from cucumber.api.cli.Main I can't find a way to run with it, and my tag just gets ignored.
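For context, the JUnit-runner variant referred to above looks roughly like this (the class name and paths are illustrative, and the exact import for OctaneCucumber depends on the octane-cucumber-jvm version you use):

import cucumber.api.CucumberOptions;
import org.junit.runner.RunWith;
import com.hpe.alm.octane.OctaneCucumber; // assumed package; check your dependency

// Illustrative runner class; feature and glue paths are placeholders.
@RunWith(OctaneCucumber.class)
@CucumberOptions(features = "src/test/resources/features", glue = "com.example.steps")
public class OctaneRunnerTest {
}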
What @Grasshopper suggested didn't exactly work, but it made me look in the right direction.
Instead of adding the code as a plugin, I managed to "hack/load" the Octane reporter by creating a copy of cucumber.api.cli.Main, using it as a base to run the CLI commands, changing the run method a bit, and adding the plugin at runtime. I needed to do this because the plugin requires quite a few parameters in its constructor. It might not be the perfect solution, but it allowed me to keep the Gradle build process I initially had.
public static byte run(String[] argv, ClassLoader classLoader) throws IOException {
    RuntimeOptions runtimeOptions = new RuntimeOptions(new ArrayList<String>(asList(argv)));
    ResourceLoader resourceLoader = new MultiLoader(classLoader);
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);

    // ==================== Added the following lines ====================
    // Hardcoded runner(?) class. If it's changed, it will need to be changed here also.
    OutputFile outputFile = new OutputFile(Main.class);
    runtimeOptions.addPlugin(new HPEAlmOctaneGherkinFormatter(resourceLoader, runtimeOptions.getFeaturePaths(), outputFile));
    // ===================================================================

    runtime.run();
    return runtime.exitStatus();
}
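The modified class can keep the same entry point as the original cucumber.api.cli.Main, so the existing Gradle javaexec call does not need to change; a minimal sketch of the accompanying main method:

public static void main(String[] argv) throws IOException {
    // Delegates to the customized run() above, mirroring cucumber.api.cli.Main.
    byte exitStatus = run(argv, Thread.currentThread().getContextClassLoader());
    System.exit(exitStatus);
}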
Problem statement:
How do I execute a function after all specification files have been executed, using the Spock framework?
Explanation: I am using the Geb-Spock framework for automation.
I have a few specification files. I want to run a function after all specification files have been executed.
I want something like @AfterSuite in TestNG. How can I get the equivalent of @AfterSuite in Spock? cleanupSpec is called after every single specification file is executed.
Thanks,
Debasish
The simple answer is: no. There's nothing like before-suite or after-suite methods in Spock, since Spock is JUnit based and JUnit does not handle such methods. If you use a tool like Maven or Gradle, maybe you can use the task lifecycle methods.
You can use the JUnit 4 Suite runner; look at this answer:
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({ TestSpec.class, TestSpec.class })
public class CompleteTestSuite {

    @BeforeClass
    public static void setUpClass() {
        System.out.println("Master setup");
    }

    @AfterClass
    public static void tearDownClass() {
        System.out.println("Master tearDown");
    }
}
If you are using a build tool, for example Gradle, you may wire it in at the build configuration level, after the tests are finished.
I use the OWASP Dependency Check from its Ant task (no Gradle support yet) like this:
task checkDependencies() {
    ant.taskdef(name: 'checkDependencies',
        classname: 'org.owasp.dependencycheck.taskdefs.DependencyCheckTask',
        classpath: 'scripts/dependency-check-ant-1.2.5.jar')
    ant.checkDependencies(applicationname: "MyProject",
        reportoutputdirectory: "generated",
        dataDirectory: "generated/dependency-check-cache") {
        fileset(dir: 'WebContent/WEB-INF/lib') {
            include(name: '**.jar')
        }
    }
}
This works way too well. Even though nothing declares this Ant task as a dependency (neither in Ant nor in Gradle), it is always executed first, even for a simple gradlew tasks invocation. Why is that, and how can I avoid it? (The dependency check is quite slow.)
This is a very common confusion with Gradle. In your example above, you are executing the Ant task during project configuration, when what you really intended was for it to run during task execution. To fix this, your execution logic should be placed within a task action, either by using a doLast {...} configuration block or by using the left-shift (<<) operator.
task checkDependencies << {
    // put your execution logic here
}
See the Gradle docs for more information about the Gradle build lifecycle.
In Grails 2.0.4, I'm trying to write a controller unit test which invokes the static SpringSecurityUtils.reauthenticate. The test throws a NullPointerException on that invocation. In a debugger, I can see that none of the Groovy dynamic properties (declaredMethods, etc.) of SpringSecurityUtils are populated.
I do note that when running the tests, the "Configuring Spring Security Core" log message is emitted after the unit-test failure. Here is a sample test:
class ReproTest {
    void testSpringSecurityUtils() {
        String.valueOf(true)                            // OK: a public final class from the JDK
        URLUtils.isRelativeURL("foo")                   // OK: a class from another plugin
        SpringSecurityUtils.reauthenticate "user", "pw" // fails, NPE
    }
}
My initial reaction is that maybe plugins aren't accessible during unit tests, but if so, why is the URLUtils call working? And why does the test get "far enough" to initialize the plugin, but after the tests have completed?
For a unit test, the container isn't started; no Spring injection or "Grails goodness" happens. You see in the logs that the plugin initializes after the unit tests run because the container does start for the integration tests. If you want to test SpringSecurityUtils, although it is presumably already tested properly in the plugin itself, you should write an integration test.