com.qmetry.qaf.automation.step.client.ScenarioFactory.getTestsFromFile() threw an exception - qaf

Followed steps outlined here: https://qmetry.github.io/qaf/qaf-2.1.14/gherkin_client.html
<test name="Gherkin-QAF-Test">
    <parameter name="step.provider.pkg" value="com.cucumber.steps" />
    <parameter name="scenario.file.loc" value="resources/features/component/test/smoke.feature" />
    <classes>
        <class name="com.qmetry.qaf.automation.step.client.gherkin.GherkinScenarioFactory" />
    </classes>
</test>
Added feature file:
#Web
Feature: Google Search

  #Smoke
  Scenario: Search InfoStrech
    Given I am on Google Search Page
    When I search for "git qmetry"
    Then I get at least 5 results
    And it should have "QMetry Automation Framework" in search results
Added the step definitions in Java:
@QAFTestStep(description = "I am on Google Search Page")
public void step1() {
    System.out.println("I am on Google Search Page");
}

@QAFTestStep(description = "I search for {0}")
public void iSearchFor(String s) {
    System.out.println("I search for " + s);
}

@QAFTestStep(description = "I get at least {num} results")
public void iGet_inSearchResults(Integer n) {
    System.out.printf("I get at least %d results\n", n);
}

@QAFTestStep(description = "it should have {0} in search results")
public void itShouldHave_inSearchResults(String s) {
    System.out.printf("it should have %s in search results\n", s);
}
Ran the XML file with TestNG and got the error below:
The factory method class com.qmetry.qaf.automation.step.client.ScenarioFactory.getTestsFromFile() threw an exception
org.testng.TestNGException:
The factory method class com.qmetry.qaf.automation.step.client.ScenarioFactory.getTestsFromFile() threw an exception
at org.testng.internal.FactoryMethod.invoke(FactoryMethod.java:197)
at org.testng.internal.TestNGClassFinder.processFactory(TestNGClassFinder.java:223)
at org.testng.internal.TestNGClassFinder.processMethod(TestNGClassFinder.java:179)
at org.testng.internal.TestNGClassFinder.processClass(TestNGClassFinder.java:171)
at org.testng.internal.TestNGClassFinder.<init>(TestNGClassFinder.java:121)
at org.testng.TestRunner.initMethods(TestRunner.java:370)
at org.testng.TestRunner.init(TestRunner.java:271)
at org.testng.TestRunner.init(TestRunner.java:241)
at org.testng.TestRunner.<init>(TestRunner.java:192)
at org.testng.remote.support.RemoteTestNG6_12$1.newTestRunner(RemoteTestNG6_12.java:33)
at org.testng.remote.support.RemoteTestNG6_12$DelegatingTestRunnerFactory.newTestRunner(RemoteTestNG6_12.java:66)
at org.testng.SuiteRunner$ProxyTestRunnerFactory.newTestRunner(SuiteRunner.java:713)
at org.testng.SuiteRunner.init(SuiteRunner.java:260)
at org.testng.SuiteRunner.<init>(SuiteRunner.java:198)
at org.testng.TestNG.createSuiteRunner(TestNG.java:1295)
at org.testng.TestNG.createSuiteRunners(TestNG.java:1273)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1128)
at org.testng.TestNG.runSuites(TestNG.java:1049)
at org.testng.TestNG.run(TestNG.java:1017)
at org.testng.remote.AbstractRemoteTestNG.run(AbstractRemoteTestNG.java:114)
at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:251)
at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:77)
Caused by: java.lang.IllegalArgumentException: wrong number of arguments
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.FactoryMethod.invoke(FactoryMethod.java:167)
... 21 more
Something else I've noticed: after an update to "Scenario" in the feature file, next to each line of conditions I see a warning with the text:
Step '********' does not have a matching glue code

To answer the original question: the easiest way is to start from the QAF Maven or Ant project template.
Regarding your question in the comment about using TestNG annotations: when using BDD you can move that code into the respective TestNG listener. For example, a method annotated with @BeforeSuite/@AfterSuite can be moved into a suite listener, and @BeforeMethod/@AfterMethod code can be moved into a method-invocation listener.
When you are using QAF you may not need much driver-management code, because QAF provides built-in thread-safe driver and resource management. You can take advantage of that through the driver and element listeners and the locator-repository feature. It is highly configurable; for example, you can set the property selenium.singletone to specify the driver-instance scope. Possible values are Tests (one driver per TestNG XML test), Methods (one per test method), or Groups.
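As a sketch, that scope property sits in the project's QAF properties file alongside the step-provider package (the property names are taken from the text above; the file name application.properties is the usual QAF convention and is assumed here):

```properties
# package(s) to scan for @QAFTestStep definitions
step.provider.pkg=com.cucumber.steps
# driver instance scope: Tests | Methods | Groups
selenium.singletone=Tests
```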

Related

No such field found: field java.lang.String sinput error when accessing cppcheck plugin classes

I am a junior dev trying to learn about Jenkins; I have been learning on my own for a couple of months. Currently I have a pipeline (just for learning purposes) which runs static analysis on a folder and then publishes it. I have been able to send a report through email using Jelly templates, and from there I realized it is possible to instantiate the classes of a plugin to use its methods. So I went to the cppcheck javadoc and did some trial and error to get some values from my report and then do something else with them, so I had something like this in my pipeline:
pipeline {
    agent any
    stages {
        stage('analysis') {
            steps {
                script {
                    bat 'cppcheck "E:/My_project/Source/" --xml --xml-version=2 . 2> cppcheck.xml'
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    publishCppcheck pattern: 'cppcheck.xml'
                    for (action in currentBuild.rawBuild.getActions()) {
                        def name = action.getClass().getName()
                        if (name == 'org.jenkinsci.plugins.cppcheck.CppcheckBuildAction') {
                            def cppcheckaction = action
                            def totalErrors = cppcheckaction.getResult().report.getNumberTotal()
                            println totalErrors
                            def warnings = cppcheckaction.getResult().statistics.getNumberWarningSeverity()
                            println warnings
                        }
                    }
                }
            }
        }
    }
}
The output is:
[Pipeline] echo
102
[Pipeline] echo
4
My logic (wrongly) tells me that if I can access the report and statistics classes like that and use their methods getNumberTotal() and getNumberWarningSeverity() respectively, then I should also be able to access the DiffState class the same way and use the valueOf() method to get an enum of the new errors. But adding this to my pipeline:
def nueva = cppcheckaction.getResult().diffState.valueOf(NEW)
println nueva
Gives me an error:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: No such field found: field org.jenkinsci.plugins.cppcheck.CppcheckBuildAction diffState
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.unclassifiedField(SandboxInterceptor.java:425)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:409)
...
I can see in the javadoc that there is a DiffState class with a valueOf() method, but I cannot access it. Is there any other way to get the new errors between the last build and the current one?
I see 2 issues that could be causing this:
CppcheckResult doesn't have a member variable diffState, so you obviously can't access it.
If you check the javadoc of CppcheckResult the class does have:
private CppcheckReport report;
public CppcheckReport getReport()
and
private CppcheckStatistics statistics;
public CppcheckStatistics getStatistics()
There is no member (and no getter method) for diffState, so maybe try to call:
/**
 * Get differences between current and previous statistics.
 *
 * @return the differences
 */
public CppcheckStatistics getDiff() {
My suggestion: cppcheckaction.getResult().getDiff().valueOf(NEW). Furthermore, CppcheckWorkspaceFile does have a method getDiffState().
Please have a look at the script approval of your Jenkins (see here).
The error might appear because Jenkins (the Groovy sandbox) blocks the execution of a method that is "unknown" to Jenkins and potentially dangerous.
Jenkins settings - Script Approval - Approve your blocked method

Python execution via the Polyglot API always needs full System IO access

I am trying to execute a very straightforward piece of Python code via the polyglot API. Even when trying to run simple tests such as the following:
@Test
public void printSomTextPythonTest() throws ScriptExecutionException, IOException {
    String code = "print('Python will print this to the console!')";
    String[] supportedLangs = { "js", "python", "R" };
    Context testContext = Context.newBuilder(supportedLangs)
            .allowAllAccess(false)
            .allowHostAccess(false)
            .allowHostClassLoading(false)
            .allowIO(false)
            .allowNativeAccess(false)
            .allowCreateThread(false)
            .build();
    Source source = Source.newBuilder("python", code, "pyScript").build();
    Value result = testContext.eval(source);
    testContext.close();
}
Or:
@Test
public void setAVariablePythonTest() throws ScriptExecutionException, IOException {
    String code = "someNumber = 11";
    String[] supportedLangs = { "js", "python", "R" };
    Context testContext = Context.newBuilder(supportedLangs)
            .allowAllAccess(false)
            .allowHostAccess(false)
            .allowHostClassLoading(false)
            .allowIO(false)
            .allowNativeAccess(false)
            .allowCreateThread(false)
            .build();
    Source source = Source.newBuilder("python", code, "pyScript").build();
    Value result = testContext.eval(source);
    testContext.close();
}
I get this error (stack trace below):
org.graalvm.polyglot.PolyglotException: java.lang.SecurityException: Operation is not allowed for: /code/polyglot-test
If I change allowIO to true when building the Context, the code runs fine and gives the expected result. I have also tried it with more complex code with the same results.
Why is IO access necessary for Python code to be executed?
Equivalent code written and executed in JS does not need allowIO to be set to true, so it seems to be a Python-specific thing.
Thanks for your help.
UPDATE
I have been testing with R as a guest language too. Running the following test:
@Test
public void helloWorldRTest() throws ScriptExecutionException, IOException {
    String code = "print(\"R will print this to the console!\")";
    String[] supportedLangs = { "js", "python", "R" };
    Context testContext = Context.newBuilder(supportedLangs)
            .allowAllAccess(false)
            .allowHostAccess(false)
            .allowHostClassLoading(false)
            .allowIO(false)
            .allowNativeAccess(false)
            .allowCreateThread(false)
            .build();
    Source source = Source.newBuilder("R", code, "rScript").build();
    Value result = testContext.eval(source);
    testContext.close();
}
I get the following error:
FastR unexpected failure: error loading libR from: /Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/R/lib/libR.dylib.
If running on the NFI backend, did you provide the location of libtrufflenfi.so as the value of the system property 'truffle.nfi.library'?
The current value is 'null'.
Is the OpenMP runtime library (libgomp.so) present on your system? This library is, e.g., typically part of the GCC package.
Details: Access to native code is not allowed by the host environment.
However, by setting "allowNativeAccess" to true, the code runs fine without errors.
Do different languages need different types of access privileges to run?
In my use case I am trying to sandbox the execution as much as possible. I want the scripts, regardless of the language they are written in to only have access to the data that is given to them.
Scripts running in the guest languages should not have any access to the host system. Is this achievable?
Partial stack trace (if needed for debugging I can supply the full stack trace):
org.graalvm.polyglot.PolyglotException: java.lang.SecurityException: Operation is not allowed for: /code/polyglot-test
at com.oracle.truffle.api.vm.FileSystems$DeniedIOFileSystem.forbidden(FileSystems.java:489)
at com.oracle.truffle.api.vm.FileSystems$DeniedIOFileSystem.checkAccess(FileSystems.java:367)
at com.oracle.truffle.api.TruffleFile.checkAccess(TruffleFile.java:983)
at com.oracle.truffle.api.TruffleFile.exists(TruffleFile.java:102)
at com.oracle.graal.python.builtins.modules.PosixModuleBuiltins$StatNode.stat(PosixModuleBuiltins.java:404)
at com.oracle.graal.python.builtins.modules.PosixModuleBuiltins$StatNode.doStat(PosixModuleBuiltins.java:397)
at com.oracle.graal.python.builtins.modules.PosixModuleBuiltinsFactory$StatNodeFactory$StatNodeGen.executeAndSpecialize(PosixModuleBuiltinsFactory.java:855)
at com.oracle.graal.python.builtins.modules.PosixModuleBuiltinsFactory$StatNodeFactory$StatNodeGen.execute(PosixModuleBuiltinsFactory.java:807)
at com.oracle.graal.python.nodes.function.BuiltinFunctionRootNode$BuiltinBinaryCallNode.execute(BuiltinFunctionRootNode.java:103)
at com.oracle.graal.python.nodes.function.BuiltinFunctionRootNode.execute(BuiltinFunctionRootNode.java:229)
at stat(Unknown)
at stat(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-graalpython/posix.py:51:2230-2247)
at _path_stat(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:82:2759-2772)
at _path_is_mode_type(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:88:2901-2916)
at _path_isdir(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:103:3245-3278)
at path_hook_for_FileFinder(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:1333:50397-50413)
at PathFinder._path_hooks(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:1083:40517-40526)
at PathFinder._path_importer_cache(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:1107:41279-41299)
at PathFinder._get_spec(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:1135:42391-42421)
at PathFinder.find_spec(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap_external.py:1166:43686-43722)
at _find_spec(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap.py:892:28932-28960)
at _find_and_load_unlocked(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap.py:953:31192-31213)
at _find_and_load(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap.py:968:31701-31738)
at _gcd_import(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap.py:985:32285-32317)
at import(../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-python/3/importlib/_bootstrap.py:1066:35366-35382)
at (../../../../../Library/Java/JavaVirtualMachines/graalvm-ee-1.0.0-rc4/Contents/Home/jre/languages/python/lib-graalpython/builtins_patches.py:48:2220-2224)
at org.graalvm.polyglot.Context.eval(Context.java:313)
at bolt.tests.BoltEngineGraalTest.helloWorldPythonTest(BoltEngineGraalTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
Original Internal Error:
java.lang.SecurityException: Operation is not allowed for: /code/polyglot-test
at com.oracle.truffle.api.vm.FileSystems$DeniedIOFileSystem.forbidden(FileSystems.java:489)
at com.oracle.truffle.api.vm.FileSystems$DeniedIOFileSystem.checkAccess(FileSystems.java:367)
...

CombineFn Dataflow - Step not in order, creating null pointer

I am a newbie in Dataflow, so please pardon me if I made a newbie mistake.
I recently started using Dataflow/Beam to process data from Pub/Sub, with cloud-dataflow-nyc-taxi-tycoon as a starting point, but I upgraded it to SDK 2.2.0 to make it work with Bigtable. I simulate it using an HTTP Cloud Function that sends a single data point to Pub/Sub so the Dataflow job can ingest it, using the code below:
.apply("session windows on rides with early firings",
        Window.<KV<String, TableRow>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
                .accumulatingFiredPanes()
                .withAllowedLateness(Duration.ZERO))
.apply("group by", Combine.perKey(new LatestPointCombine()))
.apply("prepare to big table",
        MapElements.via(new SimpleFunction<KV<String, TableRow>, TableRow>() {
            @Override
            public TableRow apply(KV<String, TableRow> input) {
                TableRow tableRow = null;
                try {
                    tableRow = input.getValue();
                    ....
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
                return tableRow;
            }
        }))
.apply....
But it gives me an error at the "group by"/CombineFn phase, after "session windows on rides with early firings". Here are the logs from Stackdriver:
1. I create accumulator
2. I addinput
3. I mergeaccumulators
4. I extractoutput
5. I pre mutation_transform
6. W Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
7. I mergeaccumulators
8. I create accumulator
9. E Uncaught exception:
10. E Execution of work for S0 for key db2871226f7cec594ebd976e6758ac7e failed. Will retry locally.
11. I Memory is used/total/max = 105/365/4949 MB, GC last/max = 1.00/1.00 %, #pushbacks=0, gc thrashing=false
12. I create accumulator
13. I addinput
14. I mergeaccumulators
15. I extractoutput
16. I pre mutation_transform
17. I mergeaccumulators
18. I create accumulator
19. E Uncaught exception:
20. E Execution of work for S0 for key db2871226f7cec594ebd976e6758ac7e failed. Will retry locally.
21. I create accumulator
...
My questions are:
A. What I don't understand is: after step 4 (extract output), why is the Dataflow mergeAccumulators method called first (line 7) and only later createAccumulator (line 8)? Here is the mergeAccumulators method I wrote:
public RidePoint mergeAccumulators(Iterable<RidePoint> latestList) {
    // RidePoint merged = createAccumulator();
    RidePoint merged = new RidePoint();
    LOG.info("mergeaccumulators");
    for (RidePoint latest : latestList) {
        if (latest == null) {
            LOG.info("latestnull");
        } else if (merged.rideId == null || latest.timestamp > merged.timestamp) {
            LOG.info(latest.timestamp + " latest " + latest.rideId);
            merged = new RidePoint(latest);
        }
    }
    return merged;
}
B. It seems the data is null, and I don't know what caused it, but it reaches the end of the pipeline. The "session windows on rides with early firings" step shows 1 element added, but below that, the "group by" phase ... gives 52 elements added.
The detailed uncaught exception shown in the log looks like this:
(90c7caea3f5d5ad4): java.lang.NullPointerException: in com.google.codelabs.dataflow.utils.RidePoint in string null of string in field status of com.google.codelabs.dataflow.utils.RidePoint
org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:161)
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:62)
org.apache.beam.sdk.coders.AvroCoder.encode(AvroCoder.java:308)
org.apache.beam.sdk.coders.Coder.encode(Coder.java:143)
com.google.cloud.dataflow.worker.WindmillStateInternals$WindmillBag.persistDirectly(WindmillStateInternals.java:575)
com.google.cloud.dataflow.worker.WindmillStateInternals$SimpleWindmillState.persist(WindmillStateInternals.java:320)
com.google.cloud.dataflow.worker.WindmillStateInternals$WindmillCombiningState.persist(WindmillStateInternals.java:952)
com.google.cloud.dataflow.worker.WindmillStateInternals.persist(WindmillStateInternals.java:216)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.flushState(StreamingModeExecutionContext.java:513)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext.flushState(StreamingModeExecutionContext.java:363)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1071)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:133)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$8.run(StreamingDataflowWorker.java:841)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
org.apache.avro.specific.SpecificDatumWriter.writeString(SpecificDatumWriter.java:67)
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:128)
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75)
org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:159)
org.apache.avro.generic.GenericDatumWriter.writeField(GenericDatumWriter.java:166)
org.apache.avro.specific.SpecificDatumWriter.writeField(SpecificDatumWriter.java:90)
org.apache.avro.reflect.ReflectDatumWriter.writeField(ReflectDatumWriter.java:191)
org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:156)
org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:118)
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:75)
org.apache.avro.reflect.ReflectDatumWriter.write(ReflectDatumWriter.java:159)
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:62)
org.apache.beam.sdk.coders.AvroCoder.encode(AvroCoder.java:308)
org.apache.beam.sdk.coders.Coder.encode(Coder.java:143)
com.google.cloud.dataflow.worker.WindmillStateInternals$WindmillBag.persistDirectly(WindmillStateInternals.java:575)
com.google.cloud.dataflow.worker.WindmillStateInternals$SimpleWindmillState.persist(WindmillStateInternals.java:320)
com.google.cloud.dataflow.worker.WindmillStateInternals$WindmillCombiningState.persist(WindmillStateInternals.java:952)
com.google.cloud.dataflow.worker.WindmillStateInternals.persist(WindmillStateInternals.java:216)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.flushState(StreamingModeExecutionContext.java:513)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext.flushState(StreamingModeExecutionContext.java:363)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1071)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:133)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$8.run(StreamingDataflowWorker.java:841)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Your question has many parts to it. But to start, here are some recommendations for debugging:
Don't swallow exceptions. Currently in your "prepare to big table" logic you have: catch (Exception ex){ ex.printStackTrace(); }. This hides the exception and causes null elements to be returned from the function. It's better to understand and fix the exception here rather than dealing with invalid data later.
Validate with the DirectRunner first. Make sure that your pipeline runs correctly on your machine using the Beam DirectRunner. This is the easiest way to understand and fix issues with the Beam model. You can run from the commandline or your favorite IDE and debugger. Then if your pipeline works on the DirectRunner but not on Dataflow, you know that there is a Dataflow-specific issue.
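The point about not swallowing exceptions can be sketched in plain Java. This is an illustration only, not the original pipeline code; the convert methods and their names are hypothetical:

```java
public class FailFast {
    // Anti-pattern: catching and logging the exception hides the failure
    // and pushes a null element into the next stage of processing.
    static String badConvert(String input) {
        try {
            return input.trim().toUpperCase();
        } catch (Exception ex) {
            ex.printStackTrace();
            return null; // null silently flows downstream
        }
    }

    // Better: fail fast with context, so the bad element is caught
    // where it occurs instead of blowing up later during encoding.
    static String goodConvert(String input) {
        if (input == null) {
            throw new IllegalArgumentException("null element reached the convert step");
        }
        return input.trim().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(badConvert(null));   // prints a stack trace, then: null
        System.out.println(goodConvert("ok"));  // prints: OK
    }
}
```

With the fail-fast version, a null element surfaces at the step that produced it rather than as an opaque AvroCoder NullPointerException at persist time.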
To touch on your specific questions:
A. What I dont understand is after step 4, (extract output), why the
dataflow mergeaccumulator method called first (line 7.) and later on
the create accumulator were called (line 8.)
Your code uses Combine.perKey, which will group elements by key value. So each unique key will cause an accumulator to be created. Dataflow also applies a set of optimizations which can parallelize and reorder independent operations, which could explain what you're seeing.
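That lifecycle can be sketched with a stdlib-only analogue of the Combine.perKey contract. This is not Beam's actual CombineFn API, and the simplified RidePoint (only rideId and timestamp) is hypothetical; the point is that mergeAccumulators must tolerate empty accumulators and arbitrary ordering:

```java
import java.util.Arrays;
import java.util.List;

public class LatestPointSketch {
    static class RidePoint {
        String rideId;
        long timestamp;
        RidePoint() {}                        // empty accumulator: rideId == null
        RidePoint(String id, long ts) { rideId = id; timestamp = ts; }
    }

    // The runner may create empty accumulators at any time, including
    // after an earlier pane was already extracted.
    static RidePoint createAccumulator() { return new RidePoint(); }

    // Keep the later of the accumulator and the new input.
    static RidePoint addInput(RidePoint acc, RidePoint input) {
        return (acc.rideId == null || input.timestamp > acc.timestamp) ? input : acc;
    }

    // Must handle empty accumulators and any merge order.
    static RidePoint mergeAccumulators(List<RidePoint> accs) {
        RidePoint merged = createAccumulator();
        for (RidePoint a : accs) {
            if (a.rideId != null && (merged.rideId == null || a.timestamp > merged.timestamp)) {
                merged = a;
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        RidePoint a = addInput(createAccumulator(), new RidePoint("r1", 10));
        RidePoint b = addInput(createAccumulator(), new RidePoint("r1", 20));
        // Merging in either order, even with an extra empty accumulator,
        // yields the same result: the latest point wins.
        System.out.println(mergeAccumulators(Arrays.asList(a, b)).timestamp);                       // 20
        System.out.println(mergeAccumulators(Arrays.asList(b, a, createAccumulator())).timestamp);  // 20
    }
}
```

Because the runner is free to call these in any interleaving, a combiner is only correct if it is commutative and associative and treats the empty accumulator as a neutral element.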
B. It seems the data is null, and I don't know what caused it
The null values are likely those that hit an exception in your prepare to big table logic.
I'm not exactly sure what you mean with the output counts because I don't quite understand your pipeline topology. For example, your LatestPointCombine logic seems to output type RidePoint, but the "prepare to big table" function takes in a String. If you are still having trouble after following the suggestions above, you can post a Dataflow job_id and I can help investigate further.

TestNG + Cucumber JVM parallel execution

I'm trying to run our Cucumber JVM tests in a few threads in parallel.
I'm using the standard TestNG approach to do it (via a suite XML file).
My XML file is:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="BDD" parallel="methods" thread-count="3" data-provider-thread-count="3">
    <test name="BDD">
        <classes>
            <class name="com.tests.bdd.SimpleBDDTests"></class>
        </classes>
    </test>
</suite>
My test class is:
@CucumberOptions(features = "src/test/java/com/tests/bdd/simpleFeatures")
public class SimpleBDDTests {
    private TestNGCucumberRunner tcr;

    @BeforeClass(alwaysRun = true)
    public void beforeClass() throws Exception {
        tcr = new TestNGCucumberRunner(this.getClass());
    }

    @AfterClass(alwaysRun = true)
    public void afterClass() {
        tcr.finish();
    }

    @Test(dataProvider = "features")
    public void feature(CucumberFeatureWrapper cucumberFeature) {
        tcr.runCucumber(cucumberFeature.getCucumberFeature());
    }

    @DataProvider(parallel = true)
    public Object[][] features() {
        return tcr.provideFeatures();
    }
}
My feature files are like:
Feature: First test

  #sanity
  Scenario: First simple test
    Given Base check step
I have 4 feature files, each defining the same scenario with only one step: Given Base check step.
When these features are executed one by one, everything works fine, but when I try to run them in parallel, everything breaks.
Almost all of these features are marked as failed with the following exception:
A scoping block is already in progress
java.lang.IllegalStateException: A scoping block is already in progress
at cucumber.runtime.java.guice.impl.SequentialScenarioScope.checkState(SequentialScenarioScope.java:64)
at cucumber.runtime.java.guice.impl.SequentialScenarioScope.enterScope(SequentialScenarioScope.java:52)
at cucumber.runtime.java.guice.impl.GuiceFactory.start(GuiceFactory.java:34)
at cucumber.runtime.java.JavaBackend.buildWorld(JavaBackend.java:123)
at cucumber.runtime.Runtime.buildBackendWorlds(Runtime.java:141)
at cucumber.runtime.model.CucumberScenario.run(CucumberScenario.java:38)
at cucumber.runtime.model.CucumberFeature.run(CucumberFeature.java:165)
at cucumber.api.testng.TestNGCucumberRunner.runCucumber(TestNGCucumberRunner.java:63)
I understand that this might happen because of multi-threaded calls to the same step: Given Base check step.
So my question is: how can I fix that? How can I run these tests in parallel?
PS: I know that it should be possible to do it with JUnit + the Maven Surefire plugin, but that is not applicable to the current project; we need to achieve that goal with TestNG.
Thanks.

Grails + CXF + secureServiceFactory

When I try to run this script to secure my web services on the Grails/CXF client, I get
"Cannot invoke method getInInterceptors() on null object" on secureServiceFactory.
Does secureServiceFactory need to be set somewhere else?
Any ideas?
Code:
class BootStrap {
    def secureServiceFactory

    def init = { servletContext ->
        Map<String, Object> inProps = [:]
        inProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.USERNAME_TOKEN);
        inProps.put(WSHandlerConstants.PASSWORD_TYPE, WSConstants.PW_TEXT);

        Map<QName, Validator> validatorMap = new HashMap<QName, Validator>();
        validatorMap.put(WSSecurityEngine.USERNAME_TOKEN, new UsernameTokenValidator() {
            @Override
            protected void verifyPlaintextPassword(org.apache.ws.security.message.token.UsernameToken usernameToken,
                    org.apache.ws.security.handler.RequestData data)
                    throws org.apache.ws.security.WSSecurityException {
                if (data.username == "wsuser" && usernameToken.password == "secret") {
                    println "username and password are correct!"
                } else {
                    println "username and password are NOT correct..."
                    throw new WSSecurityException("user and/or password mismatch")
                }
            }
        });
        inProps.put(WSS4JInInterceptor.VALIDATOR_MAP, validatorMap);

        secureServiceFactory.getInInterceptors().add(new WSS4JInInterceptor(inProps))
    }
}
Not sure this is a complete answer, but I receive the same errors. My understanding is that the CXF plugin is meant to wire up service factories matching the name of your exposed service. I have verified that out of the box, running the grails-cxf plugin with grails run-app, the application works. However, executing grails war on the project creates a war that, when deployed to tc Server [vfabric-tc-server-developer-2.9.4.RELEASE] with Tomcat 7 [tomcat-7.0.47.A.RELEASE], produces this error.
It is also useful to note that, as the plugin author has mentioned elsewhere [http://www.christianoestreich.com/2012/04/grails-cxf-interceptor-injection/], the generated war won't work out of the box unless you change test('org.apache.ws.security:wss4j:1.6.7') to compile('org.apache.ws.security:wss4j:1.6.7'). I was unable to make that version work; I had to use compile('org.apache.ws.security:wss4j:1.6.9').
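For reference, the dependency change described above would sit in grails-app/conf/BuildConfig.groovy, roughly like this (a sketch: only the wss4j coordinates are taken from the text; the surrounding block is the standard Grails BuildConfig layout):

```groovy
grails.project.dependency.resolution = {
    dependencies {
        // was: test('org.apache.ws.security:wss4j:1.6.7')
        compile('org.apache.ws.security:wss4j:1.6.9')
    }
}
```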
Unfortunately, after getting past this, I run into a third error when deploying the war, one that doesn't occur with grails run-app:
22-Aug-2014 11:46:05.062 SEVERE [tomcat-http--1] org.apache.catalina.core.StandardWrapperValve.invoke Allocate exception for servlet CxfServlet
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'cxf' is defined
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanDefinition(DefaultListableBeanFactory.java:641)
at org.springframework.beans.factory.support.AbstractBeanFactory.getMergedLocalBeanDefinition(AbstractBeanFactory.java:1159)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:282)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:273)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:979)
at org.apache.cxf.transport.servlet.CXFServlet.loadBus(CXFServlet.java:75)
I'll continue looking at it, but perhaps this war isn't really meant to be deployed and is more intended for development of the plugin itself. If that is the case, though, it would still be better for it to work in tc Server, because then we could leverage the code in our own projects with confidence.
