Exception retrieving results in Jena ResultSet

I'm having some problems when querying DBpedia through Jena. The exception is thrown while iterating over the ResultSet, in the nextSolution method. Here is the code:
ResultSet results = throwQuery(query);
ArrayList<Movies> movs = new ArrayList<Movies>();
while (results.hasNext()) {
    try {
        QuerySolution q = results.nextSolution();
        Movies m = new Movies();
        m.setUrl(q.get("film_url").toString());
        RDFNode node = q.get("film_label");
        // Set a default title
        String title = "";
        if (node != null) {
            // We delete the "#en" part that indicates that the label is in English
            title = node.toString();
            int ind = title.indexOf("#en");
            title = title.substring(0, ind);
        }
        m.setTitle(title);
        node = q.get("image_url");
        // Set a default image
        String image = "http://4.bp.blogspot.com/_rY0CJheAaRM/SuYJcVOqKbI/AAAAAAAAA2Y/abClDm72TuY/s320/NoCoverAvailable.png";
        if (node != null) {
            // For some reason the image link retrieved from DBpedia is
            // broken. Here we fix it
            image = node.toString();
            int ind = image.indexOf("common");
            image = image.substring(0, ind) + "en" + image.substring(ind + 7);
        }
        m.setImageurl(image);
        movs.add(m);
    } catch (Exception e) {
        System.err.println("Error caught: " + e.getMessage());
    }
}
return movs;
where throwQuery is:
private final static String SERVICE = "http://dbpedia.org/sparql";

private static ResultSet throwQuery(String q) {
    Query qFactory = QueryFactory.create(q);
    QueryExecution qe = QueryExecutionFactory.sparqlService(SERVICE, qFactory);
    ResultSet results = null;
    try {
        results = qe.execSelect();
    } catch (QueryExceptionHTTP e) {
        System.out.println(e.getMessage());
        System.out.println(SERVICE + " is DOWN");
    } finally {
        qe.close();
        return results;
    }
}
And the test query:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?film_label ?image_url ?film_url
WHERE {
    ?film_url rdf:type <http://dbpedia.org/ontology/Film> .
    OPTIONAL {
        ?film_url rdfs:label ?film_label
        FILTER (LANG(?film_label) = 'en')
    }
    OPTIONAL {
        ?film_url foaf:depiction ?image_url
    }
    FILTER regex(str(?film_url), "hola", "i")
}
ORDER BY ?film_url
When the program starts iterating, everything goes well until it reaches the value Nicholas Nickleby (2002 film); then I get this exception:
com.hp.hpl.jena.sparql.resultset.ResultSetException: XMLStreamException: Unexpected EOF in start tag
at [row,col {unknown-source}]: [67,116]
at com.hp.hpl.jena.sparql.resultset.XMLInputStAX$ResultSetStAX.staxError(XMLInputStAX.java:539)
at com.hp.hpl.jena.sparql.resultset.XMLInputStAX$ResultSetStAX.hasNext(XMLInputStAX.java:236)
at client.DBPediaConnector.getMovie(DBPediaConnector.java:67)
at customServices.MoviesService.searchInsertMovie(MoviesService.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.glassfish.ejb.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1052)
at org.glassfish.ejb.security.application.EJBSecurityManager.invoke(EJBSecurityManager.java:1124)
at com.sun.ejb.containers.BaseContainer.invokeBeanMethod(BaseContainer.java:5388)
at com.sun.ejb.EjbInvocation.invokeBeanMethod(EjbInvocation.java:619)
at com.sun.ejb.containers.interceptors.AroundInvokeChainImpl.invokeNext(InterceptorManager.java:800)
at com.sun.ejb.EjbInvocation.proceed(EjbInvocation.java:571)
at com.sun.ejb.containers.interceptors.SystemInterceptorProxy.doAround(SystemInterceptorProxy.java:162)
at com.sun.ejb.containers.interceptors.SystemInterceptorProxy.aroundInvoke(SystemInterceptorProxy.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.sun.ejb.containers.interceptors.AroundInvokeInterceptor.intercept(InterceptorManager.java:861)
at com.sun.ejb.containers.interceptors.AroundInvokeChainImpl.invokeNext(InterceptorManager.java:800)
at com.sun.ejb.containers.interceptors.InterceptorManager.intercept(InterceptorManager.java:370)
at com.sun.ejb.containers.BaseContainer.__intercept(BaseContainer.java:5360)
at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:5348)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:214)
... 47 more
Caused by: com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in start tag
at [row,col {unknown-source}]: [67,116]
at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:677)
at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:1034)
at com.ctc.wstx.sr.StreamScanner.getNextChar(StreamScanner.java:785)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2790)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1065)
at com.hp.hpl.jena.sparql.resultset.XMLInputStAX$ResultSetStAX.getOneSolution(XMLInputStAX.java:435)
at com.hp.hpl.jena.sparql.resultset.XMLInputStAX$ResultSetStAX.hasNext(XMLInputStAX.java:232)
... 71 more
This looks to me like an internal error in Jena, but I have no idea. Am I doing something wrong? How can I solve this?

Please give a complete, minimal example. This is quite long.
DBpedia is returning broken XML for the results, possibly because the query takes a long time to execute and a timeout is triggered. It seems to be a moderately slow query.
Try adding &timeout=60000 to the query URL (i.e. 'http://dbpedia.org/sparql?timeout=60000'), if your version of Jena is new enough. Even that may not be long enough: there is a hard internal limit on DBpedia which cannot be overridden.
Executing at a different time of day may also help.
It may also simply be that corrupt XML is being returned. Execute the query in the DBpedia web UI and request the XML results to check this.
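A sketch of throwQuery with that timeout applied, assuming an ARQ version that exposes QueryEngineHTTP.addParam. Copying the results before closing also means the method no longer returns a streaming ResultSet backed by an already-closed QueryExecution, which the original finally block does:
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFactory;
import com.hp.hpl.jena.sparql.engine.http.QueryEngineHTTP;
import com.hp.hpl.jena.sparql.engine.http.QueryExceptionHTTP;

private static ResultSet throwQuery(String q) {
    QueryEngineHTTP qe = new QueryEngineHTTP(SERVICE, q);
    qe.addParam("timeout", "60000"); // ask DBpedia for a longer server-side timeout
    try {
        // Materialize the whole stream here, so a truncated HTTP response
        // fails fast instead of surfacing later during iteration.
        return ResultSetFactory.copyResults(qe.execSelect());
    } catch (QueryExceptionHTTP e) {
        System.out.println(SERVICE + " is DOWN: " + e.getMessage());
        return null;
    } finally {
        qe.close();
    }
}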

Related

Google.Apis.YouTube.v3 compiling channel statistics fail "The uri string is too long"

I'm trying to compile channel statistics for a list of channels. For the first page it works, but when I request the next page using the token, I get an error saying the URI string is too long.
I'm using .NET Core 2.2 and Google.Apis.YouTube.v3 v1.39.0.1572. The code I use is really simple:
var youtubeService = new YouTubeService(new BaseClientService.Initializer()
{
    ApiKey = Startup.Configuration["YTConfigurations:ApiKey"],
    ApplicationName = this.GetType().ToString()
});
ChannelsResource.ListRequest channelsListRequest =
    youtubeService.Channels.List("snippet,statistics,brandingSettings,topicDetails");
channelsListRequest.Id = string.Join(",", channelEntities.Select(v => v.YTChannelId));
channelsListRequest.MaxResults = 50;
do
{
    pageCounter++;
    channelListResponse = channelsListRequest.Execute(); // here is the error after page 1
    foreach (var listResult in channelListResponse.Items)
    {
        channelEntity = channelEntities.FirstOrDefault(v => v.YTChannelId == listResult.Id);
        channelEntity = Mapper.Map(listResult, channelEntity);
        _repository.UpdateChannel(channelEntity);
    }
    if (channelListResponse.NextPageToken != null)
    {
        channelsListRequest.PageToken = channelListResponse.NextPageToken;
    }
} while (channelListResponse.Items.Count == 50 && channelListResponse.NextPageToken != null);
When I execute it, this is what I get:
System.UriFormatException: Invalid URI: The Uri string is too long.
at System.UriHelper.EscapeString(String input, Int32 start, Int32 end, Char[] dest, Int32& destPos, Boolean isUriString, Char force1, Char force2, Char rsvd)
at System.Uri.EscapeDataString(String stringToEscape)
at Google.Apis.Requests.RequestBuilder.<>c.<BuildUri>b__25_0(KeyValuePair`2 x) in C:\Apiary\2019-05-01.11-08-18\Src\Support\Google.Apis.Core\Requests\RequestBuilder.cs:line 108
at System.Linq.Enumerable.SelectListIterator`2.ToArray()
at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
at Google.Apis.Requests.RequestBuilder.BuildUri() in C:\Apiary\2019-05-01.11-08-18\Src\Support\Google.Apis.Core\Requests\RequestBuilder.cs:line 107
at Google.Apis.Requests.RequestBuilder.CreateRequest() in C:\Apiary\2019-05-01.11-08-18\Src\Support\Google.Apis.Core\Requests\RequestBuilder.cs:line 332
at Google.Apis.Requests.ClientServiceRequest`1.CreateRequest(Nullable`1 overrideGZipEnabled) in C:\Apiary\2019-05-01.11-08-18\Src\Support\Google.Apis\Requests\ClientServiceRequest.cs:line 257
at Google.Apis.Requests.ClientServiceRequest`1.ExecuteUnparsedAsync(CancellationToken cancellationToken) in C:\Apiary\2019-05-01.11-08-18\Src\Support\Google.Apis\Requests\ClientServiceRequest.cs:line 229
at Google.Apis.Requests.ClientServiceRequest`1.Execute() in C:\Apiary\2019-05-01.11-08-18\Src\Support\Google.Apis\Requests\ClientServiceRequest.cs:line 167
at ChannelHarvester.Controllers.ChannelHarvester.extractStats(ICollection`1 channelEntities) in C:\Guayaba Projects\YT_ChannelHarvester\ChannelHarvester\Controllers\ChannelHarvester.cs:line 106
Am I doing something wrong? Please let me know if there is something I can fix on my end.
Thanks!
Well, I'm really sorry to have bothered you guys. I found the cause: with the following line I thought I was joining only 50 IDs, but on some occasions I was sending a lot more.
channelsListRequest.Id = string.Join(",", channelEntities.Select(v => v.YTChannelId));
So I guess we can close this.
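For reference, the fix is to request the statistics in pages of at most 50 IDs. A minimal sketch of the partitioning, in Java (the helper name and page size are illustrative; the same idea applies to the C# above):
import java.util.ArrayList;
import java.util.List;

// Join at most `pageSize` IDs per request so the request URI stays short.
static List<String> toIdPages(List<String> channelIds, int pageSize) {
    List<String> pages = new ArrayList<>();
    for (int i = 0; i < channelIds.size(); i += pageSize) {
        List<String> chunk = channelIds.subList(i, Math.min(i + pageSize, channelIds.size()));
        pages.add(String.join(",", chunk));
    }
    return pages;
}
Each returned string is then suitable as the Id of one list request (pageSize = 50 for the Channels API).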

Transaction Not Working As Expected in Custom Procedure

I'm having issues when trying to issue commits using a custom procedure. I am selecting data from an external datasource and then transforming it into a graph DB. I am trying to make each row from the external datasource its own transaction, but if any row fails, all the transactions fail. Please let me know where I've gone wrong. Here are snippets of my code:
@Context
public GraphDatabaseService db;

@SuppressWarnings("unchecked")
@Description("CALL myProc")
@Procedure(name = "myProc", mode = Mode.WRITE)
public void myProc() {
    // Get the last time the procedure succeeded from the System node
    try {
        boolean processAgain = true;
        int startRow = 1, endRow = 10000, batchSize = 10000;
        int total = 0, succeeded = 0, failed = 0;
        while (processAgain) {
            Result result = db.execute("CALL apoc.load.jdbc(\"dbAlias\", \"SQL GOES HERE\")");
            while (result.hasNext()) {
                total++;
                Map<String, Object> row = result.next();
                for (String key : result.columns()) {
                    Map<String, Object> currentRow = (Map<String, Object>) row.get(key);
                    Transaction trans = db.beginTx();
                    log.info("Beginning transaction");
                    try {
                        // Do stuff here (i.e. db.execute...)
                        trans.success();
                        succeeded++;
                    } catch (Exception e) {
                        failed++;
                        log.error("Error message", e);
                    } finally {
                        trans.close();
                    }
                }
            } // End results while loop
            if (total != 0) {
                log.info("Processed " + (total - startRow + 1) + " rows");
            }
            processAgain = total != 0 && total % endRow == 0;
            startRow += batchSize;
            endRow += batchSize;
        } // End processAgain while loop
    } catch (Exception e) {
        log.error("Error message", e);
    }
}
UPDATE: And here's the console output
ERROR (-v for expanded information):
TransactionFailureException: Transaction was marked as successful, but unable to commit transaction so rolled back.
org.neo4j.graphdb.TransactionFailureException: Transaction was marked as successful, but unable to commit transaction so rolled back.
at org.neo4j.kernel.impl.coreapi.TopLevelTransaction.close(TopLevelTransaction.java:100)
at org.neo4j.shell.kernel.apps.TransactionProvidingApp.execute(TransactionProvidingApp.java:250)
at org.neo4j.shell.kernel.apps.cypher.Start.execute(Start.java:82)
at org.neo4j.shell.impl.AbstractAppServer.interpretLine(AbstractAppServer.java:126)
at org.neo4j.shell.kernel.GraphDatabaseShellServer.interpretLine(GraphDatabaseShellServer.java:105)
at org.neo4j.shell.impl.RemotelyAvailableServer.interpretLine(RemotelyAvailableServer.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.neo4j.internal.kernel.api.exceptions.TransactionFailureException: Transaction rolled back even if marked as successful
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.failOnNonExplicitRollbackIfNeeded(KernelTransactionImplementation.java:599)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.closeTransaction(KernelTransactionImplementation.java:541)
at org.neo4j.internal.kernel.api.Transaction.close(Transaction.java:189)
at org.neo4j.kernel.impl.coreapi.TopLevelTransaction.close(TopLevelTransaction.java:78)
... 22 more
Can someone please tell me what I'm doing wrong? Thanks.
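A likely cause, for anyone finding this later: code inside a @Procedure already runs in the calling transaction, so db.beginTx() there only gives you a nested (placebo) transaction rather than a real per-row one; the first failure marks the enclosing transaction rollback-only, which is exactly what the "Transaction rolled back even if marked as successful" message says. One hedged alternative is to drive the batching from Cypher with APOC instead of from inside the procedure; the per-row statement and the :Row label below are purely illustrative:
CALL apoc.periodic.iterate(
    'CALL apoc.load.jdbc("dbAlias", "SQL GOES HERE") YIELD row RETURN row',
    'CREATE (n:Row) SET n += row',
    {batchSize: 1, parallel: false})
With batchSize 1, each row is committed in its own inner transaction, and failed batches are reported without aborting the rest.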

Slow performance while saving data in Neo4j Enterprise Version (trial version)

We built a 3-node cluster in a testing environment and used a Neo4j JDBC connection to save JSON data into Neo4j.
When creating just 2,000 nodes and 2,000 relationships through JSON, the statistics are: total time to save topology data in Neo4j: 456688 ms; links size: 2000, nodes size: 2000.
Saving without checking for duplicate nodes/relationships (removed the checkVertex and checkRelation methods):
total time to save topology data in Neo4j: 446979 ms; links size: 2000, nodes size: 4000 (as we are not checking for duplicates, double nodes have been created).
Code:
public Connection getConnection(String masterNodeIp, String password) throws Exception {
    return (Connection) DriverManager.getConnection(
            "jdbc:neo4j:http://" + masterNodeIp + "/?user=neo4j,password=" + password);
}
// By iterating through edges, add source and target nodes.
try {
    for (Links link : topology.getL2links()) {
        if (conn != null) {
            long srcId = getGraphIdByUniquenessOfOrphan(clientId, link.getSrcMgmtIP());
            GraphId srcGraphId = prepareGraphId(srcId, "DEVICE");
            long tgtId = getGraphIdByUniquenessOfOrphan(clientId, link.getTgtMgmtIP());
            GraphId tgtGraphId = prepareGraphId(tgtId, "DEVICE");
            String srcQuery = createNode(conn, link, false, clientId, discProfileId, srcGraphId);
            if (srcQuery != null && !srcQuery.isEmpty())
                stmt.execute(srcQuery);
            String tgtQuery = createNode(conn, link, true, clientId, discProfileId, tgtGraphId);
            if (tgtQuery != null && !tgtQuery.isEmpty())
                stmt.execute(tgtQuery);
            String relationQuery = processRelation(conn, link, srcGraphId, tgtGraphId);
            if (relationQuery != null && !relationQuery.isEmpty())
                stmt.execute(relationQuery);
        }
    }
} catch (Exception e) {
    System.out.println("Exception in processJsonData ::: " + e.getMessage());
    throw e;
} finally {
    stmt.close();
    conn.close();
}
// Before creating a node, check whether it already exists, to avoid duplicates.
private boolean checkVertex(Connection conn, String ip, String hostName, long clientId,
        long discPId, GraphId graphId) throws Exception {
    Statement stmt = null;
    ResultSet rs = null;
    boolean result = false;
    try {
        stmt = conn.createStatement();
        StringBuffer queryBuffer = new StringBuffer();
        queryBuffer.append(" MATCH (node) WHERE node.id ='" + graphId.getId()
                + "' AND node.sourceType = '" + graphId.getSourceType() + "'");
        queryBuffer.append(" RETURN node");
        rs = (ResultSet) stmt.executeQuery(queryBuffer.toString());
        while (rs.next()) {
            result = true;
            break;
        }
    } catch (Exception e) {
        System.out.println("Exception in fetching node ::: " + e.getMessage());
        throw e;
    } finally {
        rs.close();
        stmt.close();
    }
    return result;
}
// Before creating a relationship, check for duplicates as well.
private boolean checkRelation(Connection conn, Links link, GraphId srcGraphId,
        GraphId tgtGraphId) throws SQLException {
    Statement stmt = null;
    ResultSet rs = null;
    boolean result = false;
    try {
        stmt = conn.createStatement();
        StringBuffer queryBuffer = new StringBuffer();
        queryBuffer.append(" MATCH (src:resource)-[r:topology]->(tgt:resource) WHERE src.id='"
                + srcGraphId.getId() + "' AND tgt.id='" + tgtGraphId.getId()
                + "' AND r.srcInt='" + link.getSrcInt() + "' AND r.tgtInt='" + link.getTgtInt() + "'");
        queryBuffer.append(" RETURN r");
        rs = (ResultSet) stmt.executeQuery(queryBuffer.toString());
        while (rs.next()) {
            result = true;
            break;
        }
    } catch (Exception e) {
        System.out.println("Exception in fetching node ::: " + e.getMessage());
    } finally {
        rs.close();
        stmt.close();
    }
    return result;
}
We created indexes for those duplicate-check queries, but performance is still slow.
Also, please let us know how to use a "node key" unique constraint at the Java level so that we can skip the checkVertex query. We tried to catch the ConstraintViolationException and log it instead of throwing it, but then it throws the exception and does not save any nodes.
There are a lot of things that you can improve:
For mass data imports, use the Java Driver directly; JDBC adds an indirection layer.
Use parameters!
Use batching, either with UNWIND or by executing multiple prepared statements as a batch.
Don't construct queries with literal values.
Make sure you have indexes/constraints for your keys. Your queries don't use any indexes because you didn't provide any labels!
Use MERGE if you don't want to get constraint exceptions.
Don't use StringBuffer, ever.
Use try-with-resources.
Use executeUpdate.
For Batching:
https://medium.com/@mesirii/5-tips-tricks-for-fast-batched-updates-of-graph-structures-with-neo4j-and-cypher-73c7f693c8cc
For parameters:
http://neo4j-contrib.github.io/neo4j-jdbc/#_minimum_viable_snippet
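Putting the parameter, batching, and MERGE advice together, a minimal sketch using the official Java driver (1.x API). The :resource label, the id key, and the Links/Topology getters come from the question; everything else is illustrative:
import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.Values;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

void saveTopology(String masterNodeIp, String password, Topology topology) {
    try (Driver driver = GraphDatabase.driver("bolt://" + masterNodeIp,
            AuthTokens.basic("neo4j", password));
         Session session = driver.session()) {

        // One-time constraint so the MERGEs below are index-backed.
        session.run("CREATE CONSTRAINT ON (n:resource) ASSERT n.id IS UNIQUE");

        // Collect the rows once, as plain maps.
        List<Map<String, Object>> batch = new ArrayList<>();
        for (Links link : topology.getL2links()) {
            Map<String, Object> row = new HashMap<>();
            row.put("srcId", link.getSrcMgmtIP());
            row.put("tgtId", link.getTgtMgmtIP());
            row.put("srcInt", link.getSrcInt());
            row.put("tgtInt", link.getTgtInt());
            batch.add(row);
        }

        // One parameterized round trip; MERGE is idempotent, so the separate
        // checkVertex/checkRelation queries are no longer needed.
        session.run("UNWIND $batch AS row "
                + "MERGE (s:resource {id: row.srcId}) "
                + "MERGE (t:resource {id: row.tgtId}) "
                + "MERGE (s)-[r:topology {srcInt: row.srcInt, tgtInt: row.tgtInt}]->(t)",
                Values.parameters("batch", batch));
    }
}
On Enterprise, a composite NODE KEY constraint (e.g. over id and sourceType) would serve the same purpose as the single-property constraint above.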

Neo4j Embedded 2.2.1: Exception in thread "GC-Monitor" java.lang.OutOfMemoryError: Java heap space

I am trying to do a batch insertion into an existing database, but I get the following exception:
Exception in thread "GC-Monitor" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2245)
    at java.util.Arrays.copyOf(Arrays.java:2219)
    at java.util.ArrayList.grow(ArrayList.java:242)
    at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
    at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
    at java.util.ArrayList.add(ArrayList.java:440)
    at java.util.Formatter.parse(Formatter.java:2525)
    at java.util.Formatter.format(Formatter.java:2469)
    at java.util.Formatter.format(Formatter.java:2423)
    at java.lang.String.format(String.java:2792)
    at org.neo4j.kernel.impl.cache.MeasureDoNothing.run(MeasureDoNothing.java:64)
Fail: Transaction was marked as successful, but unable to commit transaction so rolled back.
Here is the structure of my insertion code:
public void parseExecutionRecordFile(Node episodeVersionNode, String filePath,
        Integer insertionBatchSize) throws Exception {
    Gson gson = new Gson();
    BufferedReader reader = new BufferedReader(new FileReader(filePath));
    String aDataRow = "";
    List<ExecutionRecord> executionRecords = new LinkedList<>();
    Integer numberOfProcessedExecutionRecords = 0;
    Integer insertionCounter = 0;
    ExecutionRecord lastProcessedExecutionRecord = null;
    Node lastProcessedExecutionRecordNode = null;
    Long start = System.nanoTime();
    while ((aDataRow = reader.readLine()) != null) {
        JsonReader jsonReader = new JsonReader(new StringReader(aDataRow));
        jsonReader.setLenient(true);
        ExecutionRecord executionRecord = gson.fromJson(jsonReader, ExecutionRecord.class);
        executionRecords.add(executionRecord);
        insertionCounter++;
        if (insertionCounter == insertionBatchSize
                || executionRecord.getType() == ExecutionRecord.Type.END_MESSAGE) {
            lastProcessedExecutionRecordNode = appendEpisodeData(episodeVersionNode,
                    lastProcessedExecutionRecordNode, executionRecords,
                    lastProcessedExecutionRecord == null ? null
                            : lastProcessedExecutionRecord.getTraceSequenceNumber());
            executionRecords = new LinkedList<>();
            lastProcessedExecutionRecord = executionRecord;
            numberOfProcessedExecutionRecords += insertionCounter;
            insertionCounter = 0;
        }
    }
}

public Node appendEpisodeData(Node episodeVersionNode, Node previousExecutionRecordNode,
        List<ExecutionRecord> executionRecordList, Integer traceCounter) {
    Iterator<ExecutionRecord> executionRecordIterator = executionRecordList.iterator();
    Node previousTraceNode = null;
    Node currentTraceNode = null;
    Node currentExecutionRecordNode = null;
    try (Transaction tx = dbInstance.beginTx()) {
        // some graph insertion
        tx.success();
        return currentExecutionRecordNode;
    }
}
So basically, I read JSON objects from a file (ca. 20,000 objects) and insert them into Neo4j every 10,000 records. If I have only 10,000 JSON objects in the file, it works fine. But when I have 20,000, it throws the exception.
Thanks in advance; any help would be really appreciated!
If it works with 10,000 objects, try at least doubling the heap memory.
Take a look at the following page: http://neo4j.com/docs/stable/server-performance.html
The wrapper.java.maxmemory option could resolve your problem.
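For a Neo4j 2.x server, that would look something like this in conf/neo4j-wrapper.conf (the values, in MB, are purely illustrative):
wrapper.java.initmemory=2048
wrapper.java.maxmemory=4096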
Since you also insert several thousand properties, all of that transaction state is held in memory, so I think a 10k batch size is fine for that amount of heap.
You also don't close your JSON reader, so it may linger around with the StringReader inside.
You should also use an ArrayList initialized to your batch size and call list.clear() instead of recreating/reassigning the list, as in the sketch below.
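A sketch of the read loop with those three changes applied, reusing ExecutionRecord, appendEpisodeData, and the other variables from the question:
List<ExecutionRecord> executionRecords = new ArrayList<>(insertionBatchSize);
try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
    String aDataRow;
    while ((aDataRow = reader.readLine()) != null) {
        ExecutionRecord executionRecord;
        // Close each JsonReader so it (and its StringReader) can be collected.
        try (JsonReader jsonReader = new JsonReader(new StringReader(aDataRow))) {
            jsonReader.setLenient(true);
            executionRecord = gson.fromJson(jsonReader, ExecutionRecord.class);
        }
        executionRecords.add(executionRecord);
        if (executionRecords.size() >= insertionBatchSize
                || executionRecord.getType() == ExecutionRecord.Type.END_MESSAGE) {
            lastProcessedExecutionRecordNode = appendEpisodeData(episodeVersionNode,
                    lastProcessedExecutionRecordNode, executionRecords,
                    lastProcessedExecutionRecord == null ? null
                            : lastProcessedExecutionRecord.getTraceSequenceNumber());
            lastProcessedExecutionRecord = executionRecord;
            executionRecords.clear(); // reuse the list instead of reallocating it
        }
    }
}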

Null Pointer Exception on TestNG dataprovider

I have run into a strange problem. Let me explain:
I am passing a set of input data from XML and then using JAXB to parse the XML. The resulting Java object is then passed to my test method using a TestNG dataprovider.
Here is some related code.
Test data XML:
<TestData>
    <TestDetails>
        <testcasename>itemStatusTest</testcasename>
        <testcasedetails>App in SUPPRESSED Status</testcasedetails>
        <appid>28371</appid>
        <status>SUPPRESSED</status>
        <marketplace />
    </TestDetails>
    <TestDetails>
        <testcasename>itemStatusTest</testcasename>
        <testcasedetails>App in REVIEW Status</testcasedetails>
        <appid>22559</appid>
        <status>REVIEW</status>
        <marketplace />
    </TestDetails>
</TestData>
Method which returns object:
private static Object[][] generateTestData(String dataProvider, TestCaseName tcName)
        throws Exception {
    Object[][] obj = null;
    try {
        JAXBContext jaxbContext = JAXBContext.newInstance(TestData.class);
        Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();
        TestData testData = (TestData) jaxbUnmarshaller.unmarshal(
                new FileInputStream(new File(dataProvider).getAbsoluteFile()));
        List<TestDetails> testcaseList = testData.getTestDetails();
        obj = new Object[testcaseList.size()][];
        for (int i = 0; i < testcaseList.size(); i++) {
            if (testcaseList.get(i).getTestcasename().equalsIgnoreCase(tcName.testCaseName()))
                obj[i] = new Object[] { testcaseList.get(i) };
        }
    } catch (JAXBException e) {
        e.getMessage();
        return null;
    }
    return obj;
}
and my dataprovider:
@DataProvider(parallel = true, name = "TestData")
public Object[][] TestData() {
    try {
        Object obj[][] = IngestionTestHelper.generateTestDataForItemStatus(dataProvider);
        Reporter.log("Size " + obj.length, true);
        return obj;
    } catch (Exception e) {
        Reporter.log(
                "Either XML input is in wrong format or XML is not parsed correctly",
                true);
        return null;
    }
}
Up to this point everything works like a charm and I am not seeing any issue.
Now I am writing another test method for another test case. For that I have added the following to my existing XML:
<TestDetails>
    <testcasename>itemWorkflowTest</testcasename>
    <testcasedetails>Validate workflow for iap</testcasedetails>
    <appid>26120</appid>
    <status />
    <marketplace />
</TestDetails>
Once I have added this to the existing XML, my existing test method stops working. When running, I get the following exception:
java.lang.NullPointerException
at org.testng.internal.Invoker.injectParameters(Invoker.java:1333)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1203)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1197)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1122)
at org.testng.TestNG.run(TestNG.java:1030)
at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
If I remove the newly added block from the XML, it starts working again.
Can someone please help?
Well, based on the code, and if I understood correctly :)
When you add the third item, its name is different. You have initialized the Object array with the size of the total number of elements:
obj = new Object[testcaseList.size()][];
But you are adding to the array selectively, based on the name. So although the array was sized for 3 objects, data is available for only 2; the remaining entry stays null, and that is likely what causes the NPE:
List<TestDetails> testcaseList = testData.getTestDetails();
obj = new Object[testcaseList.size()][];
for (int i = 0; i < testcaseList.size(); i++) {
    if (testcaseList.get(i).getTestcasename().equalsIgnoreCase(tcName.testCaseName()))
        obj[i] = new Object[] { testcaseList.get(i) };
}
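A sketch of one way to fix it: collect the matching rows in a list and convert at the end, so the returned array contains no null entries:
List<Object[]> rows = new ArrayList<>();
for (TestDetails details : testData.getTestDetails()) {
    // Keep only the rows for the requested test case; no gaps are left behind.
    if (details.getTestcasename().equalsIgnoreCase(tcName.testCaseName())) {
        rows.add(new Object[] { details });
    }
}
return rows.toArray(new Object[0][]);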
