XQuery vs. XPath 3.1 (in Saxon)

We are using Saxon purely to query data, and we're about to update to XPath 3.1. For read-only queries (no insert/update/delete), is there any difference between XPath 3.1 and the latest version of XQuery?
If so, what is it? I'm asking to determine whether we should implement an XQuery API in our system alongside the XPath 3.1 one.

The main differences are:
XQuery has node constructors (e.g. <out>{/x/y}</out>)
XQuery has full FLWOR expressions with order by, group by, window clauses etc.
So XQuery is a bit more powerful for complex queries, but more importantly, it allows construction of a new XML document to represent the result.
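A minimal sketch of both features; the element names and input shape (a document with /x/y elements) are hypothetical:

```xquery
(: node constructor wrapping a FLWOR expression :)
<out>{
  for $y in /x/y
  order by $y/@id
  return <item>{ string($y) }</item>
}</out>
```

The selection itself could be written in XPath 3.1, but the surrounding <out> and <item> elements can only be constructed in XQuery (or XSLT).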

Using saxon & XPath 3.1 to parse JSON files

This is for the case of calling Saxon from a Java application. I understand that Saxon can use XPath 3.1 to run queries against JSON files. A couple of questions on this:
Where is there an example of how to do this? I've done searches and found lots of answers on details of doing this, but nothing on how to read in the file and perform queries. Is it the same as for XML?
Is it possible to have a schema file for the JSON so returned values are correctly typed? If so, how?
Is XQuery also able to perform queries on JSON?
What version of Saxon supports this? (We are using 9.9.1.1 and want to know if I need to upgrade.)
Technically, you don't run queries against JSON files; you run them against the data structure that results from parsing a JSON file, which is a structure of maps and arrays. You can parse the JSON file using the parse-json() or json-doc() functions, and then query the result using operators that work on maps and arrays. Some of these (and examples of their use) are shown in the spec at
https://www.w3.org/TR/xpath-31/#id-maps-and-arrays
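For example, assuming a hypothetical data.json of the form {"items": [{"name": "a", "price": 12}, …]}, the XPath 3.1 lookup operator ? navigates the resulting maps and arrays:

```xquery
(: parse a JSON string directly :)
parse-json('{"a": [1, 2, 3]}')?a?*   (: yields 1, 2, 3 :)

(: or load a file and filter array members :)
json-doc('data.json')?items?*[?price gt 10]?name
```

Here ?* returns the array members (each a map), the predicate filters those maps, and the final ?name lookup applies to each surviving map.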
Googling for "query maps arrays JSON XPath 3.1" finds quite a lot of useful material. Or get Priscilla Walmsley's book: http://www.datypic.com/books/xquery/chapter24.html
Data types: the data types of string, number, and boolean that are intrinsic to JSON are automatically recognized by their form. There's no capability to do further typing using a schema.
XQuery is a superset of XPath, but as far as JSON/Maps/Arrays are concerned, I think the facilities in XPath and those in XQuery are exactly the same.
Saxon has added a bit of extra conformance and performance in each successive release. 9.9 is pretty complete in its coverage; 10.0 adds some optimizations (such as a new internal data structure for maps whose keys are all strings, which is what you get when you parse JSON). Changes in successive Saxon releases are described in copious detail at http://www.saxonica.com/documentation/index.html#!changes

Transform Rascal ASTs into Famix metamodel

Is there any support for transforming a Rascal AST into the Famix meta-model (from Moose technology)?
Rascal uses M3 meta-models that can, in principle, be easily converted to Famix (but you would have to write that mapping yourself).
There is M3 support for several languages (and the support is growing), so whether facts can be extracted from your source also depends on the language you are interested in.

XML Sparql results from rdflib

I am using Graph() in RDFLib and am correctly getting results from the graph using SPARQL. Is it possible to get the results directly in HTML table format?
rdflib is a library for working with RDF in Python, not an HTML rendering engine. Usually, if you run a graph.query() SPARQL query, you want to access the result in Python itself.
That said, there is a fork focused on hosting RDF called rdflib-web. In it you can find htmlresults.py, which does pretty much what I think you want.
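Even without that fork, a SPARQL result set iterates as rows of bindings, so rendering it is plain Python. A minimal sketch (the graph.query call in the comment is the standard rdflib API; the bare table markup and helper name are my own):

```python
from html import escape

def rows_to_html_table(headers, rows):
    """Render an iterable of result rows (e.g. from graph.query(...))
    as a simple HTML table string."""
    head = "".join(f"<th>{escape(str(h))}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(c))}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

# Hypothetical usage with rdflib (not executed here):
# result = graph.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
# html = rows_to_html_table(result.vars, result)
```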

Can we check the translated function calls generated by Cypher or Gremlin?

I believe that internally Cypher / Gremlin translate statements into corresponding Java method calls. Is there a way to trace which method calls are run?
For example, in Hibernate, we can specify "show sql" to see generated sql statement.
[Edit]
The reasons I want to do this:
1. For debugging: to find out why the Cypher / Gremlin doesn't produce the expected result.
2. For learning: to find out what's happening under the hood.
3. For optimization: to find out where the bottleneck is.
For Cypher, that is planned to be added in the coming months. But ultimately, yes: the methods used under the hood are the Java Neo4j core API and the Traversal Framework. Mind adding a case that is causing you problems?
For Gremlin, do a .toString() at the end of your Gremlin expression to see which Pipes (http://pipes.tinkerpop.com) it ultimately compiles down to.
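For example, in a Gremlin-Groovy shell (TinkerPop 2 era, matching the Pipes link above; the graph g and the vertex id are hypothetical):

```groovy
// toString() on the expression reveals the chain of Pipes it compiles to
println g.v(1).out('knows').name.toString()
```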

Alternative to Rhino for expression parsing in Java

In my Java program I use the Rhino expression evaluator to parse an expression from a table into keys and values, which are matched against a given Java object. The performance of this evaluator is not that good; on average it takes about 50 ms per evaluation. Is there a better / faster version of Rhino, or any other JS interpreter, that I can use?
I cannot use compiled JS interpreters like SpiderMonkey, as the whole codebase is Java.
Google's V8 engine is relatively speedy (and comes with some trimmings/utilities when pulled in with node.js), but without knowing a little more about your expressions and comparisons, it's hard to say whether you'll get a significant speed boost by using another JavaScript interpreter.
