I have a question.
I have two files: a Model file with the data and an OntModel file with the schema.
I have stored these files in a Jena TDB store as described in this post: How I can use Fuseki with Jena TDB
My question is: is it correct to store my schema in Jena TDB?
I have read a similar question in this post: Load OWL schema into triple-store like Fuskei/TDB?
where it's written that
"Schemas are data. You can load them as you would for data. If you
want inference based on the schema, you don't need to load it - you
need to write a Fuseki configuration that uses your schema with a
inference engine like Jena rules."
How can I write a Fuseki configuration that uses my schema with an inference engine like Jena rules?
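For reference, a minimal assembler sketch of what that quoted answer describes; the service name, TDB location, and the schema and rules file names below are placeholders, and the rule reasoner can be swapped for one of Jena's OWL reasoners:

@prefix :       <#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix ja:     <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .

# SPARQL service exposed over an inference model
:service a fuseki:Service ;
    fuseki:name         "ds" ;        # endpoint: http://localhost:3030/ds
    fuseki:serviceQuery "query" ;
    fuseki:dataset      :dataset .

:dataset a ja:RDFDataset ;
    ja:defaultGraph :infModel .

# Inference model = TDB-stored data + schema file + rule engine
:infModel a ja:InfModel ;
    ja:baseModel :tdbGraph ;
    ja:content  [ ja:externalContent <file:schema.ttl> ] ;
    ja:reasoner [
        ja:reasonerURL <http://jena.hpl.hp.com/2003/GenericRuleReasoner> ;
        ja:rulesFrom   <file:rules.txt>
    ] .

:tdbGraph a tdb:GraphTDB ;
    tdb:dataset :tdbDataset .

:tdbDataset a tdb:DatasetTDB ;
    tdb:location "DB" .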
Our system must process Avro schemas. Before sending an Avro schema file to the server, I want to validate the format of the submitted schema file, to see whether it conforms to the Apache Avro specification.
The Avro schema is a JSON file, so to do basic validation against the Avro specification, I need a JSON schema for the Avro schema file (I know that sounds confusing). Unfortunately, the Apache Avro specification does not provide any definition file for the Avro schema which I could run through a validator.
Does anybody know where I can find a JSON Schema defining the structure of the Avro schema file according to the Apache Avro specification?
If you have an Avro file, that file contains the schema itself, and therefore would already be "valid". If the file cannot be created with the schema you've given, then you should get an exception (or, at least, any invalid property would be ignored).
You can get that schema via
java -jar avro-tools.jar getschema file.avro
I'm not aware of a way to read a file using a different schema without going through the Avro client library's reader methods.
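For instance, here is a sketch of pulling the embedded schema out through the Java library rather than avro-tools (the class name and file name are placeholders):

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class PrintEmbeddedSchema {
    public static void main(String[] args) throws Exception {
        // Open the container file; the schema travels with the data.
        try (DataFileReader<GenericRecord> reader = new DataFileReader<>(
                new File("file.avro"), new GenericDatumReader<>())) {
            Schema schema = reader.getSchema();  // same schema avro-tools getschema prints
            System.out.println(schema.toString(true));
        }
    }
}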
You can also verify that the schema compiled into a generated class matches its .avsc source, e.g.:

import static org.assertj.core.api.Assertions.assertThat;
import java.io.IOException;
import org.apache.avro.Schema;
import org.junit.jupiter.api.Test;

@Test
void testSchema() throws IOException {
    // Schema baked into the generated FooEvent class...
    Schema classSchema = FooEvent.getClassSchema();
    // ...and the schema parsed from the .avsc resource (path is a placeholder)
    Schema sourceSchema = new Schema.Parser()
            .parse(getClass()
                    .getResourceAsStream("/path/to/FooEvent.avsc"));
    assertThat(classSchema).isEqualTo(sourceSchema);
}
I am trying to convert a Db2 query result set to an XML file based on an XSL stylesheet. Can we use the pattern below?
DB2 Connector -> XML_Transformer Stage (imported xsl) -> XML_Output Stage.
Thanks...R
There are multiple options. Assuming you do not have XML already in your Db2 table, you do not need the XML Transformer.
I strongly suggest you use the modern Hierarchical stage (also known as the XML stage, depending on the version of DataStage). If you want a file or files as the target, I would go for the following structure:
Db2 Connector -> Hierarchical stage -> Sequential File stage
In addition, Db2 offers lots of XML functionality to generate XML by using SQL or XQuery.
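For example, a sketch using Db2's SQL/XML publishing functions (the table and column names are illustrative):

-- Build one XML element per row and serialize it to character data
SELECT XMLSERIALIZE(CONTENT
         XMLELEMENT(NAME "employee",
           XMLFOREST(e.empno    AS "id",
                     e.lastname AS "name"))
         AS CLOB(1M))
FROM employee e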
With Apache Jena, we can generate a FOAF file like this:
Model model = ModelFactory.createDefaultModel();
model.createResource("http://example.org/alice", FOAF.Person)
     .addProperty(FOAF.name, "Alice")
     .addProperty(FOAF.mbox, model.createResource("mailto:alice@example.org"))
     .addProperty(FOAF.knows, model.createResource("http://example.org/bob"));
I want to generate a SOAF file (extension of FOAF).
Is there any method or API to do this?
Jena has a utility, schemagen, that generates vocabulary files from RDFS; it is how FOAF.java is made. There is nothing special about vocabularies - they don't have to be installed in a particular package. Make a SOAF.java and compile it into your program, using FOAF.java as an example.
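For illustration, a minimal hand-written vocabulary class in the style of FOAF.java; the SOAF namespace URI and the term names are hypothetical, so substitute whatever your vocabulary actually defines:

import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.ResourceFactory;

public class SOAF {
    // Hypothetical namespace; use your vocabulary's real URI.
    public static final String NS = "http://example.org/soaf#";

    public static final Resource Person    = ResourceFactory.createResource(NS + "Person");
    public static final Property worksWith = ResourceFactory.createProperty(NS + "worksWith");
}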
I am currently writing some Java code that extracts some data and writes it out as Linked Data, using the TriG syntax. I am now using Jena and Fuseki to create a SPARQL endpoint to query and visualize this data.
The data is written so that each source dataset gives me one .trig file containing one named graph, so I want to load those files into Fuseki. Except that it doesn't seem to understand the TriG syntax...
If I remove the named graphs and rename the files to .ttl, everything loads perfectly into the default graph. But if I try to import TriG files:
using Fuseki's webapp uploader, it either crashes ("Can't make new graphs") or adds nothing except the prefixes, as if the graphs other than the default ones could not be added (the logs say nothing helpful except the error code and description).
using Java code, the process is too slow. I used the technique from "Loading a .trig file into TDB?", but my TriG files are pretty big, so this solution is not very good for me.
So I tried to use the bulk loader, the console command 'tdbloader'. This time everything seems fine, but in the webapp, there is still no data.
You can see the process going fine here: Quads are added just fine.
But the result still keeps only the default graph and its original data: Nothing is added.
So, I don't know what to do. The Jena and Fuseki developers suggested not calling the bulk loader from Java code (as opposed to the command-line tool), so that's one solution I guess I'd like to avoid.
Did I miss something obvious about how to load TriG files into Fuseki? Thanks.
UPDATE:
As it seemed to be a problem in my configuration (see the comments of this post for a link to my config file; I cannot post more than 2 links), I tried to add some kind of specification for some named graphs I would like to see added to the dataset on Fuseki.
I added code to link (with ja:namedGraph) external graphs that I added via tdbloader. This seems to work. Great!
Now another problem: there's no inference, even though my config file specifies an inference model. I set queries to be applied with the named graphs merged as the default graph, but this does not seem to carry over the OWL inference rules. So simple queries work, but 1/ I have to specify the graph I query (with "FROM") and 2/ there is no inference on my data.
The two methods are to use the TDB bulk loader offline, or to POST data into the dataset directly (i.e. HTTP POST operations to http://localhost:3030/ds).
You can test whether your graphs are there with a query like:
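If you'd rather drive the HTTP POST route from Java, here is a sketch using Jena's RDFConnection (the service URL and file name are placeholders):

import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class UploadTrig {
    public static void main(String[] args) {
        // Connect to the dataset's HTTP endpoint and POST the quads;
        // named graphs in the TriG file are preserved.
        try (RDFConnection conn = RDFConnectionFactory.connect("http://localhost:3030/ds")) {
            conn.loadDataset("data.trig");
        }
    }
}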
SELECT (count(*) AS ?C) { GRAPH ?g { ?s ?p ?o } }
The named graphs will show up when the Fuseki server is started unless your configuration of the SPARQL services only exports one graph.
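If you also want the named graphs visible merged into the default graph (what the update above attempts), a TDB-backed dataset can set tdb:unionDefaultGraph; a sketch, with the location as a placeholder:

@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .

<#tdbDataset> a tdb:DatasetTDB ;
    tdb:location "DB" ;
    tdb:unionDefaultGraph true .   # default-graph queries see the union of all named graphs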
Hi, I'm new to ontology storing :)
Actually I'm looking for a triple store with Java interoperability (Jena), so I chose Apache Fuseki.
In the documentation I found the ja:MemoryModel for loading ontologies. But does this mean the data is lost when I shut down the server?
Another idea is to use some kind of ontology schema. This means I want to use one ontology as the schema and a second one for storing the entities. In the example configuration.ttl I found something like this:
ja:baseModel
    [ a ja:MemoryModel ;
      ja:content [ ja:externalContent <file:Data/test_abox.ttl> ] ;
      ja:content [ ja:externalContent <file:Data/test_tbox.ttl> ] ;
    ] ;
But I couldn't find a real explanation of baseModel, and the documentation also mentions OntModel. Which one should I use for the schema and which for the entities? As a newcomer I find it a little confusing.
Could someone be so kind as to give me a hint for that?
Thanks!
You can run the server with a persistent database. Start the server with --loc=DB and it will use its copy of Jena TDB as the datastore.
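For example (assuming the standard fuseki-server script and a dataset path of /ds):

fuseki-server --loc=DB /ds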
Or you can use an assembler file to configure a TDB-backed datastore and a model on top of it.
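A sketch of that approach, keeping the entities in a persistent TDB store and layering the schema on top through an inference model; the location, the schema file name, and the reasoner choice are placeholders:

@prefix ja:  <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .

<#tdbDataset> a tdb:DatasetTDB ;
    tdb:location "DB" .                # entities persist here across restarts

<#tdbGraph> a tdb:GraphTDB ;
    tdb:dataset <#tdbDataset> .

<#infModel> a ja:InfModel ;
    ja:baseModel <#tdbGraph> ;
    ja:content  [ ja:externalContent <file:Data/test_tbox.ttl> ] ;   # the schema
    ja:reasoner [ ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner> ] .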