Merging ontologies in OWLAPI with same IRIs - Jena

I generally keep my ontologies in two separate files.
The first file contains the classes, subclasses, data properties, and object properties.
The second file contains all the individuals and the relationships between them.
So, I need to merge these two files in order to have a complete model. How can this be achieved using the OWL API?
In Jena, I do this as follows:
OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, null);
try {
    model.read(new FileInputStream(MyOntologyFile), "...");
    model.read(new FileInputStream(MyOntologyWithIndividualsFile), "...");
} catch (Exception e) {
    log.error("Loading Model failed:" + e);
}
When I tried to load my ontology files in a similar fashion using the OWL API, I got an error:
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLObjectRenderer renderer = new DLSyntaxObjectRenderer();
File file = new File(MyOntologyFile);
File fileIndividuals = new File(MyOntologyWithIndividualsFile);
OWLOntology localOntology = null;
// Now load the local copy
try {
    localOntology = manager.loadOntologyFromOntologyDocument(file);
    localOntology = manager.loadOntologyFromOntologyDocument(fileIndividuals);
} catch (OWLOntologyCreationException ex) {
    ex.printStackTrace();
}
Error:
org.semanticweb.owlapi.model.OWLOntologyAlreadyExistsException: Ontology already exists. OntologyID(OntologyIRI(<http://www.semanticweb.org/lp4220/ontologies/2014/4/untitled-ontology-35>))
at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntology(OWLOntologyManagerImpl.java:880)
at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntologyFromOntologyDocument(OWLOntologyManagerImpl.java:806)
at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.loadOntologyFromOntologyDocument(OWLOntologyManagerImpl.java:821)
Update:
As it turns out, merging ontologies is only possible for ontologies with different IRIs, so I presume it is not acceptable to divide an ontology into two files with the same IRI. A solution for this (as commented by Joshua) may be to read all individuals and axioms from one ontology and then add them to an already loaded ontology - see the sketch under the answer below.
For ontologies with distinct IRIs, merging can be done as follows (example courtesy of Ignazio's OWLED 2011 slides - slide no. 27):
OWLOntologyManager m = create();
OWLOntology o1 = m.loadOntology(pizza_iri);
OWLOntology o2 = m.loadOntology(example_iri);
// Create our ontology merger
OWLOntologyMerger merger = new OWLOntologyMerger(m);
// Merge all of the loaded ontologies, specifying an IRI for the new ontology
IRI mergedOntologyIRI = IRI.create("http://www.semanticweb.com/mymergedont");
OWLOntology merged = merger.createMergedOntology(m, mergedOntologyIRI);
assertTrue(merged.getAxiomCount() > o1.getAxiomCount());
assertTrue(merged.getAxiomCount() > o2.getAxiomCount());

Your problem is not having the same IRI in the data, but loading two ontologies with the same IRI into the same manager. Load the ontologies in separate managers and add all the axioms from one to the other; that will give you a merged ontology.
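A minimal sketch of that approach, assuming OWL API 4.x; the file names are placeholders:
OWLOntologyManager m1 = OWLManager.createOWLOntologyManager();
OWLOntologyManager m2 = OWLManager.createOWLOntologyManager();
// The two files share the same ontology IRI, so each needs its own manager
OWLOntology schema = m1.loadOntologyFromOntologyDocument(new File("schema.owl"));
OWLOntology individuals = m2.loadOntologyFromOntologyDocument(new File("individuals.owl"));
// Copy every axiom from the individuals ontology into the schema ontology
m1.addAxioms(schema, individuals.getAxioms());
After this, schema holds the merged ontology and can be saved or passed to a reasoner.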

In general, you do not make "Individuals and Relationships" an ontology, unless they are required for classification - say, to define the class "American Company" you need an individual "US". Otherwise, that other part should be RDF triples that refer to the ontology.

Related

Saxon - s9api - setParameter as node and access in transformation

We are trying to add parameters to a transformation at runtime. So far the only way we have found is to set each parameter as a single atomic value, not as a node. We don't yet know how to create a node for setParameter.
Current setParameter:
setParameter(new QName("TEST"), new XdmAtomicValue(24))
Expected setParameter:
<TempNode> <local>Value1</local> </TempNode>
We searched and tried to create an XdmNode and an XdmItem.
If you want to create an XdmNode by parsing XML, the best way to do it is:
DocumentBuilder db = processor.newDocumentBuilder();
XdmNode node = db.build(new StreamSource(
        new StringReader("<doc><elem/></doc>")));
You could also pass a string containing lexical XML as the parameter value, and then convert it to a tree by calling the XPath parse-xml() function.
If you want to construct the XdmNode programmatically, there are a number of options:
DocumentBuilder.newBuildingStreamWriter() gives you an instance of BuildingStreamWriter, which extends XMLStreamWriter; you can create the document by writing events to it using methods such as writeStartElement, writeCharacters, and writeEndElement. At the end, call getDocumentNode() on the BuildingStreamWriter, which gives you an XdmNode. This has the advantage that XMLStreamWriter is a standard API, though it's not actually a very nice one, because the documentation isn't very good and as a result implementations vary in their behaviour.
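For example, a minimal sketch of that approach (assuming a Saxon Processor already exists; exception handling omitted):
DocumentBuilder db = processor.newDocumentBuilder();
BuildingStreamWriter writer = db.newBuildingStreamWriter();
// Write the events that make up <TempNode><local>Value1</local></TempNode>
writer.writeStartDocument();
writer.writeStartElement("TempNode");
writer.writeStartElement("local");
writer.writeCharacters("Value1");
writer.writeEndElement();
writer.writeEndElement();
writer.writeEndDocument();
XdmNode node = writer.getDocumentNode();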
Another event-based API is Saxon's Push class; this differs from most push-based event APIs in that rather than having a flat sequence of methods like:
builder.startElement("x");
builder.characters("abc");
builder.endElement();
you have a nested sequence:
Element x = Document.elem("x");
x.text("abc");
x.close();
As mentioned by Martin, there is the "sapling" API: Saplings.doc().withChild(elem(...).withChild(elem(...))), etc. This API is rather radically different from anything you might be familiar with (though it's influenced by the LINQ API for tree construction on .NET), but once you've got used to it, it reads very well. The Sapling API constructs a very lightweight tree in memory (hence the name), and converts it to a fully-fledged XDM tree with a final call of SaplingDocument.toXdmNode().
If you're familiar with DOM, JDOM2, or XOM, you can construct a tree using any of those libraries and then convert it for use by Saxon. That's a bit convoluted and only really intended for applications that are already using a third-party tree model heavily (or for users who love these APIs and prefer them to anything else).
In the Saxon Java s9api, you can construct temporary trees as SaplingNode/SaplingElement/SaplingDocument, see https://www.saxonica.com/html/documentation12/javadoc/net/sf/saxon/sapling/SaplingDocument.html and https://www.saxonica.com/html/documentation12/javadoc/net/sf/saxon/sapling/SaplingElement.html.
To give you a simple example constructing from a Map, as you seem to want to do:
Processor processor = new Processor();
Map<String, String> xsltParameters = new HashMap<>();
xsltParameters.put("foo", "value 1");
xsltParameters.put("bar", "value 2");
SaplingElement saplingElement = new SaplingElement("Test");
for (Map.Entry<String, String> param : xsltParameters.entrySet()) {
    saplingElement = saplingElement.withChild(
            new SaplingElement(param.getKey()).withText(param.getValue()));
}
XdmNode paramNode = saplingElement.toXdmNode(processor);
System.out.println(paramNode);
outputs e.g. <Test><bar>value 2</bar><foo>value 1</foo></Test>.
So the key is to understand that withChild() returns a new SaplingElement.
The code can be compacted using streams e.g.
XdmNode paramNode2 = Saplings.elem("root")
        .withChild(xsltParameters
                .entrySet()
                .stream()
                .map(p -> Saplings.elem(p.getKey()).withText(p.getValue()))
                .collect(Collectors.toList())
                .toArray(SaplingElement[]::new))
        .toXdmNode(processor);
System.out.println(paramNode2);
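To actually pass such a node into the transformation, a sketch assuming an XsltTransformer named transformer and a stylesheet parameter named TempNode:
// Bind the constructed tree to <xsl:param name="TempNode"/> in the stylesheet
transformer.setParameter(new QName("TempNode"), paramNode);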

Jena read hook not invoked upon duplicate import read

My problem is probably best explained with code.
Consider the snippet below:
// First read
OntModel m1 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m1,uri0);
m1.loadImports();
// Second read (from the same URI)
OntModel m2 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m2,uri0);
m2.loadImports();
where uri0 points to a valid RDF file describing an ontology model with n imports.
and the following custom ReadHook (which has been set in advance):
@Override
public String beforeRead(Model model, String source, OntDocumentManager odm) {
    System.out.println("BEFORE READ CALLED: " + source);
    return source; // return the source unchanged
}
Global FileManager and OntDocumentManager are used with the following settings:
processImports = true;
caching = true;
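For reference, a sketch of how those settings and the hook might be applied (MyReadHook is a hypothetical name for the hook class above):
OntDocumentManager odm = OntDocumentManager.getInstance();
odm.setProcessImports(true);       // processImports = true
odm.setCacheModels(true);          // caching = true
odm.setReadHook(new MyReadHook()); // the custom ReadHook shown above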
If I run the snippet above, the model will be read from uri0 and beforeRead will be invoked exactly n times (once for each import).
However, in the second read, beforeRead won't be invoked even once.
How, and what should I reset in order for Jena to invoke beforeRead in the second read as well?
What I have tried so far:
At first I thought it was due to caching being on, but turning it off or clearing it between the first and second read didn't do anything.
I have also tried removing all ignoredImport records from m1. Nothing changed.
Finally got to solve this. The problem was in ModelFactory.createOntologyModel(). Ultimately, this gets translated to ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, null).
All ontology models created with the static OntModelSpec.OWL_MEM_RDFS_INF share their ImportsModelMaker and some of its other objects, which results in shared state. Apparently, this state prevented the read hook from being invoked a second time for the same imports.
This can be prevented by creating a custom, independent, non-static OntModelSpec instance and using it when creating an OntModel, for example:
OntModelSpec spec = new OntModelSpec(ModelFactory.createMemModelMaker(),
        new OntDocumentManager(), RDFSRuleReasonerFactory.theInstance(),
        ProfileRegistry.OWL_LANG);
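With that in place, the second read (a sketch, reusing uri0 from the question) should trigger the hook again:
OntModel m2 = ModelFactory.createOntologyModel(spec);
RDFDataMgr.read(m2, uri0);
m2.loadImports(); // beforeRead should now fire once per import again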

Jena Ontology API: Add property values for individual through Anonymous classes

For the following model I need to create individuals for class1 and set literal values for property4 and property5 on the created individuals.
For this I create an individual for anonymous class2 (in1) and set property values on it. Then I create an individual for anonymous class1 (in0) and call addProperty(property2, in1); finally, I create an individual for class1 (in) and call addProperty(property1, in0).
String ns = "url.com";
OntModel model = ModelFactory.createOntologyModel(OntModelSpec.RDFS_MEM);
OntClass class1 = model.createClass(ns + "class1");
OntClass Aclass1 = model.createClass();
OntClass Aclass2 = model.createClass();
OntProperty pro1 = model.createOntProperty(ns + "pro1");
OntProperty pro2 = model.createOntProperty(ns + "pro2");
OntProperty pro3 = model.createOntProperty(ns + "pro3");
DatatypeProperty pro4 = model.createDatatypeProperty(ns + "pro4");
DatatypeProperty pro5 = model.createDatatypeProperty(ns + "pro5");
Individual in1 = Aclass2.createIndividual(ns + "in1");
in1.addProperty(pro4, model.createTypedLiteral(50))
   .addProperty(pro5, model.createTypedLiteral(60));
Individual in0 = Aclass1.createIndividual(ns + "in2");
in0.addProperty(pro2, in1);
Individual in = class1.createIndividual(ns + "indi");
in.addProperty(pro1, in0);
This gives the following exception when run:
Exception in thread "main" com.hp.hpl.jena.ontology.ProfileException: Attempted to use language construct DATATYPE_PROPERTY that is not supported in the current language profile: RDFS
at com.hp.hpl.jena.ontology.impl.OntModelImpl.checkProfileEntry(OntModelImpl.java:3058)
at com.hp.hpl.jena.ontology.impl.OntModelImpl.createDatatypeProperty(OntModelImpl.java:1395)
at com.hp.hpl.jena.ontology.impl.OntModelImpl.createDatatypeProperty(OntModelImpl.java:1375)
at test1.Hello.main(Hello.java:46)
What am I doing wrong, and is there a better way to do this?
The spec is wrong: RDFS_MEM does not support owl:DatatypeProperty (or a lot of other things from OntModel), only the RDFS vocabulary.
Try OntModelSpec.OWL_DL_MEM; it should eliminate the exception.
But note: OntModelSpec.OWL_DL_MEM is about OWL 1 DL, not OWL 2 DL. Jena does not support OWL 2 at all.
If you want to use the full OWL 2 DL specification with Jena, you can take a look at ONT-API, which is a Jena-based implementation of the OWL-API.
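In other words, the fix is a one-line change to the spec (a sketch; the rest of the question's code stays the same):
// RDFS_MEM rejects OWL constructs; use an OWL profile instead
OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
DatatypeProperty pro4 = model.createDatatypeProperty(ns + "pro4"); // now allowed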

Converting Stanford dependency relation to dot format

I am a newbie to this field. I have dependency relations in this form:
amod(clarity-2, sound-1)
nsubj(good-6, clarity-2)
cop(good-6, is-3)
advmod(good-6, also-4)
neg(good-6, not-5)
root(ROOT-0, good-6)
nsubj(ok-10, camera-8)
cop(ok-10, is-9)
ccomp(good-6, ok-10)
As mentioned in the links, we have to convert these dependency relations to dot format and then use Graphviz to draw a dependency tree. I am not able to understand how to pass these dependency relations to the toDotFormat() function of edu.stanford.nlp.semgraph.SemanticGraph. When I give the string 'amod(clarity-2, sound-1)' as input to toDotFormat(), I get output of the form digraph amod(clarity-2, sound-1) { }.
I am trying the solution given here: how to get a dependency tree with Stanford NLP parser.
You need to call toDotFormat on an entire dependency tree. How have you generated these dependency trees in the first place?
If you're using the StanfordCoreNLP pipeline, adding in the toDotFormat call is easy:
Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, depparse");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
String text = "This is a sentence I want to parse.";
Annotation document = new Annotation(text);
pipeline.annotate(document);
// these are all the sentences in this document
// a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
for (CoreMap sentence : sentences) {
    // this is the Stanford dependency graph of the current sentence
    SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    System.out.println(dependencies.toDotFormat());
}
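If you then want to render the graph, one hypothetical follow-up (placed inside the loop above) is to write the dot output to a file and run Graphviz on it:
// Write the dot representation of the current sentence to a file
try (PrintWriter out = new PrintWriter("tree.dot", "UTF-8")) {
    out.println(dependencies.toDotFormat());
}
// then render from the command line, e.g.: dot -Tpng tree.dot -o tree.png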

Store data in Jena TDB and use reasoning

I have an OWL ontology file as RDF and want to store my data in a TDB store and use reasoning. Actually this sounds simple so far :)
But here is the point where I'm confused:
I created a TDB store and stored some statements via SPARQL. Then I tried to load the TDB via a model and an OWL reasoner:
OntModelSpec ontModelSpec = OntModelSpec.OWL_MEM;
Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
ontModelSpec.setReasoner(reasoner);
Model schemaModel = FileManager.get().loadModel("D:/Users/jim/Desktop/ontology/schema.rdf");
OntModel schema = ModelFactory.createOntologyModel(ontModelSpec, schemaModel);
Location location = new Location("D:/Users/jim/Desktop/jena-fuseki-0.2.5/DB");
Dataset dataset = TDBFactory.createDataset(location);
Model model = dataset.getDefaultModel();
OntModel ontModel = ModelFactory.createOntologyModel(ontModelSpec, model);
When I now create new resources via the API, they are not stored in the TDB, and I am not able to see the statements I added via SPARQL!
The SPARQL query shows me only the entries I've added with SPARQL:
QueryExecution qExec = QueryExecutionFactory.create(
        StrUtils.strjoinNL("SELECT ?s ?p ?prop",
                           "WHERE {?s ?p ?prop}"),
        dataset);
ResultSet rs = qExec.execSelect();
try {
    ResultSetFormatter.out(rs);
} finally {
    qExec.close();
    System.out.println("closed connection");
}
and this returns only the Resource added with the API
System.out.print("instance: " + ontModel.getResource(NS + "TestItem"));
And when I call this:
ExtendedIterator<Statement> iter = ontModel.listStatements();
I get the following exception:
org.openjena.atlas.lib.InternalErrorException: Invalid id node for subject (null node): ([0000000000000067], [0000000000000093], [00000000000000C8])
Is someone able to explain that behavior? Or could someone please give me a hint on how to separate schema and data with TDB in the right way, using the OntModel?
Partial answer:
org.openjena.atlas.lib.InternalErrorException: Invalid id node for subject (null node): ([0000000000000067], [0000000000000093], [00000000000000C8])
You are using TDB without transactions - try adding TDB.sync before exiting to flush changes to the disk.
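A sketch of that suggestion, reusing the dataset from the question:
// Flush any pending changes in the dataset to disk before the JVM exits
TDB.sync(dataset);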
