With Jena, how do I add a DatatypeProperty to an Individual?

I want to add an instance with data attributes to my ontology file, but when I add a DatatypeProperty value, I find that I cannot specify its datatype.
My code:
public final static String NAME_SPACE = "http://www.semanticweb.org/tangyuan/ontologies/myOntology#";

public static void main(String[] args) throws FileNotFoundException {
    OntModel model1 = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
    model1.read("file:./myOntology.owl");
    OntClass oc = model1.getOntClass(NAME_SPACE + "患者"); // get the Patient (患者) class
    Individual individual = oc.createIndividual(NAME_SPACE + "jane");
    DatatypeProperty dp = model1.getDatatypeProperty(NAME_SPACE + "twoh血糖值");
    model1.add(individual, dp, "7.0");
    RDFWriter rdfWriter = model1.getWriter("RDF/XML");
    rdfWriter.write(model1, new FileOutputStream("myOntology.owl"), "RDF/XML");
    System.out.println("ok");
}
The resulting myOntology.owl file:
<myOntology:患者 rdf:about="http://www.semanticweb.org/tangyuan/ontologies/myOntology#jane">
<myOntology:twoh血糖值>7.0</myOntology:twoh血糖值>
The data property is written as:
<myOntology:twoh血糖值>7.0</myOntology:twoh血糖值>
but I want:
<myOntology:twoh血糖值 rdf:datatype="http://www.w3.org/2001/XMLSchema#float">7.0</myOntology:twoh血糖值>
How can I add
rdf:datatype="http://www.w3.org/2001/XMLSchema#float"
to the data property value?
Can anyone help me? Thank you.
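For what it's worth, a minimal sketch of one way to do this with Jena's typed literals, assuming a recent Apache Jena (classes under org.apache.jena) and reusing model1, individual, and dp from the code above:

import org.apache.jena.datatypes.xsd.XSDDatatype;
import org.apache.jena.rdf.model.Literal;

    // Build a literal explicitly typed as xsd:float instead of a plain (untyped) string.
    Literal value = model1.createTypedLiteral("7.0", XSDDatatype.XSDfloat);
    individual.addProperty(dp, value);

    // Passing a Java float should also produce an xsd:float literal:
    // individual.addLiteral(dp, 7.0f);

The RDF/XML writer should then emit rdf:datatype="http://www.w3.org/2001/XMLSchema#float" on the property element, since the literal carries the datatype instead of being a plain string.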

Related

Does Apache Beam support custom file names for its output?

While it is common in a distributed processing environment to use "part" file names such as "part-000", is it possible to write an extension of some sort to rename the individual output file names (such as a per-window file name) in Apache Beam?
To do this, one might have to be able to assign a name for a window or infer a file name based on the window's content. I would like to know if such an approach is possible.
As to whether the solution should be streaming or batch, a streaming-mode example is preferable.
Yes, as suggested by jkff, you can achieve this using TextIO.write().to(FilenamePolicy).
Examples are below:
If you want to write output to a particular local file, you can use:
lines.apply(TextIO.write().to("/path/to/file.txt"));
Below is a simple way to write the output using a prefix. This example writes to Google Cloud Storage; you can use local or S3 paths instead.
public class MinimalWordCountJava8 {

    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.create();
        // In order to run your pipeline, you need to make following runner specific changes:
        //
        // CHANGE 1/3: Select a Beam runner, such as BlockingDataflowRunner
        // or FlinkRunner.
        // CHANGE 2/3: Specify runner-required options.
        // For BlockingDataflowRunner, set project and temp location as follows:
        //   DataflowPipelineOptions dataflowOptions = options.as(DataflowPipelineOptions.class);
        //   dataflowOptions.setRunner(BlockingDataflowRunner.class);
        //   dataflowOptions.setProject("SET_YOUR_PROJECT_ID_HERE");
        //   dataflowOptions.setTempLocation("gs://SET_YOUR_BUCKET_NAME_HERE/AND_TEMP_DIRECTORY");
        // For FlinkRunner, set the runner as follows. See {@code FlinkPipelineOptions}
        // for more details.
        //   options.as(FlinkPipelineOptions.class)
        //       .setRunner(FlinkRunner.class);

        Pipeline p = Pipeline.create(options);

        p.apply(TextIO.read().from("gs://apache-beam-samples/shakespeare/*"))
         .apply(FlatMapElements
             .into(TypeDescriptors.strings())
             .via((String word) -> Arrays.asList(word.split("[^\\p{L}]+"))))
         .apply(Filter.by((String word) -> !word.isEmpty()))
         .apply(Count.<String>perElement())
         .apply(MapElements
             .into(TypeDescriptors.strings())
             .via((KV<String, Long> wordCount) -> wordCount.getKey() + ": " + wordCount.getValue()))
         // CHANGE 3/3: The Google Cloud Storage path is required for outputting the results.
         .apply(TextIO.write().to("gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX"));

        p.run().waitUntilFinish();
    }
}
This example code gives you more control over writing the output:
/**
 * A {@link FilenamePolicy} produces a base file name for a write based on metadata about the data
 * being written. This always includes the shard number and the total number of shards. For
 * windowed writes, it also includes the window and pane index (a sequence number assigned to each
 * trigger firing).
 */
protected static class PerWindowFiles extends FilenamePolicy {

    private final ResourceId prefix;

    public PerWindowFiles(ResourceId prefix) {
        this.prefix = prefix;
    }

    public String filenamePrefixForWindow(IntervalWindow window) {
        String filePrefix = prefix.isDirectory() ? "" : prefix.getFilename();
        return String.format(
            "%s-%s-%s", filePrefix, formatter.print(window.start()), formatter.print(window.end()));
    }

    @Override
    public ResourceId windowedFilename(int shardNumber,
                                       int numShards,
                                       BoundedWindow window,
                                       PaneInfo paneInfo,
                                       OutputFileHints outputFileHints) {
        IntervalWindow intervalWindow = (IntervalWindow) window;
        String filename =
            String.format(
                "%s-%s-of-%s%s",
                filenamePrefixForWindow(intervalWindow),
                shardNumber,
                numShards,
                outputFileHints.getSuggestedFilenameSuffix());
        return prefix.getCurrentDirectory().resolve(filename, StandardResolveOptions.RESOLVE_FILE);
    }

    @Override
    public ResourceId unwindowedFilename(
            int shardNumber, int numShards, OutputFileHints outputFileHints) {
        throw new UnsupportedOperationException("Unsupported.");
    }
}
@Override
public PDone expand(PCollection<InputT> teamAndScore) {
    if (windowed) {
        teamAndScore
            .apply("ConvertToRow", ParDo.of(new BuildRowFn()))
            .apply(new WriteToText.WriteOneFilePerWindow(filenamePrefix));
    } else {
        teamAndScore
            .apply("ConvertToRow", ParDo.of(new BuildRowFn()))
            .apply(TextIO.write().to(filenamePrefix));
    }
    return PDone.in(teamAndScore.getPipeline());
}
Yes. Per the TextIO documentation:
If you want better control over how filenames are generated than the default policy allows, a custom FilenamePolicy can also be set using TextIO.Write.to(FilenamePolicy)
This is a perfectly valid example with Beam 2.1.0. You can call it on your data (a PCollection, for example):
import org.apache.beam.sdk.io.FileBasedSink.FilenamePolicy;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.fs.ResolveOptions.StandardResolveOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.transforms.display.DisplayData;

@SuppressWarnings("serial")
public class FilePolicyExample {

    public static void main(String[] args) {
        FilenamePolicy policy = new WindowedFilenamePolicy("somePrefix");
        // data
        data.apply(TextIO.write().to("your_DIRECTORY")
            .withFilenamePolicy(policy)
            .withWindowedWrites()
            .withNumShards(4));
    }

    private static class WindowedFilenamePolicy extends FilenamePolicy {

        final String outputFilePrefix;

        WindowedFilenamePolicy(String outputFilePrefix) {
            this.outputFilePrefix = outputFilePrefix;
        }

        @Override
        public ResourceId windowedFilename(
                ResourceId outputDirectory, WindowedContext input, String extension) {
            String filename = String.format(
                "%s-%s-%s-of-%s-pane-%s%s%s",
                outputFilePrefix,
                input.getWindow(),
                input.getShardNumber(),
                input.getNumShards() - 1,
                input.getPaneInfo().getIndex(),
                input.getPaneInfo().isLast() ? "-final" : "",
                extension);
            return outputDirectory.resolve(filename, StandardResolveOptions.RESOLVE_FILE);
        }

        @Override
        public ResourceId unwindowedFilename(
                ResourceId outputDirectory, Context input, String extension) {
            throw new UnsupportedOperationException("Expecting windowed outputs only");
        }

        @Override
        public void populateDisplayData(DisplayData.Builder builder) {
            builder.add(DisplayData.item("fileNamePrefix", outputFilePrefix)
                .withLabel("File Name Prefix"));
        }
    }
}
You can check https://beam.apache.org/releases/javadoc/2.3.0/org/apache/beam/sdk/io/FileIO.html for more information; search for "File naming" under "Writing files".
.apply(
    FileIO.<RootElement>write()
        .via(XmlIO
            .sink(RootElement.class)
            .withRootElement(ROOT_XML_ELEMENT)
            .withCharset(StandardCharsets.UTF_8))
        .to(FILE_PATH)
        .withNaming((window, pane, numShards, shardIndex, compression) -> NEW_FILE_NAME));

Dataflow DynamicDestinations unable to serialize org.apache.beam.sdk.io.gcp.bigquery.PrepareWrite

I am trying to use DynamicDestinations to write to a partitioned table in BigQuery where the partition name is mytable$yyyyMMdd. If I bypass DynamicDestinations and supply a hardcoded table name in .to(), it works; however, with DynamicDestinations I get the following exception:
java.lang.IllegalArgumentException: unable to serialize org.apache.beam.sdk.io.gcp.bigquery.PrepareWrite$1@6fff253c
at org.apache.beam.sdk.util.SerializableUtils.serializeToByteArray(SerializableUtils.java:53)
at org.apache.beam.sdk.util.SerializableUtils.clone(SerializableUtils.java:90)
at org.apache.beam.sdk.transforms.ParDo$SingleOutput.<init>(ParDo.java:591)
at org.apache.beam.sdk.transforms.ParDo.of(ParDo.java:435)
at org.apache.beam.sdk.io.gcp.bigquery.PrepareWrite.expand(PrepareWrite.java:51)
at org.apache.beam.sdk.io.gcp.bigquery.PrepareWrite.expand(PrepareWrite.java:36)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:514)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:473)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:297)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expandTyped(BigQueryIO.java:987)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expand(BigQueryIO.java:972)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expand(BigQueryIO.java:659)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:514)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:454)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:284)
at com.homedepot.payments.monitoring.eventprocessor.MetricsAggregator.main(MetricsAggregator.java:82)
Caused by: java.io.NotSerializableException: com.google.api.services.bigquery.model.TableReference
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
And here is the code:
PCollection<Event> rawEvents = pipeline
    .apply("ReadFromPubSub",
        PubsubIO.readProtos(EventOuterClass.Event.class)
            .fromSubscription(OPTIONS.getSubscription()))
    .apply("Parse", ParDo.of(new ParseFn()))
    .apply("ExtractAttributes", ParDo.of(new ExtractAttributesFn()));

EventTable table = new EventTable(OPTIONS.getProjectId(), OPTIONS.getMetricsDatasetId(), OPTIONS.getRawEventsTable());

rawEvents.apply(BigQueryIO.<Event>write()
    .to(new DynamicDestinations<Event, String>() {
        private static final long serialVersionUID = 1L;

        @Override
        public TableSchema getSchema(String destination) {
            return table.schema();
        }

        @Override
        public TableDestination getTable(String destination) {
            return new TableDestination(table.reference(), null);
        }

        @Override
        public String getDestination(ValueInSingleWindow<Event> element) {
            String dayString = DateTimeFormat.forPattern("yyyyMMdd").withZone(DateTimeZone.UTC).toString();
            return table.reference().getTableId() + "$" + dayString;
        }
    })
    .withFormatFunction(new SerializableFunction<Event, TableRow>() {
        public TableRow apply(Event event) {
            TableRow row = new TableRow();
            Event evnt = (Event) event;
            row.set(EventTable.Field.VERSION.getName(), evnt.getVersion());
            row.set(EventTable.Field.TIMESTAMP.getName(), evnt.getTimestamp() / 1000);
            row.set(EventTable.Field.EVENT_TYPE_ID.getName(), evnt.getEventTypeId());
            row.set(EventTable.Field.EVENT_ID.getName(), evnt.getId());
            row.set(EventTable.Field.LOCATION.getName(), evnt.getLocation());
            row.set(EventTable.Field.SERVICE.getName(), evnt.getService());
            row.set(EventTable.Field.HOST.getName(), evnt.getHost());
            row.set(EventTable.Field.BODY.getName(), evnt.getBody());
            return row;
        }
    })
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Any pointers in the correct direction would be greatly appreciated.
Thanks!
From inspecting the exception message and the code above, it seems that the EventTable field used within your anonymous DynamicDestinations class contains a TableReference field which is not serializable.
One workaround would be to convert the anonymous DynamicDestinations to a static inner class and define a constructor which stores only the serializable pieces of the EventTable needed to implement the interface.
For example:
private static class EventDestinations extends DynamicDestinations<Event, String> {

    private final TableSchema schema;
    private final TableDestination destination;
    private final String tableId;

    private EventDestinations(EventTable table) {
        this.schema = table.schema();
        this.destination = new TableDestination(table.reference(), null);
        this.tableId = table.reference().getTableId();
    }

    // ..
}
Looks like you're trying to fill a specific partition based on the event. Why not use:
SerializableFunction<ValueInSingleWindow<Event>, TableDestination>?
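For reference, a rough sketch of that alternative, reusing the rawEvents PCollection and dispositions from the question; the table spec string is a placeholder, and deriving the day from the element timestamp is purely illustrative:

rawEvents.apply(BigQueryIO.<Event>write()
    .to(new SerializableFunction<ValueInSingleWindow<Event>, TableDestination>() {
        @Override
        public TableDestination apply(ValueInSingleWindow<Event> value) {
            // Derive the partition decorator from the element's timestamp.
            String day = DateTimeFormat.forPattern("yyyyMMdd")
                .withZone(DateTimeZone.UTC)
                .print(value.getTimestamp());
            // "your-project:your_dataset.mytable" is a placeholder table spec.
            return new TableDestination("your-project:your_dataset.mytable$" + day, null);
        }
    })
    // plus the same .withSchema(...), .withFormatFunction(...), and dispositions as above
    );

Because the function only captures serializable values (strings, formatters), it avoids dragging the non-serializable TableReference into the closure.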

DynamicDestinations in Apache Beam

I have a PCollection<String>, say "X", that I need to dump into a BigQuery table.
The table destination and its schema are in a PCollection<TableRow>, say "Y".
How do I accomplish this in the simplest manner?
I tried extracting the table and schema from "Y" and saving them in static global variables (tableName and schema, respectively). But oddly, BigQueryIO.writeTableRows() always sees the value of the variable tableName as null, even though it gets the schema. I tried logging the values of those variables and I can see the values are there for both.
Here is my pipeline code:
static String tableName;
static TableSchema schema;

PCollection<String> read = p.apply("Read from input file",
    TextIO.read().from(options.getInputFile()));

PCollection<TableRow> tableRows = p.apply(
    BigQueryIO.read().fromQuery(NestedValueProvider.of(
        options.getfilename(),
        new SerializableFunction<String, String>() {
            @Override
            public String apply(String filename) {
                return "SELECT table,schema FROM `BigqueryTest.configuration` WHERE file='" + filename + "'";
            }
        })).usingStandardSql().withoutValidation());

final PCollectionView<List<String>> dataView = read.apply(View.asList());

tableRows.apply("Convert data read from file to TableRow",
    ParDo.of(new DoFn<TableRow, TableRow>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            tableName = c.element().get("table").toString();
            String[] schemas = c.element().get("schema").toString().split(",");
            List<TableFieldSchema> fields = new ArrayList<>();
            for (int i = 0; i < schemas.length; i++) {
                fields.add(new TableFieldSchema()
                    .setName(schemas[i].split(":")[0]).setType(schemas[i].split(":")[1]));
            }
            schema = new TableSchema().setFields(fields);
            // My code to convert data to TableRow format.
        }
    }).withSideInputs(dataView));

tableRows.apply("write to BigQuery",
    BigQueryIO.writeTableRows()
        .withSchema(schema)
        .to("ProjectID:DatasetID." + tableName)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Everything works fine; only the BigQueryIO.write operation fails, and I get the error that TableId is null.
I also tried using a SerializableFunction and returning the value from there, but I still get null.
Here is the code that I tried for it:
tableRows.apply("write to BigQuery",
    BigQueryIO.writeTableRows()
        .withSchema(schema)
        .to(new GetTable(tableName))
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));

public static class GetTable implements SerializableFunction<String, String> {

    String table;

    public GetTable() {
        this.table = tableName;
    }

    @Override
    public String apply(String arg0) {
        return "ProjectId:DatasetId." + table;
    }
}
I also tried using DynamicDestinations but I get an error saying schema is not provided. Honestly I'm new to the concept of DynamicDestinations and I'm not sure that I'm doing it correctly.
Here is the code that I tried for it:
tableRows2.apply(BigQueryIO.writeTableRows()
    .to(new DynamicDestinations<TableRow, TableRow>() {
        private static final long serialVersionUID = 1L;

        @Override
        public TableDestination getTable(TableRow dest) {
            List<TableRow> list = sideInput(bqDataView); // bqDataView contains table and schema
            String table = list.get(0).get("table").toString();
            String tableSpec = "ProjectId:DatasetId." + table;
            String tableDescription = "";
            return new TableDestination(tableSpec, tableDescription);
        }

        public String getSideInputs(PCollectionView<List<TableRow>> bqDataView) {
            return null;
        }

        @Override
        public TableSchema getSchema(TableRow destination) {
            return schema; // schema is getting added from the global variable
        }

        @Override
        public TableRow getDestination(ValueInSingleWindow<TableRow> element) {
            return null;
        }
    }.getSideInputs(bqDataView)));
Please let me know what I'm doing wrong and which path I should take.
Thank you.
Part of the reason you're having trouble is the two stages of pipeline execution. First, the pipeline is constructed on your machine; this is when all of the applications of PTransforms occur. In your first example, this is when the following lines are executed:
BigQueryIO.writeTableRows()
    .withSchema(schema)
    .to("ProjectID:DatasetID." + tableName)
The code within a ParDo, however, runs when your pipeline executes, and it does so on many machines. So the following code runs much later than pipeline construction:
@ProcessElement
public void processElement(ProcessContext c) {
    tableName = c.element().get("table").toString();
    ...
    schema = new TableSchema().setFields(fields);
    ...
}
This means that neither the tableName nor the schema fields will be set when the BigQueryIO sink is created.
Your idea to use DynamicDestinations is correct, but you need to move the code that actually generates the schema and the destination into that class, rather than relying on global variables that aren't available on all of the machines.
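For illustration, a minimal sketch of that idea, assuming (as in the question) each incoming TableRow carries its own "table" and "schema" fields in the name:type,name:type format; the class and its names are hypothetical, not a verified drop-in:

// Hypothetical sketch: destination and schema are derived from each element at execution time,
// not from static fields set inside a ParDo during some earlier step.
static class ConfigDrivenDestinations extends DynamicDestinations<TableRow, TableRow> {

    @Override
    public TableRow getDestination(ValueInSingleWindow<TableRow> element) {
        // The element itself acts as the destination key; it carries table + schema.
        return element.getValue();
    }

    @Override
    public Coder<TableRow> getDestinationCoder() {
        // Needed because TableRow is used as the destination key type.
        return TableRowJsonCoder.of();
    }

    @Override
    public TableDestination getTable(TableRow dest) {
        return new TableDestination("ProjectId:DatasetId." + dest.get("table"), null);
    }

    @Override
    public TableSchema getSchema(TableRow dest) {
        List<TableFieldSchema> fields = new ArrayList<>();
        for (String col : dest.get("schema").toString().split(",")) {
            String[] parts = col.split(":");
            fields.add(new TableFieldSchema().setName(parts[0]).setType(parts[1]));
        }
        return new TableSchema().setFields(fields);
    }
}

// Usage (same dispositions as in the question):
// tableRows.apply(BigQueryIO.writeTableRows()
//     .to(new ConfigDrivenDestinations())
//     .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
//     .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));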

classify new image on trained custom model in deeplearning4j (convolutional network)

I'm new to deeplearning4j. I already experimented with its word2vec functionality and everything was fine. But now I am a little bit confused regarding image classification. I was playing with this example:
https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/AnimalsClassification.java
I changed the "save" flag to true and my model is stored in the model.bin file.
Now comes the problematic part (I am sorry if this sounds like a silly question; maybe I am missing something really obvious here).
I created a separate class called AnimalClassifier whose purpose is to load the model from the model.bin file, restore the neural network from it, and then classify a single image using the restored network. For this single image I created a "temp" folder -> dl4j-examples/src/main/resources/animals/temp/ where I put a picture of a polar bear that was previously used in the training process in AnimalsClassification.java (I wanted to be sure that the image would be classified correctly, therefore I reused a picture from the "bear" folder).
This is my code trying to classify the polar bear:
protected static int height = 100;
protected static int width = 100;
protected static int channels = 3;
protected static int numExamples = 1;
protected static int numLabels = 1;
protected static int batchSize = 10;
protected static long seed = 42;
protected static Random rng = new Random(seed);
protected static int listenerFreq = 1;
protected static int iterations = 1;
protected static int epochs = 7;
protected static double splitTrainTest = 0.8;
protected static int nCores = 2;
protected static boolean save = true;
protected static String modelType = "AlexNet"; //

public static void main(String[] args) throws Exception {
    String basePath = FilenameUtils.concat(System.getProperty("user.dir"), "dl4j-examples/src/main/resources/");
    MultiLayerNetwork multiLayerNetwork = ModelSerializer.restoreMultiLayerNetwork(basePath + "model.bin", true);
    ParentPathLabelGenerator labelMaker = new ParentPathLabelGenerator();
    File mainPath = new File(System.getProperty("user.dir"), "dl4j-examples/src/main/resources/animals/temp/");
    FileSplit fileSplit = new FileSplit(mainPath, NativeImageLoader.ALLOWED_FORMATS, rng);
    BalancedPathFilter pathFilter = new BalancedPathFilter(rng, labelMaker, numExamples, numLabels, batchSize);
    InputSplit[] inputSplit = fileSplit.sample(pathFilter, 1);
    InputSplit analysedData = inputSplit[0];
    ImageRecordReader recordReader = new ImageRecordReader(height, width, channels);
    recordReader.initialize(analysedData);
    DataSetIterator dataIter = new RecordReaderDataSetIterator(recordReader, batchSize, 0, 4);
    while (dataIter.hasNext()) {
        DataSet testDataSet = dataIter.next();
        String expectedResult = testDataSet.getLabelName(0);
        List<String> predict = multiLayerNetwork.predict(testDataSet);
        String modelResult = predict.get(0);
        System.out.println("\nFor example that is labeled " + expectedResult + " the model predicted " + modelResult + "\n\n");
    }
}
After running this, I get this error:
java.lang.UnsupportedOperationException
at org.datavec.api.writable.ArrayWritable.toInt(ArrayWritable.java:47)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.getDataSet(RecordReaderDataSetIterator.java:275)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:186)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:389)
at org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator.next(RecordReaderDataSetIterator.java:52)
at org.deeplearning4j.examples.convolution.AnimalClassifier.main(AnimalClassifier.java:66)
Disconnected from the target VM, address: '127.0.0.1:63967', transport: 'socket'
Exception in thread "main" java.lang.IllegalStateException: Label names are not defined on this dataset. Add label names in order to use getLabelName with an id.
at org.nd4j.linalg.dataset.DataSet.getLabelName(DataSet.java:1106)
at org.deeplearning4j.examples.convolution.AnimalClassifier.main(AnimalClassifier.java:68)
I can see there is a method public void setLabels(INDArray labels) in MultiLayerNetwork.java, but I don't get how to use it (especially since it takes an INDArray as an argument).
I am also confused about why I have to specify the number of possible labels in the constructor of RecordReaderDataSetIterator. I would expect that the model already knows which labels to use (shouldn't it automatically use the labels that were used during training?). I guess maybe I am loading the picture in a completely wrong way...
So to summarize, I would simply like to achieve the following:
restore network from model (this is working)
load image to be classified (also working)
classify this image using the same labels that were used during training (bear, deer, duck, turtle) (tricky part)
Thank you in advance for your help or any hints!
So, summarizing your multiple questions here:
A record for images is two entries in a collection; the second one is the label. The label index is relative to the kind of record you pass in.
The second part of your question:
Multiple entries can be a part of a dataset. The list refers to a label for an item at a particular row in the minibatch.
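To make that concrete, here is a hedged sketch of how the reader in the question might be wired so that label names are attached, reusing the labelMaker, input split, and four training classes from above; the constructor overload and the label index of 1 are the ones the dl4j image examples commonly use, so treat this as a sketch under those assumptions rather than a verified fix:

// Pass the label generator so the reader attaches a label writable to each image record.
ImageRecordReader recordReader = new ImageRecordReader(height, width, channels, labelMaker);
recordReader.initialize(analysedData);

// For image records the label writable sits at index 1; 4 = number of classes used in training.
DataSetIterator dataIter = new RecordReaderDataSetIterator(recordReader, batchSize, 1, 4);

// The label names come from the record reader (derived from the parent directories),
// not from the restored MultiLayerNetwork.
List<String> labelNames = recordReader.getLabels();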

How to add an individual to a class using OWL API?

I want to add an individual to a class, and I referenced the docs on the OWL API official site.
Here is my code:
public void addIndividualsToClass(String className, String indName) throws OWLOntologyStorageException {
    /*
     * Add an individual to input class
     */
    OWLClass tClass = fac.getOWLClass(IRI.create(NS + className));
    OWLNamedIndividual tIndividual = fac.getOWLNamedIndividual(IRI.create(NS + indName));
    OWLClassAssertionAxiom classAssertion = fac.getOWLClassAssertionAxiom(tClass, tIndividual);
    manager.addAxiom(ont, classAssertion);
    manager.saveOntology(ont, new StreamDocumentTarget(new ByteArrayOutputStream()));
}
Then Eclipse throws this exception:
Exception in thread "main" java.lang.IllegalArgumentException: Comparison method violates its general contract!
at java.util.ComparableTimSort.mergeLo(ComparableTimSort.java:714)
at java.util.ComparableTimSort.mergeAt(ComparableTimSort.java:451)
at java.util.ComparableTimSort.mergeCollapse(ComparableTimSort.java:376)
at java.util.ComparableTimSort.sort(ComparableTimSort.java:182)
at java.util.ComparableTimSort.sort(ComparableTimSort.java:146)
at java.util.Arrays.sort(Arrays.java:472)
at java.util.Collections.sort(Collections.java:155)
at org.coode.owlapi.owlxml.renderer.OWLXMLObjectRenderer.visit(OWLXMLObjectRenderer.java:184)
at uk.ac.manchester.cs.owl.owlapi.OWLOntologyImpl.accept(OWLOntologyImpl.java:1630)
at org.coode.owlapi.owlxml.renderer.OWLXMLRenderer.render(OWLXMLRenderer.java:106)
at org.coode.owlapi.owlxml.renderer.OWLXMLOntologyStorer.storeOntology(OWLXMLOntologyStorer.java:73)
at org.semanticweb.owlapi.util.AbstractOWLOntologyStorer.storeOntology(AbstractOWLOntologyStorer.java:174)
at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.saveOntology(OWLOntologyManagerImpl.java:870)
at uk.ac.manchester.cs.owl.owlapi.OWLOntologyManagerImpl.saveOntology(OWLOntologyManagerImpl.java:861)
at Test.addIndividualsToClass(Test.java:146)
at Test.main(Test.java:155)
Can someone help me?
This should work. You should take a look at the Examples.java on the OWL-API page at http://owlapi.sourceforge.net/index.html
public static void createNewOnto() throws OWLOntologyCreationException,
        OWLOntologyStorageException {
    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    IRI ontologyIRI = IRI.create("http://example.com/owlapi/families");
    OWLOntology ont = manager.createOntology(ontologyIRI);
    OWLDataFactory factory = manager.getOWLDataFactory();
    OWLIndividual john = factory.getOWLNamedIndividual(IRI
        .create(ontologyIRI + "#John"));
    OWLIndividual mary = factory.getOWLNamedIndividual(IRI
        .create(ontologyIRI + "#Mary"));
    OWLIndividual susan = factory.getOWLNamedIndividual(IRI
        .create(ontologyIRI + "#Susan"));
    OWLIndividual bill = factory.getOWLNamedIndividual(IRI
        .create(ontologyIRI + "#Bill"));
    OWLObjectProperty hasWife = factory.getOWLObjectProperty(IRI
        .create(ontologyIRI + "#hasWife"));
    OWLObjectPropertyAssertionAxiom axiom1 = factory
        .getOWLObjectPropertyAssertionAxiom(hasWife, john, mary);
    AddAxiom addAxiom1 = new AddAxiom(ont, axiom1);
    // Now we apply the change using the manager.
    manager.applyChange(addAxiom1);
    System.out.println("RDF/XML: ");
    manager.saveOntology(ont, new StreamDocumentTarget(System.out));
}
Just mentioning that some of these examples and methods are deprecated.
The following works with version 5 of the OWL-API. It shows how to add a class assertion axiom to the ontology, i.e. asserting an individual is the instance of a class.
public void addIndividualsToClass(String className, String indName) throws Exception {
    PrefixManager pm = new DefaultPrefixManager(NS); // assuming NS is a string and is your namespace
    OWLClass tClass = fac.getOWLClass("#" + className, pm); // assuming '#' is your delimiter
    OWLIndividual tIndividual = fac.getOWLNamedIndividual("#" + indName, pm);
    OWLClassAssertionAxiom classAssertion = fac.getOWLClassAssertionAxiom(tClass, tIndividual);
    manager.addAxiom(ont, classAssertion);
    manager.saveOntology(ont);
}
