I saw in the Jena documentation that Jena supports derivations, but my code can't produce any derivation. This is my code:
import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.Derivation;
import org.apache.jena.reasoner.ValidityReport;
import org.apache.jena.reasoner.rulesys.RuleDerivation;
import org.apache.jena.util.FileManager;
import org.apache.jena.vocabulary.*;
import java.io.*;
import java.util.Iterator;
public class Qi2 {
private static String fnameschema = "./data/Qischeme.rdf";
private static String fnameinstance = "./data/QiData.rdf";
public static void main (String args[]) {
// load the schema and the instance data
Model schema = FileManager.get().loadModel(fnameschema);
Model data = FileManager.get().loadModel(fnameinstance);
InfModel infmodel = ModelFactory.createRDFSModel(schema, data);
int k = 0;
final PrintWriter out = new PrintWriter(System.out);
Resource wilson = infmodel.getResource("http://www.example.org/ustb#Wilson_Harvey");
Resource person = infmodel.getResource("http://www.example.org/ustb#Person");
for (StmtIterator i = infmodel.listStatements(wilson, (Property) null, person); i.hasNext(); ) {
Statement s = i.nextStatement();
System.out.println(s);
final Iterator<Derivation> derivations = infmodel.getDerivation(s);
assert( null != derivations );
if (derivations.hasNext())
System.out.println("have vaule");
k++;
}
System.out.println(k);
}
}
The code never enters the if (derivations.hasNext()) branch. I want to know whether Jena's RDFS reasoner supports getDerivation().
You have to set the PROPderivationLogging parameter of the reasoner to true, which can be done as follows:
Model schema = FileManager.get().loadModel(schemaFile);
Model data = FileManager.get().loadModel(dataFile);
Resource config = ModelFactory.createDefaultModel()
.createResource()
.addProperty(ReasonerVocabulary.PROPderivationLogging, "true");
Reasoner reasoner = RDFSRuleReasonerFactory.theInstance().create(config);
InfModel infModel = ModelFactory.createInfModel(reasoner, schema, data);
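With derivation logging enabled, you can then print an explanation for each inferred statement. A minimal sketch, reusing the wilson and person resources from the question code (Derivation.printTrace is part of the Jena reasoner API):
PrintWriter out = new PrintWriter(System.out);
for (StmtIterator i = infModel.listStatements(wilson, (Property) null, person); i.hasNext(); ) {
    Statement s = i.nextStatement();
    for (Iterator<Derivation> d = infModel.getDerivation(s); d.hasNext(); ) {
        // prints the rule and the matched triples that produced s
        d.next().printTrace(out, true);
    }
}
out.flush();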
I am using Apache Jena from within Tomcat 8 under Windows 10 (Eclipse IDE) and am not able to initialise the TDB dataset. The initialisation code is in a static initialiser inside a try-catch block, but no exception is thrown and the finally clause is invoked. I tried relative directory names, absolute path names, and an empty path (in-memory dataset). The dataset remains null, so triples cannot be written. What do I need to change in the code to initialise the dataset?
Here is the code:
package knowledgegraph;
import org.apache.jena.tdb.TDBFactory;
import org.apache.jena.rdf.model.*;
import org.apache.jena.shared.JenaException;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStreamWriter;
import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
public class JenaProcessor {
static Dataset dataset = null;
static String ns = "http://www.lke.com/lke.owl#";
static {
try {
// dataset = TDBFactory.createDataset("lke");
// dataset = TDBFactory.createDataset("C:\\Users\\Diptendu\\Desktop\\lke");
dataset = TDBFactory.createDataset();
System.out.println("TDB initialised");
}
// catch(Exception ex) {
catch(JenaException ex) {
ex.printStackTrace();
}
finally {
System.out.println("Finally clause");
}
}
static public void writeTriple(String corpus_file_id, String subject, String predicate, String object) {
dataset.begin(ReadWrite.WRITE) ;
Model model = null;
try {
model = dataset.getNamedModel(corpus_file_id);
// model.enterCriticalSection(Lock.WRITE);
// write triples to model
Resource subjectResource = model.createResource(ns.concat(subject));
Property property = model.createProperty(ns.concat(predicate));
Resource objectResource = model.createResource(ns.concat(object));
// model.add(subjectResource, property, objectResource);
Statement statement = model.createStatement(subjectResource, property, objectResource);
model.add(statement);
dataset.commit();
// TDB.sync(model);
} finally {
// model.leaveCriticalSection();
model.close();
dataset.end();
}
}
}
Asked and responded to on users@jena:
https://lists.apache.org/thread.html/r9f788bf21ceb3991329ab0ba3c649d94f2983f92aa3c0a76af788e52%40%3Cusers.jena.apache.org%3E
It's because you are trying to close the model after you've committed the transaction, so the error message is quite correct in that you are no longer in a transaction at that point.
Put the dataset.commit() after the model.close() line and it will work.
Rob
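Applying that advice, writeTriple would look roughly like this (a sketch: the commit now happens after the model handle is closed, while the transaction is still open, and dataset.end() always releases it):
static public void writeTriple(String corpus_file_id, String subject, String predicate, String object) {
    dataset.begin(ReadWrite.WRITE);
    try {
        Model model = dataset.getNamedModel(corpus_file_id);
        Resource subjectResource = model.createResource(ns.concat(subject));
        Property property = model.createProperty(ns.concat(predicate));
        Resource objectResource = model.createResource(ns.concat(object));
        model.add(subjectResource, property, objectResource);
        model.close();    // release the model handle first...
        dataset.commit(); // ...then commit, still inside the transaction
    } finally {
        dataset.end();    // always end the transaction
    }
}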
I am using Lucene 6.6 and I am having difficulty importing lucene.queryparser; I checked the Lucene documentation and it no longer exists there. I am using the code below. Is there an alternative for QueryParser in Lucene 6?
import java.io.IOException;
import java.text.ParseException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
public class HelloLucene {
public static void main(String[] args) throws IOException, ParseException {
// 0. Specify the analyzer for tokenizing text.
// The same analyzer should be used for indexing and searching
StandardAnalyzer analyzer = new StandardAnalyzer();
// 1. create the index
Directory index = new RAMDirectory();
IndexWriterConfig config = new IndexWriterConfig(analyzer);
IndexWriter w = new IndexWriter(index, config);
addDoc(w, "Lucene in Action", "193398817");
addDoc(w, "Lucene for Dummies", "55320055Z");
addDoc(w, "Managing Gigabytes", "55063554A");
addDoc(w, "The Art of Computer Science", "9900333X");
w.close();
// 2. query
String querystr = args.length > 0 ? args[0] : "lucene";
// the "title" arg specifies the default field to use
// when no field is explicitly specified in the query.
Query q = null;
try {
q = new QueryParser("title", analyzer).parse(querystr); // the Version argument was removed from QueryParser in Lucene 5+
} catch (org.apache.lucene.queryparser.classic.ParseException e) {
e.printStackTrace();
}
// 3. search
int hitsPerPage = 10;
IndexReader reader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(reader);
TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage); // the boolean docsScoredInOrder flag no longer exists in Lucene 6
searcher.search(q, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
// 4. display results
System.out.println("Found " + hits.length + " hits.");
for (int i = 0; i < hits.length; ++i) {
int docId = hits[i].doc;
Document d = searcher.doc(docId);
System.out.println((i + 1) + ". " + d.get("isbn") + "\t" + d.get("title"));
}
// reader can only be closed when there
// is no need to access the documents any more.
reader.close();
}
private static void addDoc(IndexWriter w, String title, String isbn) throws IOException {
Document doc = new Document();
doc.add(new TextField("title", title, Field.Store.YES));
// use a string field for isbn because we don't want it tokenized
doc.add(new StringField("isbn", isbn, Field.Store.YES));
w.addDocument(doc);
}
}
Thanks!
The problem is solved.
Initially only lucene-core-6.6.0 was added to the build path, but lucene-queryparser-6.6.0 is a separate jar file that needs to be added as well.
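If you build with Maven instead of a manual build path (an assumption about your setup), the equivalent dependency would be:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>6.6.0</version>
</dependency>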
I have many queries with many select fields and some nested entities. This is a simplified version of the nested entity structure:
public class OuterEntity{
private String name1;
private String name2;
private MiddleEntity middle;
//... get/set..
}
public class MiddleEntity{
private String surname1;
private String surname2;
private InnerEntity inner;
//... get/set..
}
public class InnerEntity{
private String nickname1;
private String nickname2;
//... get/set..
}
All entities have a 1:n relationship, so I can write a single long query to get all the data. I want to avoid multiple queries that fetch each entity separately.
select
o.name1,
o.name2,
m.surname1,
m.surname2,
i.nickname1,
i.nickname2
from outertable o
join middletable m on m.id=o.middle
join innertable i on i.id=m.inner
I wish to have a RowMapper for this mapping that uses column-name aliases to build and nest all the entities. Maybe I can describe the whole nesting path with column aliases:
select
o.name1 as name1,
o.name2 as name2,
m.surname1 as middle_surname1,
m.surname2 as middle_surname2,
i.nickname1 as middle_inner_nickname1,
i.nickname2 as middle_inner_nickname2
from outertable o
join middletable m on m.id=o.middle
join innertable i on i.id=m.inner
Do you think it is possible? Does JdbcTemplate provide something for this need?
I'm not asking anyone to code a new RowMapper for me; I just want to know whether something like this already exists, or whether there is a better solution, because I think this is a very common problem.
My current solution is to fetch the entities separately (one query per entity) and map them with BeanPropertyRowMapper. Another option would be to write a different RowMapper for each query, but I will keep that as a last resort because I would have to write many different mappers for what is common logic.
ORM frameworks like Hibernate are not an option for me.
I have not found anything so far, so I tried to write a custom mapper based on the BeanPropertyRowMapper source.
import java.beans.PropertyDescriptor;
import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.NotWritablePropertyException;
import org.springframework.beans.PropertyAccessorFactory;
import org.springframework.dao.DataRetrievalFailureException;
import org.springframework.jdbc.core.BeanPropertyRowMapper;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.support.JdbcUtils;
/**
 * @author tobia.scapin
 *
 * BeanRowMapper for nesting beans of 1:n entities; it uses query aliases to build the entity nesting.
 * Field names should be exactly the same as the bean properties: respect case and do not use underscores in field names.
 * The "id" column name/alias is used to check whether a nested entity should be null.
 *
 * example:
 * select
 *   a.p1 as property1,
 *   b.id as entityname_id,            // <-- if this value is null, the entity will be null
 *   b.p1 as entityname_property1,
 *   b.p2 as entityname_property2,
 *   c.id as entityname_subentity_id,  // <-- if this value is null, the subentity will be null
 *   c.p1 as entityname_subentity_property1
 * from a, b, c
 *
 * @param <T>
 */
public class NestedBeanAliasRowMapper<T> implements RowMapper<T> {
private static final String NESTING_SEPARATOR = "_";
private static final String NULLIZER_FIELD = "id";
@SuppressWarnings("rawtypes")
private final static List<Class> TYPES;
static{
TYPES=Arrays.asList(new Class[]{ int.class, boolean.class, byte.class, short.class, long.class, double.class, float.class, Boolean.class, Integer.class, Byte.class, Short.class, Long.class, BigDecimal.class, Double.class, Float.class, String.class, Date.class});
}
private Class<T> mappedClass;
private Map<String, PropertyDescriptor> mappedFields;
private Map<String, PropertyDescriptor> mappedBeans;
@SuppressWarnings("rawtypes")
private Map<Class,NestedBeanAliasRowMapper> mappersCache=new HashMap<Class,NestedBeanAliasRowMapper>();
private Map<String,BeanProp> beanproperties=null;
public NestedBeanAliasRowMapper(Class<T> mappedClass) {
initialize(mappedClass);
}
/**
* Initialize the mapping metadata for the given class.
* #param mappedClass the mapped class
*/
protected void initialize(Class<T> mappedClass) {
this.mappedClass = mappedClass;
mappersCache.put(mappedClass, this);
this.mappedFields = new HashMap<String, PropertyDescriptor>();
this.mappedBeans = new HashMap<String, PropertyDescriptor>();
PropertyDescriptor[] pds = BeanUtils.getPropertyDescriptors(mappedClass);
for (PropertyDescriptor pd : pds) {
if (pd.getWriteMethod() != null) {
if(TYPES.contains(pd.getPropertyType()))
this.mappedFields.put(pd.getName(), pd);
else
this.mappedBeans.put(pd.getName(), pd);
}
}
}
@Override
public T mapRow(ResultSet rs, int rowNumber) throws SQLException {
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
List<Integer> cols=new ArrayList<Integer>();
for (int index = 1; index <= columnCount; index++)
cols.add(index);
return mapRow(rs, rowNumber, cols, "", true);
}
@SuppressWarnings({ "unchecked", "rawtypes" })
public T mapRow(ResultSet rs, int rowNumber, List<Integer> cols, String aliasPrefix, boolean root) throws SQLException {
T mappedObject = BeanUtils.instantiate(this.mappedClass);
BeanWrapper bw = PropertyAccessorFactory.forBeanPropertyAccess(mappedObject);
ResultSetMetaData rsmd = rs.getMetaData();
if(rowNumber==0) beanproperties=new HashMap<String,BeanProp>();
for (int index : cols) {
String column = JdbcUtils.lookupColumnName(rsmd, index);
if(aliasPrefix!=null && column.length()>aliasPrefix.length() && column.substring(0, aliasPrefix.length()).equals(aliasPrefix))
column=column.substring(aliasPrefix.length()); //remove the prefix from column-name
PropertyDescriptor pd = this.mappedFields.get(column);
if (pd != null) {
try {
Object value = getColumnValue(rs, index, pd);
if(!root && NULLIZER_FIELD.equals(column) && value==null)
return null;
bw.setPropertyValue(pd.getName(), value);
}
catch (NotWritablePropertyException ex) {
throw new DataRetrievalFailureException("Unable to map column '" + column + "' to property '" + pd.getName() + "'", ex);
}
}else if(rowNumber==0 && column.contains(NESTING_SEPARATOR)){
String[] arr=column.split(NESTING_SEPARATOR);
column=arr[0];
PropertyDescriptor bpd = this.mappedBeans.get(column);
if(bpd!=null){
BeanProp beanprop=beanproperties.get(column);
if(beanprop==null){
beanprop=new BeanProp();
beanprop.setClazz(bpd.getPropertyType());
beanproperties.put(column, beanprop);
}
beanprop.addIndex(index);
}
}
}
if(!beanproperties.isEmpty()) for (String beanname : beanproperties.keySet()) {
BeanProp beanprop=beanproperties.get(beanname);
NestedBeanAliasRowMapper mapper=mappersCache.get(beanprop.getClazz());
if(mapper==null){
mapper=new NestedBeanAliasRowMapper<>(beanprop.getClazz());
mappersCache.put(beanprop.getClazz(), mapper);
}
Object value = mapper.mapRow(rs, rowNumber, beanprop.getIndexes(), aliasPrefix+beanname+NESTING_SEPARATOR, false);
bw.setPropertyValue(beanname, value);
}
return mappedObject;
}
protected Object getColumnValue(ResultSet rs, int index, PropertyDescriptor pd) throws SQLException {
return JdbcUtils.getResultSetValue(rs, index, pd.getPropertyType());
}
public static <T> NestedBeanAliasRowMapper<T> newInstance(Class<T> mappedClass) {
return new NestedBeanAliasRowMapper<T>(mappedClass);
}
@SuppressWarnings("rawtypes")
private class BeanProp{
private Class clazz;
private List<Integer> indexes=new ArrayList<Integer>();
public Class getClazz() {
return clazz;
}
public void setClazz(Class clazz) {
this.clazz = clazz;
}
public List<Integer> getIndexes() {
return indexes;
}
public void addIndex(Integer index) {
this.indexes.add(index);
}
}
}
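For reference, a usage sketch (assuming a configured JdbcTemplate instance named jdbcTemplate; the query is the aliased one from above):
List<OuterEntity> rows = jdbcTemplate.query(
    "select o.name1 as name1, o.name2 as name2, "
    + "m.surname1 as middle_surname1, m.surname2 as middle_surname2, "
    + "i.nickname1 as middle_inner_nickname1, i.nickname2 as middle_inner_nickname2 "
    + "from outertable o "
    + "join middletable m on m.id=o.middle "
    + "join innertable i on i.id=m.inner",
    new NestedBeanAliasRowMapper<OuterEntity>(OuterEntity.class));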
I'm trying to import RDF triples into OrientDB with the help of TinkerPop/Blueprints.
I found the basic usage here.
This is how far I've got:
import info.aduna.iteration.CloseableIteration;
import org.openrdf.model.Statement;
import org.openrdf.model.ValueFactory;
import org.openrdf.sail.Sail;
import org.openrdf.sail.SailConnection;
import org.openrdf.sail.SailException;
import com.hp.hpl.jena.graph.Node;
import com.hp.hpl.jena.graph.Triple;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.blueprints.oupls.sail.GraphSail;
import de.hof.iisys.relationExtraction.jena.parser.impl.ParserStreamIterator;
import de.hof.iisys.relationExtraction.neo4j.importer.Importer;
public class ImporterJenaTriples extends Importer {
private OrientGraph graph = null;
private Sail sail = null;
private SailConnection sailConnection = null;
private ValueFactory valueFactory = null;
private Thread parserThread = null;
public ImporterJenaTriples(ParserStreamIterator parser, String databasePath) throws SailException {
this.parser = parser;
this.databasePath = databasePath;
this.initialize();
}
private void initialize() throws SailException {
this.graph = new OrientGraph(this.databasePath);
this.sail = new GraphSail<OrientGraph>(graph);
sail.initialize();
this.sailConnection = sail.getConnection();
this.valueFactory = sail.getValueFactory();
}
public void startImport() {
this.parserThread = new Thread(this.parser);
this.parserThread.start();
try {
Triple next = (Triple) this.parser.getIterator().next();
Node subject = next.getSubject();
Node predicate = next.getPredicate();
Node object = next.getObject();
} catch (SailException e) {
e.printStackTrace();
}
try {
CloseableIteration<? extends Statement, SailException> results = this.sailConnection.getStatements(null, null, null, false);
while(results.hasNext()) {
System.out.println(results.next());
}
} catch (SailException e) {
e.printStackTrace();
}
}
public void stopImport() throws InterruptedException {
this.parser.terminate();
this.parserThread.join();
}
}
What I need to do now is distinguish the types of subject, predicate and object, but the problem is that I don't know which types they are and how I have to use the ValueFactory to create them and add the Statement to my SailConnection. Unfortunately I can't find an example of how to use it. Maybe someone has done this before and knows how to continue.
I guess you need to convert from Jena object types to Sesame ones and use the ValueFactory.
The unsupported project https://github.com/afs/JenaSesame may have some code for that.
But mixing Jena and Sesame seems to make things more complicated - have you considered using the Sesame parser and getting Sesame objects that can go straight into the SailConnection?
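For illustration, a minimal conversion sketch (assuming Sesame 2.x types; literal datatypes and language tags are ignored for brevity, and types are fully qualified to avoid clashing with the Jena imports in your class):
// Map a Jena Node onto the corresponding Sesame Value.
static org.openrdf.model.Value toSesame(Node node, ValueFactory vf) {
    if (node.isURI()) {
        return vf.createURI(node.getURI());
    } else if (node.isBlank()) {
        return vf.createBNode(node.getBlankNodeLabel());
    } else { // literal; datatype/language handling omitted
        return vf.createLiteral(node.getLiteralLexicalForm());
    }
}
Then each parsed triple can be added to the connection (recent Sesame versions also require sailConnection.begin() before the writes):
sailConnection.addStatement(
    (org.openrdf.model.Resource) toSesame(subject, valueFactory),
    (org.openrdf.model.URI) toSesame(predicate, valueFactory),
    toSesame(object, valueFactory));
sailConnection.commit();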
Stack Overflow is my last resort for my questions: the Vaadin forum is really quiet and DynamicReports has no forum.
I have a problem integrating DynamicReports, which is based on JasperReports, with the Vaadin class named "Embedded". The "Embedded" class needs a StreamResource object, and everything ends up in the getStream() function, which in my case never gets called.
Here is my code:
//
//
//
public void build(Application app) throws IOException, DRException {
final JasperReportBuilder report = DynamicReports.report();
report.addColumn(Columns.column("Item", "item", DataTypes.stringType()));
report.addColumn(Columns.column("Quantity", "quantity", DataTypes.integerType()));
report.addColumn(Columns.column("Unit price", "unitprice", DataTypes.bigDecimalType()));
report.addTitle(Components.text("Getting started"));
report.addPageFooter(Components.pageXofY());
report.setDataSource(createDataSource());
StreamResource.StreamSource resstream = new filePDF(report);
StreamResource ress = new StreamResource(resstream, "abc.pdf", app);
//
ress.setMIMEType("application/pdf");
//
Embedded c = new Embedded("Title", ress);
c.setSource(ress);
c.setMimeType("application/pdf");
c.setType(Embedded.TYPE_BROWSER);
c.setSizeFull();
c.setHeight("800px");
c.setParameter("Content-Disposition", "attachment; filename=" + ress.getFilename());
//
app.getMainWindow().removeAllComponents();
app.getMainWindow().addComponent(c);
}
//
//
//
private JRDataSource createDataSource() {
DataSource dataSource = new DataSource("item", "quantity", "unitprice");
dataSource.add("Notebook", 1, new BigDecimal(500));
dataSource.add("DVD", 5, new BigDecimal(30));
dataSource.add("DVD", 1, new BigDecimal(28));
dataSource.add("DVD", 5, new BigDecimal(32));
dataSource.add("Book", 3, new BigDecimal(11));
dataSource.add("Book", 1, new BigDecimal(15));
dataSource.add("Book", 5, new BigDecimal(10));
dataSource.add("Book", 8, new BigDecimal(9));
return (JRDataSource) dataSource;
}
And this is "filePDF" class:
/**
*
*/
package com.example.postgrekonek;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import net.sf.dynamicreports.jasper.builder.JasperReportBuilder;
import net.sf.dynamicreports.report.exception.DRException;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JasperRunManager;
import com.vaadin.Application;
import com.vaadin.terminal.StreamResource;
/**
 * @author hehehe
 *
 */
public class filePDF implements StreamResource.StreamSource {
private JasperReportBuilder report;
//
public filePDF(final JasperReportBuilder rpt) {
report = rpt;
}
@Override
public InputStream getStream() {
//
ByteArrayOutputStream os = new ByteArrayOutputStream();
//
//os.write(JasperRunManager.runReportToPdf(report.toJasperReport(), new HashMap()));
try {
report.toPdf(os);
try {
os.flush();
} catch (IOException e) {
//
e.printStackTrace();
}
} catch (DRException e) {
//
e.printStackTrace();
}
return new ByteArrayInputStream(os.toByteArray());
}
}
And this is "Datasource" class:
/* Dynamic reports - Free Java reporting library for creating reports dynamically
*
* (C) Copyright 2010 Ricardo Mariaca
*
* http://dynamicreports.sourceforge.net
*
* This library is free software; you can redistribute it and/or modify it
* under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 3 of the License, or
* (at your option) any later version.
*
* This library is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
* or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
* License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
package net.sf.dynamicreports.examples;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import net.sf.jasperreports.engine.JRDataSource;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JRField;
/**
* @author Ricardo Mariaca (dynamicreports@gmail.com)
*/
public class DataSource implements JRDataSource {
private String[] columns;
private List<Map<String, Object>> values;
private Iterator<Map<String, Object>> iterator;
private Map<String, Object> currentRecord;
public DataSource(String ...columns) {
this.columns = columns;
this.values = new ArrayList<Map<String, Object>>();
}
public void add(Object ...values) {
Map<String, Object> row = new HashMap<String, Object>();
for (int i = 0; i < values.length; i++) {
row.put(columns[i], values[i]);
}
this.values.add(row);
}
public Object getFieldValue(JRField field) throws JRException {
return currentRecord.get(field.getName());
}
public boolean next() throws JRException {
if (iterator == null) {
this.iterator = values.iterator();
}
boolean hasNext = iterator.hasNext();
if (hasNext) {
currentRecord = iterator.next();
}
return hasNext;
}
}
Maybe this is a browser cache issue. Have you tried ress.setCacheTime(1)?
For more efficient streaming you should look at http://ostermiller.org/convert_java_outputstream_inputstream.html
In short, add a producer thread to handle the report output and use a circular buffer to host it as input.
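A rough sketch of that producer-thread idea using the JDK's piped streams (standing in for the circular-buffer classes from that page; needs java.io.PipedInputStream and java.io.PipedOutputStream imports, and error handling is kept minimal):
@Override
public InputStream getStream() {
    try {
        final PipedOutputStream out = new PipedOutputStream();
        final PipedInputStream in = new PipedInputStream(out);
        new Thread(new Runnable() {
            public void run() {
                try {
                    report.toPdf(out); // producer: writes the PDF as it is generated
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try { out.close(); } catch (IOException ignored) {}
                }
            }
        }).start();
        return in; // consumer: Vaadin reads from here as bytes become available
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}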