Configuring Jena Fuseki + inference and TDB?

I am new to Jena TDB and Fuseki. I would like to load Lehigh University Benchmark (LUBM) data generated with their data generator (ver. 1.7) into Fuseki. This is about 400 .owl files. I used the following configuration file, which comes with Fuseki, for inferencing:
<#service1> rdf:type fuseki:Service ;
    fuseki:name "inf" ;                        # http://host/inf
    fuseki:serviceQuery "sparql" ;             # SPARQL query service
    # fuseki:serviceUpdate "update" ;
    fuseki:serviceReadWriteGraphStore "data" ;
    # A separate read-only graph store endpoint:
    fuseki:serviceReadGraphStore "get" ;
    fuseki:dataset <#dataset> ;
    .

<#dataset> rdf:type ja:RDFDataset ;
    ja:defaultGraph <#model_inf> ;
    .

<#model_inf> a ja:InfModel ;
    ja:baseModel <#tdbGraph> ;
    ja:reasoner [
        ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner>
    ] .

<#tdbDataset> rdf:type tdb:DatasetTDB ;
    tdb:location "myDB" ;
    tdb:unionDefaultGraph true ;
    .

<#tdbGraph> rdf:type tdb:GraphTDB ;
    tdb:dataset <#tdbDataset> .
Fuseki starts without any issues. However, when I execute the following command:
./s-put http://localhost:3030/inf/data default ~/Owl/univ-bench.owl
I get an error: 405 HTTP method PUT is not supported by this URL http://localhost:3030/inf/data?default
I have a couple of questions:
1. The update in the config file is clearly not disabled, so why do I get this message?
2. In order to load all 400 .owl files as one graph, I apparently have to disable updates and enable tdb:unionDefaultGraph true (this is mentioned in the config file that came with Fuseki). If that is the case, how on earth am I supposed to load the data into Fuseki?
Please let me know what I am missing here and how I can do this correctly.
Thanks in advance for the help.
Edit: I found out that you will need to add the following:
fuseki:serviceReadWriteGraphStore "data" ;
# A separate read-only graph store endpoint:
fuseki:serviceReadGraphStore "get" ;
in order to be able to use s-put to load data. However, every time I add a new file it overwrites the data from the previous file, so the inferencing doesn't work. What did I do wrong here? How do I load the data so that all the files end up in the same graph and inferencing works?
Edit:
Digging more into this problem, I found out that there are two ways to load the data.
First, you can add the following where you define the model in the config file:
ja:content [ ja:externalContent <file:///Path_to_owl_file> ] ;
In my case I added it under <#model_inf> a ja:InfModel ;. However, with 400 files that gets really tedious.
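Roughly, the model definition then looks like this (the file path is a placeholder, and the rest is copied from the config above):
<#model_inf> a ja:InfModel ;
    ja:baseModel <#tdbGraph> ;
    ja:content [ ja:externalContent <file:///Path_to_owl_file> ] ;
    ja:reasoner [
        ja:reasonerURL <http://jena.hpl.hp.com/2003/OWLFBRuleReasoner>
    ] .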
Alternatively, you can load the data separately using tdbloader2 and point the config file at the directory that tdbloader2 builds as a database, which is also described here:
$ tdbloader2 --loc tdb PATH_TO_DIR_or_OWL_Files
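With that approach, the TDB dataset in the assembler file just needs to point at the tdbloader2 output directory; a minimal sketch, reusing the dataset definition above (the location string is whatever was passed to --loc):
<#tdbDataset> rdf:type tdb:DatasetTDB ;
    tdb:location "tdb" ;
    tdb:unionDefaultGraph true ;
    .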
The issue currently is that when I run a simple query, for instance the following one, I get an out-of-memory error.
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ub: <http://cs.uga.edu#>
SELECT *
WHERE
{
?X rdf:type ub:GraduateStudent .
?X ub:takesCourse <http://www.Department0.University0.edu/GraduateCourse0>
}
I increased the memory for fuseki-server (in the server script) to as much as 5 GB and still get an out-of-memory error for this simple query. Any idea why that might be happening?
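For reference, the fuseki-server shell script typically reads its heap setting from the JVM_ARGS variable, so the change amounts to something like the following (the --config flag is an assumption about how the server is being launched here):
JVM_ARGS="-Xmx5G" ./fuseki-server --config=config.ttl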

s-put does a PUT, which is defined to be a "replace contents" operation.
Use s-post to add to a graph.
LUBM is sufficiently simple in structure that (1) it is not very realistic and (2) inference can be applied to each university file on its own and the expanded data loaded, so that at query time it has all been materialized.
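For example, to append one of the generated files to the default graph rather than replacing it (the file name is just an illustration of the generator's output):
./s-post http://localhost:3030/inf/data default ~/Owl/University0_0.owl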

Related

Microsoft.Jet.OLEDB.4.0, JOIN two MDB files

I am having trouble figuring out the correct syntax for the file path:
SELECT c1.Produkt, c2.Name
FROM Reservation c1
LEFT JOIN [Articles.mdb].[Articles] c2 ON c1.Produkt = c2.Produkt
GROUP BY c1.Produkt, c2.Name
Right now it searches for Articles.mdb inside the application folder. I would like to specify the path, for example c:\database\articles.mdb, but unfortunately I cannot figure out how to do it.
I tried [c:\database\articles.mdb] and ['c:\database\articles.mdb'], and I get either "Incorrect Parameter" or "Incorrect Filename".
Please help.
UPDATE:
After removing the parameter check from TADOQuery and entering the text like this:
LEFT JOIN [c:\Articles.mdb].[Articles] c2 ON c1.Produkt = c2.Produkt
It works.
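Putting that fix together with the original query and the path from the question, the working statement presumably looks like this:
SELECT c1.Produkt, c2.Name
FROM Reservation c1
LEFT JOIN [c:\database\articles.mdb].[Articles] c2 ON c1.Produkt = c2.Produkt
GROUP BY c1.Produkt, c2.Name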

How to fix a "custom function class not registered" error in Apache Jena Fuseki?

I need a custom filter function in apache-jena-fuseki. I tried adding the custom function class name to config.ttl and added the function class files to the classpath, but it always throws an error that the function is not registered.
Can anyone please share a detailed approach I can try, or some documentation? I desperately need it.
I added the following line to the configuration file:
[] ja:loadClass "org.apache.jena.sparql.function.library.function" .
The class file is in the folder /home/user/custom_functions/.
The class file's package name is org.apache.jena.sparql.function.library.
The Java command to launch the Fuseki server is:
java -cp /home/user/custom_functions/function.class:/home/user/apache-jena-4.5.0/lib-src/*:/home/user/apache-jena-4.5.0/lib/* -jar fuseki-server.jar
The function takes one argument.
When I run the query, the error log says that no FunctionFactory is registered for the function.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX java: <http://www.w3.org/2007/uwa/context/java.owl#>
PREFIX f: <java:org.apache.jena.sparql.function.library.>
SELECT ?s ?o {
?s rdfs:label ?o .
FILTER (f:function(?o) ) .
}
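For what it's worth, two things stand out in the launch command above: java -jar ignores -cp, and a classpath entry should be a directory containing the compiled package hierarchy (or a jar), not a .class file. A hedged sketch of a launch command along those lines, assuming the compiled class sits under /home/user/custom_functions/org/apache/jena/sparql/function/library/, that the fuseki-server.jar's main class is org.apache.jena.fuseki.cmd.FusekiCmd (this can vary by version), and that config.ttl is passed with --config:
java -cp '/home/user/custom_functions:fuseki-server.jar' org.apache.jena.fuseki.cmd.FusekiCmd --config=config.ttl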

Perform an INSERT in Jena Fuseki with SPARQL gem (Ruby)

So I'm developing an API in Rails and using Jena Fuseki to store triples, and right now I'm trying to perform an INSERT into a named graph. The query is correct, since I ran it on Jena and it worked perfectly. However, no matter what I do from the Rails CLI, I keep getting the same error message:
SPARQL::Client::MalformedQuery: Error 400: SPARQL Update: No 'update=' parameter
I've created a method that takes the parameters of the object I'm trying to insert and specifies the graph where I want them:
def self.insert_taxon(uri, label, comment, subclass_of)
  endpoint = SPARQL::Client.new("http://app.talkiu.com:8030/talkiutest/update")
  query =
    "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
     PREFIX owl: <http://www.w3.org/2002/07/owl#>
     PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
     PREFIX gpc: <http://www.ebusiness-unibw.org/ontologies/pcs2owl/gpc/>
     PREFIX tk: <www.web-experto.com.ar/ontology#>
     INSERT DATA {
       GRAPH <http://app.talkiu.com:8030/talkiutest/data/talkiu> {
         <#{uri}> a owl:Class .
         <#{uri}> rdfs:label '#{label}'@es .
         <#{uri}> rdfs:comment '#{comment}' .
         <#{uri}> rdfs:subClassOf <#{subclass_of}> .
       }
     }"
  resultset = endpoint.query(query)
end
As you can see, I'm using the UPDATE endpoint. Any ideas? Thanks in advance.
Well... Instead of endpoint.query, I tried
resultset = endpoint.update(query)
and it worked. The method returned
<SPARQL::Client:0x2b0158a050e4(http://app.talkiu.com:8030/talkiutest/update)>
and the data is showing up in my database and graph. Hope this helps anyone with the same problem.

Boost-build - dependency on subproject target

I have a jamfile-based project where one of the build steps compiles a custom tool (called 'codegen') which I want to use in a later build step. The codegen tool is built in projects/codegen/Jamfile.jam relative to the root, and the executable target is ultimately declared with the line:
install codegen-tool : $(full-exe-target) : <location>$(install-dir) ;
In Jamroot.jam, I have the following:
rule codegen ( target : source : properties * )
{
    COMMAND on $(target) = projects/codegen//codegen-tool ;
    DEPENDS $(target) : projects/codegen//codegen-tool ;
}

actions codegen bind COMMAND
{
    $(COMMAND) $(<) $(>)
}

project.load projects/codegen//codegen-tool ;

local codegen-input = <blah> ;
local codegen-output = <blah> ;

make $(codegen-output) : $(codegen-input) : @codegen ;
alias codegen-output : $(codegen-output) ;
When I run the command "b2 codegen-output", I get the error:
don't know how to make project projects/codegen//codegen-tool
But running the command "b2 projects/codegen//codegen-tool" is successful. How come I'm not able to reference the codegen-tool target from Jamroot.jam?
The key problem you are having is that the references inside the codegen rule:
rule codegen ( target : source : properties * )
{
    COMMAND on $(target) = projects/codegen//codegen-tool ;
    DEPENDS $(target) : projects/codegen//codegen-tool ;
}
are to the meta-target rather than to a real target (i.e. a file target) generated by building the codegen-tool meta-target. The "easy" way to get such tool dependencies to work is to use a feature on your make target to tell it the full built path to the tool, and the feature to use for that is a "dependency" feature. For example, you would add something like this to your Jamroot:
import feature ;
feature.feature codegen : : dependency free ;
Then set and use that feature to refer to the codegen-tool:
project : requirements <codegen>projects/codegen//codegen-tool ;
There's not enough information in your question to answer with a full example, but you should consult the fully working built_tool example for the details of how the dependency feature works for the use case of custom-built tools.

Working with code outside of the Factor source tree

I'm trying to get started playing with Factor.
So far, I've:
downloaded the OSX disk image
copied the factor directory into $INSTALL/factor
started up the debugger by running $INSTALL/factor/factor
Which seems to be running great.
Following the instructions for writing your first Factor program, I noticed that scaffold-vocab generated files in my $INSTALL/factor/work directory. I can use that for now, but in general I like to keep separate $INSTALL and $CODE directory trees.
So I'm trying to follow the instructions from the "Working with code outside of the Factor directory tree" documentation to add other directories to the path used to load code into the factor executable, but I'm not having much luck.
First, I tried to set a FACTOR_ROOTS environment variable:
% export FACTOR_ROOTS=.:$CODE/Factor:$INSTALL/factor
% $INSTALL/factor/factor
( scratchpad ) "work" resource-path .
"/usr/local/src/factor/work"
( scratchpad ) ^D
Then, I tried to create a ~/.factor-roots file
% echo . > ~/.factor-roots
% echo $CODE/Factor >> ~/.factor-roots
% echo $INSTALL/factor >> ~/.factor-roots
% $INSTALL/factor/factor
( scratchpad ) "work" resource-path .
"/usr/local/src/factor/work"
( scratchpad ) ^D
Then I checked to see if it should be ./.factor-roots instead:
% mv ~/.factor-roots .
% $INSTALL/factor/factor
( scratchpad ) "work" resource-path .
"/usr/local/src/factor/work"
( scratchpad ) ^D
Lastly, I tried adding it manually:
% $INSTALL/factor/factor
( scratchpad ) "." add-vocab-root
( scratchpad ) "$CODE/Factor" add-vocab-root ! no, I didn't actually use an environment variable here :)
( scratchpad ) "work" resource-path .
"/usr/local/src/factor/work"
( scratchpad ) ^D
It seems I'm missing something fundamental here.
How do I write code outside of the $INSTALL/factor directory tree and use it in Factor? How can I tell scaffold-vocab to build scaffolding in my $CODE/Factor directory?
Ok, I was able to work out what I was doing wrong thanks to the earnest help of slava and erg on #concatenative.
Simply put, resource-path is not a way to test your Factor roots. As the docs say, it "resolve[s] a path relative to the Factor source code location."
A more effective test is simply vocab-roots get, which will fetch the current list of vocab roots.
"/path/to/wherever" add-vocab-root will add /path/to/wherever to your list of vocab-roots, and allow you to do "/path/to/wherever" "project" scaffold-vocab so you can build scaffolding in the desired location.
As erg said:
i usually make another word, like
: scaffold-games ( vocab -- ) [ "/home/erg/games" ] dip scaffold-vocab ;
"minesweeper" scaffold-games