Catch all the inferences generated by the Pellet reasoner - ontology

I have a problem when executing the reasoner in my application: I can't capture the inferences it generates.
The reasoning itself runs normally, and printClassTree shows me that inferences were found, but the resulting OWLOntology doesn't contain them.
The current code:
com.clarkparsia.pellet.owlapiv3.PelletReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ontology);
reasoner.getKB().realize();
reasoner.getKB().printClassTree();
What should I change? How can I capture the axioms produced by the reasoner's inferences?

The following code works fine, and afterwards inferredOntology contains the base ontology as well as the inferred results.
Please note that this code was tested with Pellet 2.1/2.2; I'm not sure about the latest version of Pellet.
// Assumes `manager` is the OWLOntologyManager used to load the base
// ontology, and `inferredOntology` is that already-loaded ontology.
// Create the reasoner.
OWLReasonerFactory reasonerFactory = new PelletReasonerFactory();
OWLReasoner reasoner = reasonerFactory.createReasoner(manager);
// Load the ontology and its imports closure into the reasoner.
Set<OWLOntology> importsClosure = manager.getImportsClosure(inferredOntology);
reasoner.loadOntologies(importsClosure);
// Reason!
reasoner.classify();
// Copy the asserted plus inferred axioms into inferredOntology.
InferredOntologyGenerator iog = new InferredOntologyGenerator(reasoner);
iog.fillOntology(manager, inferredOntology);

I'm using this import
import com.clarkparsia.pellet.owlapiv3.PelletReasoner;
with Pellet 2.3.0.
I'm declaring the reasoner like this:
PelletReasoner razonador;
and initializing it with the ontology like this:
razonador = com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory.getInstance().createReasoner(ont);
where ont is the ontology, and I'm just using this to classify:
razonador.getKB().classify();
Hope it helps!
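A note on the Pellet 2.3.0 approach above: classify() only populates the reasoner's internal knowledge base; to actually capture the inferred axioms as an OWLOntology you still need an InferredOntologyGenerator. A minimal sketch, assuming OWL API 3.x (the file name and the choice of axiom generators are illustrative):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.util.*;

import com.clarkparsia.pellet.owlapiv3.PelletReasoner;
import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

public class CaptureInferences {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // "ontology.owl" is an illustrative file name.
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("ontology.owl"));

        PelletReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ontology);
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);

        // Pick the kinds of inferred axioms to collect.
        List<InferredAxiomGenerator<? extends OWLAxiom>> generators =
                new ArrayList<InferredAxiomGenerator<? extends OWLAxiom>>();
        generators.add(new InferredSubClassAxiomGenerator());
        generators.add(new InferredClassAssertionAxiomGenerator());

        // Write the inferences into a fresh ontology so they can be inspected separately.
        OWLOntology inferred = manager.createOntology();
        new InferredOntologyGenerator(reasoner, generators).fillOntology(manager, inferred);

        for (OWLAxiom axiom : inferred.getAxioms()) {
            System.out.println(axiom);
        }
    }
}

Filling a fresh ontology keeps the inferences separate from the asserted axioms; pass the base ontology to fillOntology instead if you want them merged.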

Where is MobyLCPSolver?

ImportError: cannot import name 'MobyLCPSolver' from 'pydrake.all' (/home/docker/drake/drake-build/install/lib/python3.8/site-packages/pydrake/all.py)
I have the latest Drake and cannot import it.
Can anyone help?
As of pydrake v1.12.0, the MobyLcp C++ API is not bound in Python.
However, if you feed an LCP into Solve() then Drake can choose Moby to solve it. You can take advantage of this to create an instance of MobyLCP:
import numpy as np
from pydrake.all import (
ChooseBestSolver,
MakeSolver,
MathematicalProgram,
)
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddLinearComplementarityConstraint(np.eye(2), np.array([1, 2]), x)
moby_id = ChooseBestSolver(prog)
moby = MakeSolver(moby_id)
print(moby.SolverName())
# The output is: "Moby LCP".
# The C++ type of the `moby` object is drake::solvers::MobyLCP.
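As a quick sanity check, you can feed the program to the generic Solve() as described above (a hedged sketch continuing the variables from the snippet; for this M and q the trivial solution is expected):

from pydrake.all import Solve

# Drake picks Moby for this linear complementarity program.
result = Solve(prog)
print(result.is_success())    # True
print(result.GetSolution(x))  # [0. 0.] -- with q >= 0, x = 0 already satisfies the LCP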
That only allows for calling Moby via the MathematicalProgram interface, however. To call any MobyLCP-specific C++ functions like SolveLcpFastRegularized, those would need to be added to the bindings code specifically before they could be used.
You can file a feature request on the Drake GitHub page when you need access to C++ classes or functions that aren't bound into Python yet, or even better you can open a pull request with the bindings that you need.

FastAI Question on data loading using TextList

My end goal is to implement ULMFiT using FastAI to predict disaster tweets (as part of this Kaggle competition). What I'm trying to do is read the tweets from a DataFrame. But for reasons unknown to me, I'm stuck at the data-loading stage. I'm simply unable to do so using the method below:
from fastai.text.all import *
train= pd.read_csv('../input/nlp-getting-started/train.csv')
dls_lm = (TextList.from_df(path,train,cols='text',is_lm=True)
.split_by_rand_pct(0.1)
#.label_for_lm()
.databunch(bs=64))
This line throws - NameError: name 'TextList' is not defined.
I'm able to work around this problem with the below code -
dls_lm = DataBlock(
blocks=TextBlock.from_df('text', is_lm=True),
get_x=ColReader('text'),
splitter=RandomSplitter(0.1)
# using only 10% of entire comments data for validation inorder to learn more
)
dls_lm = dls_lm.dataloaders(train, bs=64, seq_len=72)
Why does this work and not the previous method?
Notebook Link for reference.
Which version of fastai are you running?
import fastai
print(fastai.__version__)
The TextList class is from fastai v1, but it seems to me your import path is for fastai v2. In v2, TextList was replaced by TextBlock (https://docs.fast.ai/text.data.html#TextBlock), which is why the DataBlock version works (and that's the recommended way to handle this).
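For completeness, a hedged sketch of carrying the working v2 pipeline into the first ULMFiT stage (the learner setup and hyperparameters are illustrative, not from the question):

from fastai.text.all import *
import pandas as pd

train = pd.read_csv('../input/nlp-getting-started/train.csv')

# Language-model dataloaders, as in the working snippet above.
dls_lm = DataBlock(
    blocks=TextBlock.from_df('text', is_lm=True),
    get_x=ColReader('text'),
    splitter=RandomSplitter(0.1),
).dataloaders(train, bs=64, seq_len=72)

# ULMFiT step 1: fine-tune a pretrained AWD-LSTM language model on the tweets.
learn = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)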

PyDrake ComputePointPairPenetration() kills kernel

Calling ComputePointPairPenetration() on a QueryObject in Drake from Python in a Jupyter Notebook environment reliably kills the kernel. I'm not sure what's causing it and couldn't figure out how to get any error message.
In case it's relevant, I'm running pydrake locally on a Mac.
Here is relevant code:
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.00001)
file_name = FindResource("models/robot.urdf")
model = Parser(plant).AddModelFromFile(file_name)
file_name = FindResource("models/object.urdf")
object_model = Parser(plant).AddModelFromFile(file_name)
plant.Finalize()
diagram = builder.Build()
# Run simulation...
# Get geometry info from scene graph
context = scene_graph.AllocateContext()
q_obj = scene_graph.get_query_output_port().Eval(context)
q_obj.ComputePointPairPenetration()
Edit:
@Sherm's comment fixed my problem :) Thank you so much!
For reference:
diagram_context = diagram.CreateDefaultContext()
scene_graph_context = scene_graph.GetMyContextFromRoot(diagram_context)
q_obj = scene_graph.get_query_output_port().Eval(scene_graph_context)
q_obj.ComputePointPairPenetration()
You created a local Context for scene_graph. Instead you want the full diagram context so that the ports are connected up properly (e.g. scene_graph has an input port that receives poses from MultibodyPlant). So the above should work if you ask the Diagram to create a Context, then ask for the SceneGraph subcontext for the calls you have above, rather than creating a standalone SceneGraph context.
This lets you extract the fully-connected subcontext:
scene_graph_context = scene_graph.GetMyContextFromRoot(diagram_context)
FTR Here's a similar formulation in a Drake Python unittest:
TestPlant.test_scene_graph_queries
Note that this takes an alternate route (using diagram.GetMutableSubsystemContext instead of scene_graph.GetMyContextFromRoot), namely because it's doing scalar-type conversion as well.
If you're curious about scalar-type conversion (esp. if you're going to be doing optimization, e.g. needing AutoDiffXd), please see:
Drake C++ API: System Scalar Conversion Overview
Drake Python API: pydrake.systems.scalar_conversion
Additionally, here are examples of scalar-converting both MultibodyPlant and SceneGraph for testing InverseKinematics constraint classes:
inverse_kinematics.py: TestConstraints
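As a minimal illustration from Python, a System (or a whole Diagram) can be scalar-converted with ToAutoDiffXd, assuming every subsystem in it supports that scalar type (a sketch reusing the diagram built above):

# Convert the diagram to the AutoDiffXd scalar type, e.g. for gradients.
diagram_ad = diagram.ToAutoDiffXd()
context_ad = diagram_ad.CreateDefaultContext()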

nashorn CompiledScript graalvm equivalent

I have a fairly big file that often needs to be evaluated.
With Nashorn I used to do something like this:
CompiledScript compiledScript = ((Compilable) engine).compile(text);
and later on, I could call the following many times:
Context context = new SimpleScriptContext();
compiledScript.eval(context);
This was quite fast.
Using the new Polyglot API, I do:
Source source = Source.newBuilder("js", myFile).build();
then:
Context context = Context.newBuilder("js").option("js.nashorn-compat", "true").build();
context.eval(source)
Using JMH, I see a big performance difference between the two:
Benchmark                     Mode  Cnt   Score    Error  Units
JmhBenchmark.testEvalGraal    avgt    5  42,855 ± 11,118  ms/op
JmhBenchmark.testEvalNashorn  avgt    5   2,739 ±  1,101  ms/op
If I do the eval on the same Context, it works properly, but I don't want to share a context between two consecutive evals (unless Graal's concept of a Context is not the same as Nashorn's).
To reproduce your ScriptEngine setup with GraalVM, you should re-use the same Engine (org.graalvm.polyglot.Engine) and .close() the context after use:
Source source = Source.newBuilder("js", myFile).build();
Engine engine = Engine.create();
and later:
Context context = Context.newBuilder("js")
.engine(engine)
.option("js.nashorn-compat", "true").build();
context.eval(source);
context.close();
Quoting the Context.Builder.engine documentation:
Explicitly sets the underlying engine to use. By default, every context has its own isolated engine. If multiple contexts are created from one engine, then they may share/cache certain system resources like ASTs or optimized code by specifying a single underlying engine.
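Putting it together, a hedged end-to-end sketch of the shared-engine pattern (the file name is illustrative; allowExperimentalOptions may be required for nashorn-compat on newer GraalVM releases):

import java.io.File;

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;
import org.graalvm.polyglot.Source;

public class SharedEngineExample {
    public static void main(String[] args) throws Exception {
        // One shared engine lets contexts reuse parsed ASTs and optimized code.
        Engine engine = Engine.create();
        Source source = Source.newBuilder("js", new File("myFile.js")).build();

        for (int i = 0; i < 100; i++) {
            // Each iteration gets a fresh, isolated context,
            // comparable to a new SimpleScriptContext under Nashorn.
            try (Context context = Context.newBuilder("js")
                    .engine(engine)
                    .allowExperimentalOptions(true)
                    .option("js.nashorn-compat", "true")
                    .build()) {
                context.eval(source);
            }
        }
        engine.close();
    }
}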

What is the replacement for newEmbeddedDatabaseBuilder function?

I would like to work with the Neo4j packages for Java.
I see that the function newEmbeddedDatabaseBuilder is deprecated.
What is the best way to work with Neo4j from Java code now?
Thanks
In Neo4j 3.0, you'll use GraphDatabaseFactory:
graphDb = new GraphDatabaseFactory().newEmbeddedDatabase( DB_PATH );
The Neo4j Java manual is available here: http://neo4j.com/docs/java-reference/current/#tutorials-java-embedded
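For context, a hedged end-to-end sketch of the Neo4j 3.x embedded API (the database path is illustrative):

import java.io.File;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class EmbeddedNeo4j {
    private static final File DB_PATH = new File("target/neo4j-hello-db");

    public static void main(String[] args) {
        GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);

        // All graph operations must happen inside a transaction.
        try (Transaction tx = graphDb.beginTx()) {
            graphDb.createNode();
            tx.success();  // mark the transaction as successful before it closes
        }

        graphDb.shutdown();  // embedded databases must be shut down explicitly
    }
}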