Performance issue while listing statements of an inferred model - jena

I am developing an application using Apache Jena to work with RDF triples and OWL ontologies.
The problem
What I am currently trying to do is to get a model from a TDB triplestore, run inference over it, and find certain statements in the inferred model. This is done with the StmtIterator listStatements(Resource s, Property p, RDFNode o) method, followed by a really simple while (iter.hasNext()) loop to iterate over the statements.
At first glance, it seems to work well, but it is really slow.
For my reasonably small model, it takes approximately 5 minutes, while the same operation takes only a few milliseconds on the raw (non-inferred) model.
An example with the Pizza ontology
In this example, I am using the Pizza ontology, available here. The code looks like this:
// Assuming Jena 3.x package names (older releases used com.hp.hpl.jena.*) and log4j for logging.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.log4j.Logger;

public class TestInferedModel {

    private static final String INPUT_FILE_NAME = "path/to/file";
    private static final String URI = "http://www.co-ode.org/ontologies/pizza/pizza.owl#";
    private static final Logger logger = Logger.getLogger(TestInferedModel.class);

    public static void main(String[] args) {
        Model model = ModelFactory.createOntologyModel();
        model.read(INPUT_FILE_NAME);

        Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
        reasoner = reasoner.bindSchema(model);
        Model infmodel = ModelFactory.createInfModel(reasoner, model);

        logger.debug("Model size : " + model.size() + " and inferred model size : " + infmodel.size());
        // prints Model size : 2028 and inferred model size : 4881

        StmtIterator iter = infmodel.listStatements();
        while (iter.hasNext()) { // <----- Performance issue seems to come from this line
            // Operations with the next statement
            Statement stmt = iter.nextStatement();
            logger.info(stmt);
        }

        model.close();
    }
}
Here, the model size is roughly two thousand triples, while the inferred model holds a bit fewer than five thousand. However, running the code above takes much more time than running the same piece of code after changing StmtIterator iter = infmodel.listStatements(); to StmtIterator iter = model.listStatements();. Moreover, when I try to add parameters to restrict the statements, the program seems to run in an infinite loop.
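For reference, a restricted call of the kind mentioned above might look like this (an illustrative sketch, not code from the original question; the Margherita subject is a hypothetical pick from the Pizza ontology):
// Restricted listStatements() call; "Margherita" is chosen purely for
// illustration, URI is the constant defined in the class above.
StmtIterator iter = infmodel.listStatements(
        infmodel.getResource(URI + "Margherita"), // subject
        RDF.type,                                 // predicate (org.apache.jena.vocabulary.RDF)
        (RDFNode) null);                          // any object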
I tried to add a couple of logger.debug() messages to see where the program is spending so much time, and the issue indeed seems to come from the while (iter.hasNext()) line.
The question
I thought the listStatements() method ran in polynomial (or even linear) time, not exponential; is that wrong? Is it normal that it takes so much time on the inferred model, and how can I avoid that?
I'd like to be able to list statements and to manipulate the inferred model without requiring the user to wait ten minutes for an answer...
This issue seems similar to this one. However, the answer there does not really help me understand what is going on.

Related

Time Complexity Difference between Two Parsing Implementations Using Global Variables and Return Values

I'm trying to solve the following problem:
A string containing only lower-case letters can be encoded into NUM[encoded string] format. For example, aaa can be encoded into 3[a]. Given an encoded string, find its original string according to the following grammar.
S -> {E}
E -> NUM[S] | STR # NUM[S] means encoded, while STR means not.
NUM -> 1 | 2 | ... | 9
STR -> {LETTER}
LETTER -> a | b | ... | z
Note: in the above grammar {} represents "concatenate 0 or more times".
For example, given the encoded string 3[a2[c]], the result (original string) is accaccacc.
I think this can be parsed by recursive descent parsing, and there are two ways to implement it:
Method I: Have the parsing method return the result string directly.
Method II: Use a global variable, and each parsing method can just append characters to it.
I'm wondering whether the two methods share the same time complexity. Suppose the result string is of length t. Then for Method II, I think the time complexity should be O(t), because we read and write every character of the result string exactly once. For Method I, however, my intuition was that it could be slower, because the same substring can be copied multiple times, depending on the depth of recursion. But I'm not able to figure out the exact time complexity to justify my intuition. Can anyone give a hint?
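For concreteness, a minimal sketch of Method I as a direct recursive-descent decoder might look like this (an illustration added for clarity, not code from the original post; it assumes single-digit NUMs and omits error handling):
class Decoder {
    private final String input;
    private int pos = 0;

    Decoder(String input) { this.input = input; }

    // S -> {E} : concatenate the results of zero or more E productions
    String parseS() {
        String result = "";
        while (pos < input.length() && input.charAt(pos) != ']') {
            result = result + parseE(); // Method I: repeated copying concatenation
        }
        return result;
    }

    // E -> NUM[S] | LETTER (the letters of a STR are consumed one at a time)
    private String parseE() {
        char c = input.charAt(pos++);
        if (Character.isDigit(c)) {
            pos++;                       // consume '['
            String inner = parseS();
            pos++;                       // consume ']'
            String result = "";
            for (int i = 0; i < c - '0'; i++) {
                result = result + inner; // more copying
            }
            return result;
        }
        return String.valueOf(c);        // a single letter
    }
}
// new Decoder("3[a2[c]]").parseS() yields "accaccacc"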
My first suggestion is that your parser should produce an abstract syntax tree rather than directly interpret the string, no matter whether you choose to write a recursive descent parser, a state-based parser, or use a parser generator. This greatly enhances maintainability and allows you to perform validation, analyses, and transformations much more easily.
Method I
If I understand you correctly, in Method I you have a function for each grammar construct that returns an immutable string, and these results are recursively repeated and concatenated. For example, for the top-level concatenation rule
S ::= E*
you would have an interpretation function that looks like this:
string interpretS(NodeS sNode) {
    string result = "";
    for (int i = 0; i < sNode.Expressions.Length; i++) {
        result = result + interpretE(sNode.Expressions[i]);
    }
    return result;
}
... and similarly for the other rules. It is easy to see that the time complexity of Method I is O(n²), where n is the length of the output. (NB: It makes sense to measure the time complexity in terms of the output rather than the input, since the output length is exponential in the length of the input, and so any interpretation method must have time complexity at least exponential in the input, which is not very interesting.) For example, interpreting the input abcdef requires concatenating a and b, then concatenating the result with c, then concatenating that result with d, etc., resulting in 1+2+3+4+5 steps. (See here for a more detailed discussion of why repeated string concatenation with immutable strings has quadratic complexity.)
Method II
I interpret your description of Method II like this: instead of returning individual strings which have to be combined, you keep a reference to a mutable structure representing a string that supports appending. This could be a data structure like StringBuilder in Java or .NET, or just a dynamic-length list of characters. The important bit is that appending a string of length b to a string of length a can be done in O(b) (rather than O(a+b)).
Note that for this to work, you don't need a global variable! A cleaner solution is to pass the reference to the resulting structure through (this pattern is called an accumulator parameter). So now we would have functions like these:
void interpretS2(NodeS sNode, StringBuilder accumulator) {
    for (int i = 0; i < sNode.Expressions.Length; i++) {
        interpretE2(sNode.Expressions[i], accumulator);
    }
}

void interpretE2(NodeE eNode, StringBuilder accumulator) {
    if (eNode is NodeNum numNode) {
        for (int i = 0; i < numNode.Repetitions; i++) {
            interpretS2(numNode.Expression, accumulator);
        }
    }
    else if (eNode is NodeStr strNode) {
        for (int i = 0; i < strNode.Letters.Length; i++) {
            interpretLetter2(strNode.Letters[i], accumulator);
        }
    }
}

void interpretLetter2(NodeLetter letterNode, StringBuilder accumulator) {
    accumulator.Append(letterNode.Letter);
}
...
As you stated correctly, here the time complexity is O(n), since at each step exactly one character of the output is appended to the accumulator, and no strings are ever copied (only at the very end, when the mutable structure is converted into the output string).
So, at least for this grammar, Method II is clearly preferable.
Update based on comment
Of course, my interpretation of Method I above is exceedingly naive. A more realistic implementation of the interpretS function would internally use a StringBuilder to concatenate the results from the subexpressions, resulting in linear complexity for the example given above, abcdef.
However, this wouldn't change the worst-case complexity of O(n²): consider the example
1[a1[b1[c1[d1[e1[f]]]]]]
Even the less naive version of Method I would first append f to e (1 step), then append ef to d (+ 2 steps), then append def to c (+ 3 steps) and so on, amounting to 1+2+3+4+5 steps in total.
The fundamental reason for the quadratic time complexity of Method I is that the results from the subexpressions are copied to create the new subresult to be returned.
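To make this concrete, such a less naive interpretS might look like the following sketch (my reading of the update, reusing the hypothetical AST types from the snippets above):
string interpretS(NodeS sNode) {
    var builder = new StringBuilder();
    for (int i = 0; i < sNode.Expressions.Length; i++) {
        // Each recursive call still returns a freshly built string, and
        // appending it copies its characters into this builder.
        builder.Append(interpretE(sNode.Expressions[i]));
    }
    return builder.ToString();
}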
Time complexity is estimated by counting the number of elementary operations performed by an algorithm, supposing that each elementary operation takes a fixed amount of time to perform (see here). Of interest, however, is only how fast this number of operations increases as the size of the input data set increases.
In your case, the size of the input data means the length of the string to be parsed.
I assume that by your 1st method you mean that when a NUM is encountered, its argument is processed by the parser completely NUM times. In your example, when "3" is read from the input string, "a2[c]" is processed completely 3 times. Processing here means to traverse the syntax tree down to a leaf and append the leaf's value, here the "c", to the output string.
I also assume that by your 2nd method you mean that when a NUM is encountered, its argument is only evaluated once and all intermediate results are stored and re-used. In your example, when "3" is read from the input string, it is stored; "a" is read from the input string and stored; then "2[c]" is processed, i.e. "2" is read from the input string and stored, and finally "c" is processed. Because of the stored "2", "c" is expanded to "cc", which, combined with the stored "a", gives "acc". Because of the stored "3", this is then expanded to "accaccacc", which is output.
The question now is: what is the elementary operation that is relevant to the time complexity? My feeling is that in the 1st case, the stack operations during traversal of the syntax tree are important, while in the 2nd case, the string copying operations are important.
Strictly speaking, one thus cannot compare the time complexities of the two algorithms.
If you are, however, interested in run times rather than time complexities, my guess is that the stack operations take more time than the string copying, and that the 2nd method is then preferable.

jenetics: Fitness: Assign an object to the determination of fitness & complex fitness result type

I am using an IntegerChromosome, which is an encoding for the actual problem. To determine the fitness, I have to decode from the IntegerChromosome, or more precisely from the IntegerGene values, to the actual base type of the optimization problem. So, e.g., IntegerGene value 0 means an instance of some type with value A, 1 means an instance with value B, and so on; 01233210 would thus translate to ABCDDCBA. Only the latter can I evaluate. I get this information at runtime in a class FitnessInput.
Therefore I need to pass FitnessInput to the fitness determination. Looking at a simple example, I found that the fitness determination, in the eval() method of the example, takes place in a static method. Is there a concept, and a related example, of how to pass runtime objects to the fitness determination rather than overwriting a static variable in the class where fitness() is implemented?
A second question relates to the problem of fitness determination. I found examples where simple data types (Integer, Double) are used for the fitness. While this is of course reasonable, I would like to return an object to the user for the best phenotype which contains all the intermediate results of its fitness determination. I guess this should be possible if my return object implements Comparable. How can I make use of, e.g., the Function interface for that?
You might have a look at the Codec interface. This interface is responsible for converting the objects of your problem domain into a Genotype. If I understand your problem correctly, a possible encoding might look like this:
static final List<String> OBJECTS = List.of("A", "B", "C", "D");
static final int LENGTH = 10;

static final Codec<List<String>, IntegerGene> CODEC = Codec.of(
    // Create the Genotype the GA is working with.
    Genotype.of(IntegerChromosome.of(0, OBJECTS.size() - 1, LENGTH)),
    // Convert the Genotype back to your problem domain.
    gt -> gt.chromosome().stream()
        .map(IntegerGene::allele)
        .map(OBJECTS::get)
        .collect(Collectors.toList())
);

// Calculate the fitness function in terms of your problem domain.
static double fitness(final List<String> objects) {
    return 0;
}

static final Engine<IntegerGene, Double> ENGINE = Engine
    .builder(Main::fitness, CODEC)
    .build();
The Codec creates a Genotype which consists of an IntegerChromosome of length 10 and converts it back to your problem domain.
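As an aside, if runtime state such as the FitnessInput from the question is needed, one possible approach (a sketch under the assumption of the CODEC above, not an official jenetics recipe) is to capture it in a closure instead of a static variable:
static Engine<IntegerGene, Double> engineFor(final FitnessInput input) {
    // The lambda closes over the runtime object, so no static state is needed.
    return Engine
        .builder(objects -> fitness(input, objects), CODEC)
        .build();
}

static double fitness(final FitnessInput input, final List<String> objects) {
    // Evaluate the decoded objects against the runtime input.
    return 0;
}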
I'm not sure if I understood your second question correctly. But if you want to collect the intermediate results as well, you can use the Stream::peek method.
public static void main(final String[] args) {
    final List<Phenotype<IntegerGene, Double>> intermediateResults = new ArrayList<>();

    final var bestPhenotype = ENGINE.stream()
        .limit(Limits.bySteadyFitness(25))
        .peek(er -> intermediateResults.add(er.bestPhenotype()))
        .collect(EvolutionResult.toBestPhenotype());
}

Naive Bayes - no samples for class label 1

I am using accord.net. I have successfully implemented the two decision-tree algorithms ID3 and C4.5, and now I am trying to implement the Naive Bayes algorithm. While there is a lot of sample code on the site, most of it seems to be out of date or to have various issues.
The best sample code I have found on the site so far has been here:
http://accord-framework.net/docs/html/T_Accord_MachineLearning_Bayes_NaiveBayes_1.htm
However, when I try and run that code against my data I get:
There are no samples for class label 1. Please make sure that class
labels are contiguous and there is at least one training sample for
each label.
from line 228 of this file:
https://github.com/accord-net/framework/blob/master/Sources/Accord.MachineLearning/Tools.cs
when I call learner.Learn(inputs, outputs) in my code.
I have already run into the null bugs that Accord has when implementing the other two decision-tree algorithms, and my data has been sanitized against that issue.
Does any accord.net expert have an idea what would trigger this error?
An excerpt from my code:
var codebook = new Codification(fulldata, AllAttributeNames);

/*
 * Get the list of all possible combinations.
 * The software blows up if it encounters a value it has not seen before.
 */
var attributList = new List<IUnivariateFittableDistribution>();
foreach (var attr in DeciAttributeNames)
{
    // By default we'll use a standard static list of values for this column.
    var cntLst = codebook[attr].NumberOfSymbols;
    // No decisions can be made off of the variable if it is a constant value.
    if (cntLst > 1)
    {
        KeptAttributeNames.Add(attr);
        attributList.Add(new GeneralDiscreteDistribution(cntLst));
    }
}

var data = fulldata.Copy(); // this is a DataTable

// Translate our training data into integer symbols using our codebook.
DataTable symbols = codebook.Apply(data, AllAttributeNames);
double[][] inputs = symbols.ToJagged<double>(KeptAttributeNames.ToArray());
int[] outputs = symbols.ToArray<int>(OutAttributeName);

progBar.PerformStep();

/*
 * Create a new instance of the learning algorithm
 * and build the algorithm.
 */
var learner = new NaiveBayesLearning<IUnivariateFittableDistribution>()
{
    // Tell the learner how to initialize the distributions.
    Distribution = (classIndex, variableIndex) => attributList[variableIndex]
};
var alg = learner.Learn(inputs, outputs);
EDIT: After further experimentation, it seems this error only occurs when I am processing a certain number of rows. If I process 60 rows or fewer, I am fine; if I process 500 rows or more, I am also fine. But in between that range, I get this error. Depending on the amount of data I choose, the label index in the error message can change; I have seen it range from 0 to 2.
All the data is coming from the same SQL Server data source; the only thing I am adjusting is the SELECT TOP ### portion of the query.
You will receive this error in multi-class scenarios when you have defined a class label for which there is no sample data. With a small data set, your random sampling may by chance exclude all observations with a given label.
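As a quick sanity check, one could verify label coverage before training; the following sketch reuses the outputs array from the question and assumes using System.Linq; is in scope:
int numClasses = outputs.Max() + 1;
// Labels must be contiguous (0..numClasses-1) with at least one sample each.
var missing = Enumerable.Range(0, numClasses)
    .Where(label => !outputs.Contains(label))
    .ToList();
if (missing.Count > 0)
{
    throw new InvalidOperationException(
        "No training samples for class label(s): " + string.Join(", ", missing));
}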

How to share the same index among multiple dask arrays

I'm trying to build a dask-based IPython application that holds a meta-class consisting of several sub-dask-arrays (all shaped (n_samples, dim_1, dim_2, ...)) and should be able to slice those sub-dask-arrays through its getitem operator.
In the getitem method, I call the da.Array.compute method (the code is still in its very early state), so that I am able to iterate over batches of the sub-arrays.
class MetaClass(object):
    ...
    def __getitem__(self, inds):
        new_m = MetaClass()
        inds = inds.compute()
        for name, var in vars(self).items():
            if isinstance(var, da.Array):
                try:
                    setattr(new_m, name, var[inds])
                except Exception as e:
                    print(e)
            else:
                setattr(new_m, name, var)
        return new_m

# Here I construct the meta-class to work with some directory.
m = MetaClass('/my/data/...')
# m.type is one of the sub-dask-arrays
m2 = m[m.type == 2]
It works as expected and I get the sliced arrays, but it results in huge memory consumption; I assume that in the background dask is copying the index for each sub-dask-array.
My question is: how do I achieve the same results without using so much memory?
(I tried not to compute the inds in getitem, but then I get arrays with NaN shapes, which cannot be iterated, and iteration is a must for the application.)
I have been thinking about three possible solutions, and I'd be happy to be advised which of them is the "right" one for me (or to be offered another solution I haven't thought of):
To use a Dask DataFrame, though I'm not sure how to fit multidimensional dask arrays into it (I would really appreciate some help, or even a link, explaining how to deal with multidimensional arrays in dd).
To forget about the entire MetaClass and use one dask array with a nasty dtype (something like [("type", int, (1,)), ("images", np.uint8, (1000, 1000))]); again, I'm not familiar with this and would really appreciate some help with it (I tried to google it... it's a bit complicated).
To share the index as a global inside the calling function (getitem), using property and its get-function mechanism (https://docs.python.org/2/library/functions.html#property). But the big downside here is that I lose the types of the arrays (a big drawback for representation and everything that needs anything but the data itself).
Thanks in advance!!!
One can use the sub-arrays' map_blocks method with a shared function that holds the boolean index in memory.
Here is an example:
def bool_mask(arr, block_info=None):
    # block_info (supplied by dask) tells us which rows of the full
    # array this block covers, so we can slice the shared mask `inds`.
    from_ind, to_ind = block_info[0]["array-location"][0]
    return arr[inds[from_ind:to_ind]]

def getitem(var):
    original_chunks = var.chunks[0]
    # Start/end row of every chunk along the first axis.
    tmp_inds = np.cumsum([0] + list(original_chunks))
    from_inds = tmp_inds[:-1]
    to_inds = tmp_inds[1:]
    # New chunk sizes: the number of selected rows in each chunk.
    new_chunks_0 = np.array(list(map(lambda f, t: inds[f:t].sum(), from_inds, to_inds)))
    new_chunks = tuple([tuple(new_chunks_0.tolist())] + list(var.chunks[1:]))
    return var.map_blocks(bool_mask, dtype=var.dtype, chunks=new_chunks)
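A minimal usage sketch (names, shapes, and chunking are assumed for illustration; inds is the NumPy boolean mask, computed once and shared by every sub-array):
import numpy as np
import dask.array as da

inds = np.random.rand(1000) > 0.5   # the shared boolean index, computed once
images = da.random.random((1000, 100, 100), chunks=(100, 100, 100))

filtered = getitem(images)          # lazily keeps the rows where inds is True
print(filtered.chunks[0])           # chunk sizes are known, so batches can be iterated
first_batch = filtered.blocks[0].compute()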

Design alternatives to extending object with interface

While working through Expert F# again, I decided to implement the application for manipulating algebraic expressions. This went well, and now I've decided, as a next exercise, to expand on it by building a more advanced application.
My first idea was to have a setup that allows for a more extensible way of creating functions without having to recompile. To that end I have something like:
type IFunction =
    abstract member Name : string with get
    /// additional members omitted
type Expr =
    | Num of decimal
    | Var of string
    /// ... omitting some types here that don't matter
    | FunctionApplication of IFunction * Expr list
So that, say, sin(x) could be represented as:
let sin = { new IFunction with member x.Name = "SIN" }
let sinExpr = FunctionApplication(sin, [Var("x")])
So far all good, but the next idea that I would like to implement is having additional interfaces to represent properties of functions. E.g.
type IDifferentiable =
    abstract member Derivative : int -> IFunction // Get the derivative w.r.t. a variable index
One of the things I'm trying to achieve here is that I can implement some functions and all the logic for them, and then move on to the next part of the logic I would like to implement. However, as it currently stands, that means that with every interface I add, I have to revisit all the IFunctions that I've implemented. Instead, I'd rather have a function:
let makeDifferentiable (f : IFunction) (deriv : int -> IFunction) =
    { f with
        interface IDifferentiable with
            member x.Derivative = deriv }
but as discussed in this question, that is not possible. The alternative that is possible doesn't meet my extensibility requirement. My question is: what alternatives would work well?
[EDIT] I was asked to expand on the "doesn't meet my extensibility requirement" comment. The way this function would work is by doing something like:
let makeDifferentiable (deriv : int -> IFunction) (f : IFunction) =
    { new IFunction with
        member x.Name = f.Name
      interface IDifferentiable with
        member x.Derivative = deriv }
However, ideally I would keep adding additional interfaces to an object over time. So if I now wanted to add an interface that tells whether a function is even:
type IsEven =
    abstract member IsEven : bool with get
then I would like to be able (but not obliged; if I don't make this change, everything should still compile) to change my definition of sine from
let sin = { new IFunction with ... } |> (makeDifferentiable ...)
to
let sin = { new IFunction with ... } |> (makeDifferentiable ...) |> (makeEven false)
The result would be that I could create an object that implements the IFunction interface as well as, potentially but not necessarily, several other interfaces. The operations I then define on these objects would be able to optimize what they are doing based on whether or not a certain function implements an interface. This would also allow me to add additional features/interfaces/operations without having to change the functions I've already defined: they wouldn't take advantage of the additional features, but nothing would be broken either. [/EDIT]
The only thing I can think of right now is to create a dictionary for each feature that I'd like to implement, with function names as keys and the details to build an interface on the fly as values, e.g. along the lines of:
let derivative (f : IFunction) =
    match derivativeDictionary.TryGetValue(f.Name) with
    | false, _ -> None
    | true, d -> Some d.Derivative
This would require me to create one such function per feature, in addition to one dictionary per feature. Especially if implemented asynchronously with agents, this might not be that slow, but it still feels a little clunky.
I think the problem that you're trying to solve here is what is called the Expression Problem. You're essentially trying to write code that would be extensible in two directions. Discriminated unions and the object-oriented model each give you one or the other:
Discriminated unions make it easy to add new operations (just write a function with pattern matching), but it is hard to add a new kind of expression (you have to extend the DU and modify all the code that uses it).
Interfaces make it easy to add new kinds of expressions (just implement the interface), but it is hard to add new operations (you have to modify the interface and change all the code that creates it).
In general, I don't think it is all that useful to try to come up with solutions that let you do both (they end up being terribly complicated), so my advice is to pick the one that you'll need more often.
Going back to your problem, I'd probably represent the function just as a function name together with the parameters:
type Expr =
    | Num of decimal
    | Var of string
    | Application of string * Expr list
Really - an expression is just this. The fact that you can take derivatives is another part of the problem you're solving. Now, to make the derivative extensible, you can just keep a dictionary of the derivatives:
let derivatives =
    dict [ "sin", (fun [arg] -> Application("cos", [arg]))
           ... ]
This way, you have an Expr type that models just what an expression is, and you can write a differentiation function that looks up the derivatives in the dictionary.
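For illustration, a minimal sketch of such a differentiation function might look like this (my own elaboration, not part of the original answer; it only applies the registered outer rule and elides the chain rule and recursion into sub-expressions):
let differentiate (var : string) (expr : Expr) : Expr =
    match expr with
    | Num _ -> Num 0m
    | Var v -> if v = var then Num 1m else Num 0m
    | Application(name, args) ->
        // Look up the rewrite rule registered for this function name.
        match derivatives.TryGetValue(name) with
        | true, rule -> rule args
        | false, _ -> failwithf "No derivative registered for '%s'" name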
