jenetics: Set EvolutionStream limit outside of stream()

There are several possibilities in jenetics to set termination limits on an EvolutionStream; see the documentation.
The limits are usually applied directly on the stream, e.g.
Phenotype<IntegerGene,Double> result = engine.stream()
    .limit(Limits.bySteadyFitness(10))
    .collect(EvolutionResult.toBestPhenotype());
or
Phenotype<IntegerGene,Double> result = engine.stream()
    .limit(Limits.byFixedGeneration(10))
    .collect(EvolutionResult.toBestPhenotype());
or in combination, see example:
Phenotype<IntegerGene,Double> result = engine.stream()
    .limit(Limits.bySteadyFitness(10))
    .limit(Limits.byFixedGeneration(10))
    .collect(EvolutionResult.toBestPhenotype());
In my optimization problem, I want to let the user decide which limits to assign to the problem. I do not know the limit setup in advance; there might be multiple limits. Therefore, I have to assign the limit types at runtime.
I tried to create an EvolutionStream object by
EvolutionStream<IntegerGene, Double> evolutionStream = engine.stream();
and assign the limits on the evolutionStream:
Stream<EvolutionResult<IntegerGene, Double>> limit = evolutionStream.limit(Limits.byFixedGeneration(10));
The result is a plain Stream, which does not have the EvolutionStream-specific limit methods, so I cannot apply further limits in case multiple limits are defined. Trying to cast
evolutionStream = (EvolutionStream<IntegerGene, Double>)evolutionStream.limit(Limits.byFixedGeneration(10));
results in an error:
java.lang.ClassCastException: class java.util.stream.SliceOps$1 cannot be cast to class io.jenetics.engine.EvolutionStream (java.util.stream.SliceOps$1 is in module java.base of loader 'bootstrap'; io.jenetics.engine.EvolutionStream is in unnamed module of loader 'app')
So, is there a way to properly apply multiple limits outside the stream builder?

The EvolutionStream.limit(Predicate) method does return an EvolutionStream.
EvolutionStream<IntegerGene, Double> stream = engine.stream();
stream = stream
    .limit(Limits.byFixedGeneration(10))
    .limit(Limits.bySteadyFitness(5))
    .limit(Limits.byExecutionTime(Duration.ofMillis(100)));
So your given examples look good and should work. But EvolutionStream.limit(Predicate) is the only method that gives you back an EvolutionStream.
An alternative would be that your method, which initializes the EvolutionStream, takes the Predicates from outside.
@SafeVarargs
static EvolutionStream<IntegerGene, Double>
newStream(final Predicate<? super EvolutionResult<IntegerGene, Double>>... limits) {
    final Engine<IntegerGene, Double> engine = Engine
        .builder(a -> a.gene().allele().doubleValue(), IntegerChromosome.of(0, 100))
        .build();

    EvolutionStream<IntegerGene, Double> stream = engine.stream();
    for (var limit : limits) {
        stream = stream.limit(limit);
    }
    return stream;
}

final var stream = newStream(
    Limits.byFixedGeneration(100),
    Limits.byExecutionTime(Duration.ofMillis(1000)),
    Limits.bySteadyFitness(10)
);
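The returned stream can then be consumed exactly as in the examples at the top of the question, e.g.:
final Phenotype<IntegerGene, Double> best = stream
    .collect(EvolutionResult.toBestPhenotype());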

Related

How to get the same result using neo4j-driver functions as from the evaluate() function in py2neo?

from py2neo import Graph

def get_nlg(graph_query):
    graph = Graph("neo4j://localhost:7687", auth=("neo4j", "password"))
    graph_response = graph.evaluate(graph_query)
I replaced the above code with the driver code below, but it's not working. What function in the neo4j driver is equivalent to the evaluate() function in py2neo?
from neo4j import GraphDatabase

def get_nlg(graph_query):
    driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        graph_response = session.run(graph_query)
        return graph_response
When the graph_response result from the second snippet is passed to the code below, I am getting an error:
TypeError: <neo4j.work.result.Result object at 0x7f94cf7f31d0> is not JSON serializable
class GetBiggestComponent(Action):

    def name(self):
        return "action_get_biggest_component"

    def run(self, dispatcher, tracker, domain):
        query = None
        intent = tracker.latest_message['intent']
        child_comp = tracker.get_slot('component_type_child')
        parent_comp = tracker.get_slot('component_type_parent')
        error = None
        graph_response = GenerateQuery.get_biggest_component(child_comp, parent_comp)
        graph_response['intent_name'] = intent['name']
        dispatcher.utter_message(json.dumps(graph_response))
        return []
The error occurs when it is passed in the line
dispatcher.utter_message(json.dumps(graph_response))
The output of session.run is an object that lets you navigate the result, not the result itself. I strongly recommend reading the manual and API docs for the driver, as this is all described in there.
https://neo4j.com/docs/driver-manual/current/session-api/simple/#driver-simple-result-consume
https://neo4j.com/docs/api/python-driver/current/api.html#result
As per my answer to your other question, to simulate evaluate, you will simply need to navigate to the first record of the result and then return the first value of that record.
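A minimal sketch of that approach, reusing the connection details from the question and assuming the query returns a single record:
from neo4j import GraphDatabase

def get_nlg(graph_query):
    driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        record = session.run(graph_query).single()   # first (and only) record, or None
        return record.value() if record else None    # first value of that record, like evaluate()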

Saxonica - .NET API - XQuery - XPDY0002: The context item for axis step root/descendant::xxx is absent

I'm getting the same error as this question, but with XQuery:
SaxonApiException: The context item for axis step ./CLIENT is absent
When running from the command line, all is good. So I don't think there is a syntax problem with the XQuery itself. I won't post the input file unless needed.
The XQuery is displayed with a Console.WriteLine before the error appears:
----- Start: XQUERY:
(: FLWOR = For Let Where Order-by Return :)
<MyFlightLegs>
{
for $flightLeg in //FlightLeg
where $flightLeg/DepartureAirport = 'OKC' or $flightLeg/ArrivalAirport = 'OKC'
order by $flightLeg/ArrivalDate[1] descending
return $flightLeg
}
</MyFlightLegs>
----- End : XQUERY:
Error evaluating (<MyFlightLegs {for $flightLeg in root/descendant::FlightLeg[DepartureAirport = "OKC" or ArrivalAirport = "OKC"] ... return $flightLeg}/>) on line 4 column 20
XPDY0002: The context item for axis step root/descendant::FlightLeg is absent
I think that like the other question, maybe my input XML file is not properly specified.
I took the run method of the XQueryToStream class from samples/cs/ExamplesHE.cs.
Code there for easy reference is:
public class XQueryToStream : Example
{
    public override string testName
    {
        get { return "XQueryToStream"; }
    }

    public override void run(Uri samplesDir)
    {
        Processor processor = new Processor();
        XQueryCompiler compiler = processor.NewXQueryCompiler();
        compiler.BaseUri = samplesDir.ToString();
        compiler.DeclareNamespace("saxon", "http://saxon.sf.net/");
        XQueryExecutable exp = compiler.Compile("<saxon:example>{static-base-uri()}</saxon:example>");
        XQueryEvaluator eval = exp.Load();
        Serializer qout = processor.NewSerializer();
        qout.SetOutputProperty(Serializer.METHOD, "xml");
        qout.SetOutputProperty(Serializer.INDENT, "yes");
        qout.SetOutputStream(new FileStream("testoutput.xml", FileMode.Create, FileAccess.Write));
        Console.WriteLine("Output written to testoutput.xml");
        eval.Run(qout);
    }
}
I changed it to pass the XQuery file name, the XML file name, and the output file name, and tried to make a static method out of it. (I had success doing the same with the XSLT processor.)
static void DemoXQuery(string xmlInputFilename, string xqueryInputFilename, string outFilename)
{
    // Create a Processor instance.
    Processor processor = new Processor();

    // Load the source document
    DocumentBuilder loader = processor.NewDocumentBuilder();
    loader.BaseUri = new Uri(xmlInputFilename);
    XdmNode indoc = loader.Build(loader.BaseUri);

    XQueryCompiler compiler = processor.NewXQueryCompiler();
    //BaseUri is inconsistent with Transform= Processor?
    //compiler.BaseUri = new Uri(xqueryInputFilename);
    //compiler.DeclareNamespace("saxon", "http://saxon.sf.net/");

    string xqueryFileContents = File.ReadAllText(xqueryInputFilename);
    Console.WriteLine("----- Start: XQUERY:");
    Console.WriteLine(xqueryFileContents);
    Console.WriteLine("----- End : XQUERY:");

    XQueryExecutable exp = compiler.Compile(xqueryFileContents);
    XQueryEvaluator eval = exp.Load();
    Serializer qout = processor.NewSerializer();
    qout.SetOutputProperty(Serializer.METHOD, "xml");
    qout.SetOutputProperty(Serializer.INDENT, "yes");
    qout.SetOutputStream(new FileStream(outFilename, FileMode.Create, FileAccess.Write));
    eval.Run(qout);
}
Also, two questions regarding "BaseUri":
1. Should it be a directory name, or can it be the same as the XQuery file name?
2. I get this compile error: "Cannot implicitly convert type 'System.Uri' to 'string'" on
compiler.BaseUri = new Uri(xqueryInputFilename);
It's exactly the same thing I did for XSLT, which worked. But it looks like BaseUri is a string for XQuery, but a real Uri object for XSLT? Any reason for the difference?
You seem to be asking a whole series of separate questions, which are hard to disentangle.
Your C# code appears to be compiling the query
<saxon:example>{static-base-uri()}</saxon:example>
which bears no relationship to the XQuery code you supplied that involves MyFlightLegs.
The MyFlightLegs query uses //FlightLeg and is clearly designed to run against a source document containing a FlightLeg element, but your C# code makes no attempt to supply such a document. You need to add an eval.ContextItem = value statement.
Your second C# fragment creates an input document in the line
XdmNode indoc = loader.Build(loader.BaseUri);
but it doesn't supply it to the query evaluator.
A base URI can be either a directory or a file; resolving relative.xml against file:///my/dir/ gives exactly the same result as resolving it against file:///my/dir/query.xq. By convention, though, the static base URI of the query is the URI of the resource (eg file) containing the source query text.
Yes, there's a lot of inconsistency in the use of strings versus URI objects in the API design. (There's also inconsistency about the spelling of BaseURI versus BaseUri.) Sorry about that; you're just going to have to live with it.
Bottom line solution based on Michael Kay's response; I added this line of code after doing the exp.Load():
eval.ContextItem = indoc;
The indoc object created earlier is the parsed XML input document that the XQuery is run against.
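Putting it together, the relevant tail of the DemoXQuery method above becomes (a minimal sketch reusing the names already defined there):
XQueryExecutable exp = compiler.Compile(xqueryFileContents);
XQueryEvaluator eval = exp.Load();
eval.ContextItem = indoc;   // the document built earlier becomes the context item for //FlightLeg
Serializer qout = processor.NewSerializer();
qout.SetOutputProperty(Serializer.METHOD, "xml");
qout.SetOutputProperty(Serializer.INDENT, "yes");
qout.SetOutputStream(new FileStream(outFilename, FileMode.Create, FileAccess.Write));
eval.Run(qout);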

Reactive way of implementing 'standard pagination'

I am just starting with Spring Reactor and want to implement something that I would call 'standard pagination'; I don't know if there is a technical term for this. Basically, no matter what start and end date are passed to the method, I want to return the same amount of data, evenly distributed.
This will be used for some chart drawing in the future.
I figured out a rough version with an algorithm that does exactly that; unfortunately, before I can filter the results I need to either count() or take the last index() and block() to get this number.
This block() is surely not the reactive way to do this, and it also makes the Flux call the DB twice for the data (or am I missing something?).
Is there any operator that can help me get the result of count() somewhere down the stream for further use? It would need to be computed before the stream can be processed anyway, but I would like to avoid calling the DB twice.
I am using the MongoDB reactive driver.
Flux<StandardEntity> results = Flux.from(
        mongoCollectionManager.getCollection(channel)
            .find(and(gte("lastUpdated", begin), lte("lastUpdated", end))))
    .map(d -> new StandardEntity(d.getString("price"), d.getString("lastUpdated")));

Long lastIndex = results
    .count()
    .block();

final double standardPage = 10.0D;
final double step = lastIndex / standardPage;
final double[] counter = {0.0D};

return results
    .take(1)
    .mergeWith(
        results
            .skip(1)
            .filter(e -> {
                if (lastIndex > standardPage)
                    if (counter[0] >= step) {
                        counter[0] = counter[0] - step + 1;
                        return true;
                    } else {
                        counter[0] = counter[0] + 1;
                        return false;
                    }
                else
                    return true;
            }));

Count of the biggest bin in histogram, C#

I want to make a histogram of my data, so I use the Histogram class in C# from MathNet.Numerics.Statistics.
double[] array = { 2, 2, 5,56,78,97,3,3,5,23,34,67,12,45,65 };
Vector<double> data = Vector<double>.Build.DenseOfArray(array);
int binAmount = 3;
Histogram _currentHistogram = new Histogram(data, binAmount);
How can I get the count of the biggest bin? Or just the index of the biggest bin? I tried to get it by using GetBucketOf, but to do this I need an element in that bucket :(
Is there any other way to do this? I've read the documentation and searched Google, and I can't find anything.
(I would use a comment for this, but I just joined today and don't yet have the 50 reputation needed to comment.) I had a look at http://numerics.mathdotnet.com/api/MathNet.Numerics.Statistics/Histogram.htm. That documentation page shows a public property named Item which returns a Bucket, and states that it "Gets the n'th bucket". In C#, a property named Item is how an indexer appears in generated documentation, so you access it as _currentHistogram[n]. Looking at your code, I would iterate the buckets in the histogram with something like this:
var countOfBiggest = -1.0;   // Bucket.Count is a double
var indexOfBiggest = -1;
for (var n = 0; n < _currentHistogram.BucketCount; n++)
{
    if (_currentHistogram[n].Count > countOfBiggest)
    {
        countOfBiggest = _currentHistogram[n].Count;
        indexOfBiggest = n;
    }
}
The code above assumes that Histogram uses 0-based and not 1-based indexing.
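For reference, assuming the indexer and BucketCount members used above are available in your MathNet.Numerics version, a compact LINQ-based sketch over the question's data would be:
using System;
using System.Linq;
using MathNet.Numerics.Statistics;

double[] array = { 2, 2, 5, 56, 78, 97, 3, 3, 5, 23, 34, 67, 12, 45, 65 };
var histogram = new Histogram(array, 3);

// Pair each bucket index with its count, then keep the pair with the largest count.
var biggest = Enumerable.Range(0, histogram.BucketCount)
    .Select(n => new { Index = n, Count = histogram[n].Count })
    .OrderByDescending(b => b.Count)
    .First();

Console.WriteLine($"Bucket {biggest.Index} has the largest count: {biggest.Count}");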

Wrapping a sequence in a Stream in F#

I have a function that accepts a Stream. My data is in a large list, running into millions of items.
Is there a simple way I can wrap a sequence in a Stream, returning chunks of my sequence in the stream? One obvious approach is to implement my own stream class that returns chunks of the sequence. Something like :
type SeqStream(sequence:seq<'a>) =
    inherit Stream()
    override x.Read(buf, offset, count) =
        // get next chunk
        // yield chunk
Is there a simpler way of doing it? I don't have the means to change the target function that accepts a stream though.
I think that your approach looks good. The only problem is that Stream is a relatively complicated class that has quite a few members and you probably don't want to implement most of them - if you want to pass it to some code that uses some of the additional members, you'll need to make the implementation more complex. Anyway, a simple stream that implements only Read can look like this:
type SeqStream<'a>(sequence:seq<'a>, formatter:'a -> byte[]) =
    inherit Stream()
    // Keeps bytes that were read previously, but were not used
    let temp = ResizeArray<_>()
    // Enumerator for reading data from the sequence
    let en = sequence.GetEnumerator()

    override x.Read(buffer, offset, size) =
        // Read next elements and add them to temp until we have enough
        // data or until we reach the end of the sequence
        while temp.Count < size && en.MoveNext() do
            temp.AddRange(formatter(en.Current))
        // Copy data to the output & return the count (which may be less
        // than requested at the end of the sequence)
        let ret = min size temp.Count
        temp.CopyTo(0, buffer, offset, ret)
        temp.RemoveRange(0, ret)
        ret

    override x.Seek(offset, dir) = invalidOp "Seek"
    override x.Flush() = invalidOp "Flush"
    override x.SetLength(l) = invalidOp "SetLength"
    override x.Length = invalidOp "Length"
    override x.Position
        with get() = invalidOp "Position"
        and set(p) = invalidOp "Position"
    override x.Write(buffer, offset, size) = invalidOp "Write"
    override x.CanWrite = false
    override x.CanSeek = false
    override x.CanRead = true
Note that I added an additional parameter: a function to convert a value of the generic type to a byte array. In general, it is difficult to convert anything to bytes (you could use some serialization), so this is probably easier. For example, for integers you can write:
let stream = new SeqStream<_>([ 1 .. 5 ], System.BitConverter.GetBytes)
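The resulting stream can then be passed to any API that reads from a Stream; for example, reading back the first two encoded integers:
let buffer = Array.zeroCreate<byte> 8
let bytesRead = stream.Read(buffer, 0, buffer.Length)   // fills the buffer with the bytes of 1 and 2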
