Split a Single Subscription Stream into 2 Streams in Dart

I have a Stream of messages: some are requests, others are responses.
I would like to create two Streams from my source:
_responseStream = _sourceStream.transform(decoder).where((message) => message.isResponse());
_requestStream = _sourceStream.transform(decoder).where((message) => message.isRequest());
Is this solution optimal (in terms of performance or otherwise)?
Thanks in advance.

The package https://pub.dartlang.org/packages/async contains a StreamSplitter that allows you to do that.
final multiStream = StreamSplitter(_sourceStream.transform(decoder));
_responseStream = multiStream.split().where((message) => message.isResponse());
_requestStream = multiStream.split().where((message) => message.isRequest());
multiStream.close();
or
final streams = StreamSplitter.splitFrom(_sourceStream.transform(decoder), 2); // 2 is the default and can be omitted
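For completeness, here is a minimal sketch of how the two split streams can be wired up and consumed. The element types, the Message class with isRequest()/isResponse(), and the decoder signature are assumptions taken from the question, not part of the async package:

import 'dart:async';

import 'package:async/async.dart';

void wireStreams(Stream<List<int>> sourceStream,
    StreamTransformer<List<int>, Message> decoder) {
  // splitFrom listens to the source once and returns independent copies of it.
  final streams = StreamSplitter.splitFrom(sourceStream.transform(decoder));
  final responseStream = streams[0].where((message) => message.isResponse());
  final requestStream = streams[1].where((message) => message.isRequest());

  responseStream.listen((message) => print('response: $message'));
  requestStream.listen((message) => print('request: $message'));
}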

Related

jenetics: Set EvolutionStream limit outside of stream()

There are several possibilities in jenetics to set termination limits on an EvolutionStream; see the documentation.
The limits are usually applied directly on the stream, e.g.
Phenotype<IntegerGene,Double> result = engine.stream()
    .limit(Limits.bySteadyFitness(10))
    .collect(EvolutionResult.toBestPhenotype());
or
Phenotype<IntegerGene,Double> result = engine.stream()
    .limit(Limits.byFixedGeneration(10))
    .collect(EvolutionResult.toBestPhenotype());
or in combination, see example:
Phenotype<IntegerGene,Double> result = engine.stream()
    .limit(Limits.bySteadyFitness(10))
    .limit(Limits.byFixedGeneration(10))
    .collect(EvolutionResult.toBestPhenotype());
In my optimization problem, I want to let the user decide which limits to apply to the problem. I do not know the limit setup in advance; it might involve multiple limits. Therefore, I have to assign the limit types at runtime.
I tried to create an EvolutionStream object with
EvolutionStream<IntegerGene, Double> evolutionStream = engine.stream();
and assign the limits on the evolutionStream:
Stream<EvolutionResult<IntegerGene, Double>> limit = evolutionStream.limit(Limits.byFixedGeneration(10));
The result is a plain Stream, which does not know the EvolutionStream-specific limit methods. Thus, I cannot chain further limits when multiple limits are defined. Trying to cast
evolutionStream = (EvolutionStream<IntegerGene, Double>)evolutionStream.limit(Limits.byFixedGeneration(10));
results in an error:
java.lang.ClassCastException: class java.util.stream.SliceOps$1 cannot be cast to class io.jenetics.engine.EvolutionStream (java.util.stream.SliceOps$1 is in module java.base of loader 'bootstrap'; io.jenetics.engine.EvolutionStream is in unnamed module of loader 'app')
So, is there a way to properly apply multiple limits outside the stream builder?
The EvolutionStream.limit(Predicate) method does return an EvolutionStream.
EvolutionStream<IntegerGene, Double> stream = engine.stream();
stream = stream
    .limit(Limits.byFixedGeneration(10))
    .limit(Limits.bySteadyFitness(5))
    .limit(Limits.byExecutionTime(Duration.ofMillis(100)));
So your examples look good and should work. But EvolutionStream.limit(Predicate) is the only method that gives you back an EvolutionStream.
An alternative would be for the method that initializes the EvolutionStream to take the Predicates from outside.
@SafeVarargs
static EvolutionStream<IntegerGene, Double>
newStream(final Predicate<? super EvolutionResult<IntegerGene, Double>>... limits) {
    final Engine<IntegerGene, Double> engine = Engine
        .builder(a -> a.gene().allele().doubleValue(), IntegerChromosome.of(0, 100))
        .build();
    EvolutionStream<IntegerGene, Double> stream = engine.stream();
    for (var limit : limits) {
        stream = stream.limit(limit);
    }
    return stream;
}
final var stream = newStream(
    Limits.byFixedGeneration(100),
    Limits.byExecutionTime(Duration.ofMillis(1000)),
    Limits.bySteadyFitness(10)
);
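For completeness, the stream built this way is then collected exactly as in the documentation examples above; no API beyond what is already shown is assumed:

final Phenotype<IntegerGene, Double> best = stream
    .collect(EvolutionResult.toBestPhenotype());
System.out.println(best);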

How to clean/clear the cache of a ReplaySubject instance in RxJS?

I have a ReplaySubject and I would like to reset it so that it has no cached values; after the reset, the next subscriber should not receive the history.
Example:
new ReplaySubject
subject.next(1)
reset subject <- this question
subject.subscribe() // should NOT receive 1
How can I do this? I need the subject to be the same instance.
You may look at using a combination of special values and the filter operator to get something close to what you are trying to achieve.
Let's take a simple case.
You want to replay just the last value, and null is the special value representing the reset. The code would be:
import { ReplaySubject } from 'rxjs';
import { filter } from 'rxjs/operators';

const rs = new ReplaySubject<any>(1); // replay only the last value
const rsObs = rs.asObservable().pipe(filter(d => d !== null)); // hide the "reset" marker
rs.next(1);
rs.next(2);
setTimeout(() => {
  console.log('first subscription');
  rsObs.subscribe(console.log); // logs 2 on the console
}, 10);
setTimeout(() => {
  rs.next(null); // the "reset": new subscribers will only replay null, which the filter removes
}, 20);
setTimeout(() => {
  console.log('second subscription');
  rsObs.subscribe(console.log); // nothing is logged
}, 30);
The best way that comes to my mind, given your requirement that "I need the subject to be the same instance", would be to have the following observables:
// This is your input
const source$: Observable<T>;
// proxy$ buffers the timestamped items (note the element type)
const proxy$ = new ReplaySubject<Timestamp<T>>(n);
const reset$ = new BehaviorSubject<number>(0);
Now it's important that we hook up the following before you emit on source$:
source$.pipe(timestamp()).subscribe(proxy$);
Then, finally, you can expose your data like this:
const data$ = proxy$.pipe(
withLatestFrom(reset$),
filter(([timedItem, resetTimestamp]) => timedItem.timestamp > resetTimestamp),
map(([timedItem]) => timedItem.value),
);
You can now use reset$.next(+new Date()) to trigger the reset.
If you can make sure to provide timestamped values to source$, you can skip the proxy$.
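Putting the pieces together, a minimal runnable sketch of this timestamp/reset approach could look like the following; the concrete number type, the buffer size of 3, and using a Subject as source$ are illustration choices, not requirements:

import { Subject, ReplaySubject, BehaviorSubject } from 'rxjs';
import { timestamp, withLatestFrom, filter, map } from 'rxjs/operators';

const source$ = new Subject<number>();                                      // your input
const proxy$ = new ReplaySubject<{ value: number; timestamp: number }>(3);  // same instance for the whole lifetime
const reset$ = new BehaviorSubject<number>(0);

// Hook this up before anything is emitted on source$.
source$.pipe(timestamp()).subscribe(proxy$);

const data$ = proxy$.pipe(
  withLatestFrom(reset$),
  filter(([timedItem, resetTimestamp]) => timedItem.timestamp > resetTimestamp),
  map(([timedItem]) => timedItem.value),
);

source$.next(1);
source$.next(2);
data$.subscribe(v => console.log('before reset:', v)); // logs 1 and 2

reset$.next(Date.now()); // the reset
data$.subscribe(v => console.log('after reset:', v));  // logs nothing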

Making a variable number of parallel HTTP requests with Gatling?

I am trying to model a server-to-server REST API interaction in Gatling 2.2.0. There are several interactions of the type "request a list and then request all items on the list in parallel", but I can't seem to model this in Gatling. Code so far:
def groupBy(dimensions: Seq[String], metric: String) = {
  http("group by")
    .post(endpoint)
    .body(...).asJSON
    .check(
      ...
      .saveAs("events")
    )
}

scenario("Dashboard scenario")
  .exec(groupBy(dimensions, metric)
    .resources(
      // a http() for each item in session("events"), plz
    )
  )
I have gotten as far as figuring out that parallel requests are performed by .resources(), but I don't understand how to generate a list of requests to feed it. Any input is appreciated.
The approach below works for me. A Seq of HttpRequestBuilder will be executed concurrently:
val numberOfParallelReq = 1000

val scn = scenario("Some scenario")
  .exec(
    http("first request")
      .post(url)
      .resources(parallelRequests: _*)
      .body(StringBody(firstReqBody))
      .check(status.is(200))
  )

def parallelRequests: Seq[HttpRequestBuilder] =
  (0 until numberOfParallelReq).map(i => generatePageRequest(i))

def generatePageRequest(id: Int): HttpRequestBuilder = {
  val body = "Your request body here...."
  http("page")
    .post(url)
    .body(StringBody(body))
    .check(status.is(200))
}
I am not entirely sure of your question, but it seems you need to send parallel requests, which can be done with:
setUp(scenario.inject(atOnceUsers(NO_OF_USERS)));
Refer to this: http://gatling.io/docs/2.0.0-RC2/general/simulation_setup.html
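For reference, here is a minimal Simulation skeleton that wires a scenario into setUp as suggested above; the base URL, the request path, and the user count are placeholders, not values from the question (Gatling 2.x API):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ParallelRequestsSimulation extends Simulation {

  // Placeholder protocol configuration.
  val httpProtocol = http.baseURL("http://localhost:8080")

  // Placeholder scenario; in practice this would be the scenario built above.
  val scn = scenario("Some scenario")
    .exec(
      http("first request")
        .get("/list")
        .check(status.is(200))
    )

  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
}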

How can one dynamically parse a CSV file using C# and the Smart Format Detector in FileHelpers 3.1?

As per this FileHelpers 3.1 example, you can automatically detect a CSV file format using the FileHelpers.Detection.SmartFormatDetector class.
But the example goes no further. How do you use this information to dynamically parse a CSV file? It must have something to do with the DelimitedFileEngine but I cannot see how.
Update:
I figured out a possible way but had to resort to using reflection (which does not feel right). Is there another/better way? Maybe using System.Dynamic? Anyway, here is the code I have so far; it isn't pretty, but it works:
// follows on from smart detector example
FileHelpers.Detection.RecordFormatInfo lDetectedFormat = formats[0];
Type lDetectedClass = lDetectedFormat.ClassBuilderAsDelimited.CreateRecordClass();
List<FieldInfo> lFieldInfoList = new List<FieldInfo>(lDetectedFormat.ClassBuilderAsDelimited.FieldCount);
foreach (FileHelpers.Dynamic.DelimitedFieldBuilder lField in lDetectedFormat.ClassBuilderAsDelimited.Fields)
lFieldInfoList.Add(lDetectedClass.GetField(lField.FieldName));
FileHelperAsyncEngine lFileEngine = new FileHelperAsyncEngine(lDetectedClass);
int lRecNo = 0;
lFileEngine.BeginReadFile(cReadingsFile);
try
{
    while (true)
    {
        object lRec = lFileEngine.ReadNext();
        if (lRec == null)
            break;
        Trace.WriteLine("Record " + lRecNo);
        lFieldInfoList.ForEach(f => Trace.WriteLine("  " + f.Name + " = " + f.GetValue(lRec)));
        lRecNo++;
    }
}
finally
{
    lFileEngine.Close();
}
As I use the SmartFormatDetector to determine the exact format of the incoming delimited files, you can use the following approach:
private DelimitedClassBuilder GetFormat(string file)
{
var detector = new FileHelpers.Detection.SmartFormatDetector();
var format = detector.DetectFileFormat(file);
return format.First().ClassBuilderAsDelimited;
}
private List<T> ConvertFile2Objects<T>(string file, out DelimitedFileEngine engine)
{
    var format = GetFormat(file); // get your FormatInfo here
    engine = new DelimitedFileEngine(typeof(T)); // define your DelimitedFileEngine
    // set some properties of the engine with what you need
    engine.ErrorMode = ErrorMode.SaveAndContinue; // optional
    engine.Options.Delimiter = format.Delimiter;
    engine.Options.IgnoreFirstLines = format.IgnoreFirstLines;
    engine.Options.IgnoreLastLines = format.IgnoreLastLines;
    // process
    var ret = engine.ReadFileAsList(file);
    this.errorCount = engine.ErrorManager.ErrorCount;
    var err = engine.ErrorManager.Errors;
    engine.ErrorManager.SaveErrors("errors.out");
    // return records; do here what you need
    return ret.Cast<T>().ToList();
}
This is an approach I use in a project, where I only know that I have to process Delimited files of multiple types.
Attention:
I noticed that with the files I received, the SmartFormatDetector has a problem with the tab delimiter. Maybe this should be considered.
Disclaimer: This code is not perfect but in a usable state. Modification and/or refactoring is advised.

PC-Stable from pcalg

I am using PC-Stable from the package 'pcalg' version 2.0-10 to learn the structure. As I understand it, this algorithm should not be affected by the order of the input data because it is order-independent. However, when I run it with different orderings, I get different graphs. Can anyone help me with this issue? This is my code:
library(pracma)
library(pcalg)  # for pc() and disCItest
randindexMatriax <- matrix(0,10,ncol(TrainData))
numberUnique_val_col = vector()
pdf("Graph for Test PC Stable with random order.pdf")
par(mfrow=c(2,1))
for (i in 1:10)
{
  randindex <- randperm(1:ncol(TrainData))
  randindexMatriax[i,] <- randindex
  TrainDataRandOrder <- data[,randindex]
  V <- colnames(TrainDataRandOrder)
  UD <- data.frame(TrainDataRandOrder)
  numberUnique_val_col <- sapply(UD, function(x) length(unique(x)))
  suffStat <- list(dm = TrainDataRandOrder,
                   nlev = numberUnique_val_col[1:20],
                   adaptDF = FALSE)
  pc.fit <- pc(suffStat, indepTest = disCItest, alpha = 0.01, labels = V,
               fixedGaps = NULL, fixedEdges = NULL, NAdelete = TRUE, m.max = Inf,
               skel.method = "stable", conservative = TRUE, solve.confl = TRUE,
               verbose = TRUE)
  plot(pc.fit)  # plot each fitted graph into the PDF opened above
}
dev.off()
The "Stable" part of PC-Stable only affects the Skeleton phase of the algorithm. The Orientation phase is still order-dependent. Do the two graphs have identical "skeletons"? That is, if you convert all directed edges into undirected edges, are the two graphs identical?
If not, you may have uncovered a bug in pcalg! Please post a sample dataset and two orderings of the columns that produce graphs with different skeletons.
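As a rough sketch of that skeleton check, assuming two fits pc.fit1 and pc.fit2 obtained from differently ordered columns (this uses the graph package that pcalg already depends on; the object names are placeholders):

library(graph)  # for ugraph()

# Drop edge directions and compare the skeletons' adjacency matrices,
# reordering rows/columns so the two variable orderings line up.
skel1 <- as(ugraph(pc.fit1@graph), "matrix")
skel2 <- as(ugraph(pc.fit2@graph), "matrix")
nodes <- sort(rownames(skel1))
identical(skel1[nodes, nodes], skel2[nodes, nodes])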
