Caliper error: "Use the microbenchmark instrument"

I don't seem to be able to get started here.
I pulled down the code and built Caliper myself, which solved my first set of problems, but now I am getting errors saying that I need the microbenchmark instrument:
This experiment requires a microbenchmark. The granularity of the timer (535ns) is greater than 0.1% of the measured runtime (331.0μs). Use the microbenchmark instrument for accurate measurements.
So, how do I set the microbenchmark instrument?
I'm using @Benchmark before the method I want to run, and my main looks like this:
CaliperMain.main(
    MyClass.class,
    new String[]{ "-i", "runtime" }
);
I tried changing "runtime" to "micro", but that didn't work. Any ideas?

I guess all you need is to use
@Benchmark int time(int reps) {
    // repeat what you want to measure reps times
}
instead of
@Benchmark int time() {
}
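For what it's worth, here is a minimal sketch of what a complete benchmark class could look like with the reps-taking method (the sumArray method, the placeholder workload and the import layout are illustrative assumptions, not taken from the question; it keeps the question's CaliperMain invocation):
import com.google.caliper.Benchmark;
import com.google.caliper.runner.CaliperMain;

public class MyClass {
    private final int[] data = new int[1024];  // placeholder workload

    @Benchmark
    int sumArray(int reps) {
        int dummy = 0;
        // Repeat the measured work reps times so the per-invocation cost
        // dwarfs the granularity of the timer.
        for (int i = 0; i < reps; i++) {
            for (int value : data) {
                dummy += value;
            }
        }
        return dummy;  // returning the result keeps the JIT from eliminating the loop
    }

    public static void main(String[] args) {
        CaliperMain.main(MyClass.class, new String[]{ "-i", "runtime" });
    }
}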

Related

RxJava2: order of sequence with the Completable andThen operator

I am trying to migrate from RxJava1 to RxJava2. I am replacing all the code where I previously had Observable<Void> with Completable. However, I ran into one problem with the order of the stream calls. When I was previously dealing with Observables and using map and flatMap, the code worked 'as expected'. However, the andThen() operator seems to work a little bit differently. Here is some sample code to simplify the problem itself.
public Single<String> getString() {
    Log.d("Starting flow..");
    return getCompletable().andThen(getSingle());
}

public Completable getCompletable() {
    Log.d("calling getCompletable");
    return Completable.create(e -> {
        Log.d("doing actual completable work");
        e.onComplete();
    });
}

public Single<String> getSingle() {
    Log.d("calling getSingle");
    if (conditionBasedOnActualCompletableWork) {
        return getSingleA();
    } else {
        return getSingleB();
    }
}
What I see in the logs in the end is:
1 -> Log.d("Starting flow..");
2 -> Log.d("calling getCompletable");
3 -> Log.d("calling getSingle");
4 -> Log.d("doing actual completable work");
As you can probably figure out, I would expect line 4 to be called before line 3 (after all, the name of the andThen() operator suggests that the code is called 'after' the Completable finishes its job). Previously I was creating the Observable<Void> using the Async.toAsync() operator, and the method which is now called getSingle was inside a flatMap; it worked as I expected, so log 4 would appear before log 3. I have tried changing the way the Completable is created, for example using fromAction or fromCallable, but it behaves the same. I also couldn't find any other operator to replace andThen(). To underline: the method must be a Completable, since it doesn't have anything meaningful to return; it changes the app preferences and other settings (and is used like that globally, mostly working 'as expected'), and those changes are needed later in the stream. I also tried to wrap getSingle() so as to create the Single and move the if statement inside the create block, but I don't know how to use the getSingleA/B() methods inside there, and I need to use them as they have complexity of their own and it doesn't make sense to duplicate the code. Does anyone have any idea how to modify this in RxJava2 so it behaves the same? There are multiple places where I rely on a Completable job finishing before moving forward with the stream (refreshing a session token, updating the DB, preferences, etc.; no problem in RxJava1 using flatMap).
You can use defer:
getCompletable().andThen(Single.defer(() -> getSingle()))
That way, you don't execute the contents of getSingle() immediately, but only when the Completable completes and andThen switches to the Single.
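Applied to the question's code, the whole flow might look roughly like this (a sketch only, keeping the question's method names and its simplified one-argument Log.d calls):
public Single<String> getString() {
    Log.d("Starting flow..");
    return getCompletable()
            // defer postpones the call to getSingle() until the Completable has completed,
            // so "doing actual completable work" is logged before "calling getSingle".
            .andThen(Single.defer(() -> getSingle()));
}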

Side inputs kill Dataflow performance

I'm using dataflow to generate a large amount of data.
I've tested two versions of my pipeline: one with a side input (of varying sizes), and the other without.
When I run the pipeline without the side input, my job will finish in about 7 minutes. When I run my job with the side input, my job will never finish.
Here's what my DoFn looks like:
public class MyDoFn extends DoFn<String, String> {
    final PCollectionView<Map<String, Iterable<TreeMap<Long, Float>>>> pCollectionView;
    final List<CSVRecord> stuff;
    private Aggregator<Integer, Integer> dofnCounter =
        createAggregator("DoFn Counter", new Sum.SumIntegerFn());

    public MyDoFn(PCollectionView<Map<String, Iterable<TreeMap<Long, Float>>>> pcv, List<CSVRecord> m) {
        this.pCollectionView = pcv;
        this.stuff = m;
    }

    @Override
    public void processElement(ProcessContext processContext) throws Exception {
        Map<String, Iterable<TreeMap<Long, Float>>> pdata = processContext.sideInput(pCollectionView);
        processContext.output(AnotherClass.generateData(stuff, pdata));
        dofnCounter.addValue(1);
    }
}
And here's what my pipeline looks like:
final Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());

PCollection<KV<String, TreeMap<Long, Float>>> data;
data = p.apply(TextIO.Read.from("gs://where_the_files_are/*").named("Reading Data"))
        .apply(ParDo.named("Parsing data").of(new DoFn<String, KV<String, TreeMap<Long, Float>>>() {
            @Override
            public void processElement(ProcessContext processContext) throws Exception {
                // Parse some data
                processContext.output(KV.of(key, value));
            }
        }));

final PCollectionView<Map<String, Iterable<TreeMap<Long, Float>>>> pcv =
    data.apply(GroupByKey.<String, TreeMap<Long, Float>>create())
        .apply(View.<String, Iterable<TreeMap<Long, Float>>>asMap());

DoFn<String, String> dofn = new MyDoFn(pcv, localList);

p.apply(TextIO.Read.from("gs://some_text.txt").named("Sizing"))
 .apply(ParDo.named("Generating the Data").withSideInputs(pcv).of(dofn))
 .apply(TextIO.Write.named("Write_out").to(outputFile));

p.run();
We've spent about two days trying various ways of getting this to work, and we've narrowed it down to the inclusion of the side input. If processElement is modified to not actually use the side input, the job is still very slow as long as the side input is attached; if we don't call .withSideInputs() it's very fast again.
Just to clarify, we've tested this with side input sizes from 20 MB to 1.5 GB.
Very grateful for any insight.
Edit
Including a few job IDs:
2016-01-20_14_31_12-1354600113427960103
2016-01-21_08_04_33-1642110636871153093 (latest)
Please try out Dataflow SDK 1.5.0+; it should address the known performance problems behind your issue.
Side inputs in the Dataflow SDK 1.5.0+ use a new distributed format when running batch pipelines. Note that streaming pipelines and pipelines using older versions of the Dataflow SDK are still subject to re-reading the side input if the view can not be cached entirely in memory.
With the new format, we use an index to provide a block based lookup and caching strategy. Thus when looking into a list by index or looking into a map by key, only the block that contains said index or key will be loaded. Having a cache size which is greater than the working set size will aid in performance as frequently accessed indices/keys will not require re-reading the block they are contained in.
Side inputs in the Dataflow SDK can, indeed, introduce slowness if not used carefully. Most often, this happens when each worker has to re-read the entire side input per main input element.
You seem to be using a PCollectionView created via asMap. In this case, the entire side input PCollection must fit into memory of each worker. When needed, Dataflow SDK will copy this data on each worker to create such a map.
That said, the map on each worker may be created just once or multiple times, depending on several factors. If its size is small enough (usually less than 100 MB), it is likely that the map is read only once per worker and reused across elements and across bundles. However, if its size cannot fit into our cache (or something else evicts it from the cache), the entire map may be re-read again and again on each worker. This is, most often, the root-cause of the slowness.
The cache size is controllable via PipelineOptions as follows, but due to several important bugfixes, this should be used in version 1.3.0 and later only.
DataflowWorkerHarnessOptions opts = PipelineOptionsFactory.fromArgs(args).withValidation().create().cloneAs(DataflowWorkerHarnessOptions.class);
opts.setWorkerCacheMb(500);
Pipeline p = Pipeline.create(opts);
For the time being, the fix is to change the structure of the pipeline to avoid excessive re-reading. I cannot offer you specific advice there, as you haven't shared enough information about your use case. (Please post a separate question if needed.)
We are actively working on a related feature we refer to as distributed side inputs. This will allow a lookup into the side input PCollection without constructing the entire map on the worker. It should significantly help performance in this and related cases. We expect to release this very shortly.
I didn't see anything particularly suspicious about the two jobs you have quoted. They've been cancelled relatively quickly.
I'm manually setting the cache size when creating the pipeline in the following manner:
DataflowWorkerHarnessOptions opts = PipelineOptionsFactory.fromArgs(args).withValidation().create().cloneAs(DataflowWorkerHarnessOptions.class);
opts.setWorkerCacheMb(500);
Pipeline p = Pipeline.create(opts);
For a side input of ~25 MB, this speeds up execution time considerably (job ID 2016-01-25_08_42_52-657610385797048159) vs. creating the pipeline in the manner below (job ID 2016-01-25_07_56_35-14864561652521586982):
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
However, when the side input is increased to ~400 MB, no increase in cache size improves performance. Theoretically, is all the memory indicated by the GCE machine type available for use by the worker? What would invalidate or evict something from the worker cache, forcing the re-read?

Caching streams in Functional Reactive Programming

I have an application written entirely using the FRP paradigm, and I think I am having performance issues due to the way I am creating the streams. It is written in Haxe, but the problem is not language-specific.
For example, I have this function, which returns a stream that resolves every time the config file is updated for that specific section:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, promhx, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using the library rather than with the library itself.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance; it takes up over 50% of the CPU time according to profiling.
As a fix, I have done something like the following, using a Map stored in an instance variable to cache the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams beforehand and store them in fields that consumers can access. I don't like this approach because I don't want to make a field for every config section; I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); // 1
        cached(1);
        trace(sideeffects); // 1
        cached(3);
        trace(sideeffects); // 2
        cached(3);
        trace(sideeffects); // 2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return function (input:In)
            return switch m[input] {
                case null: m[input] = f(input);
                case output: output;
            }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
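If it helps to see the same idea outside Haxe, here is a minimal Java sketch of the same memoization pattern (the Memoizer name is just for illustration and is not part of promhx):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

final class Memoizer {
    // Wrap a function so each distinct input is computed once and served from the cache afterwards.
    static <I, O> Function<I, O> memoize(Function<I, O> f) {
        Map<I, O> cache = new ConcurrentHashMap<>();
        return input -> cache.computeIfAbsent(input, f);
    }
}
Wrapping the expensive step, with something like memoize(this::parseYaml), would then make repeated parses of the same input cheap after the first one.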

Vaadin "A connector with id xy is already registered"

Somewhere in my Vaadin application, I'm getting this exception as soon as I connect using a second browser:
Caused by: java.lang.RuntimeException: A connector with id 22 is already registered!
at com.vaadin.ui.ConnectorTracker.registerConnector(ConnectorTracker.java:133)
It always happens in the same place, but I don't know exactly why, as the reason for it must be somewhere else.
I think I might be stealing UI components from the other session - which is not my intention.
Currently, I don't see any static instances of UI components I might be using in multiple sessions.
How can I debug this? It's become quite a large project.
Any hints to look for?
Yes, this usually happens because you are attaching a component that is already attached in another session.
Try logging the failed connector with a temporary ConnectorTracker, so the next time it happens you can catch it.
For example:
public class SomeUI extends UI {
    private ConnectorTracker tracker;

    @Override
    public ConnectorTracker getConnectorTracker() {
        if (this.tracker == null) {
            this.tracker = new ConnectorTracker(this) {
                @Override
                public void registerConnector(ClientConnector connector) {
                    try {
                        super.registerConnector(connector);
                    } catch (RuntimeException e) {
                        getLogger().log(Level.SEVERE, "Failed connector: {0}", connector.getClass().getSimpleName());
                        throw e;
                    }
                }
            };
        }
        return tracker;
    }
}
I think I might be stealing UI components from the other session - which is not my intention. Currently, I don't see any static instances of UI components I might be using in multiple sessions.
That was it. I was actually stealing UI components without realizing it.
It was very well hidden in a part that seemed to be the same for all instances. Which is true: the algorithm is the same.
That doesn't mean I should have reused the same UI components as well...
Thanks to those who took a closer look.
Here is how I fixed it:
1) Look for components you have shared across sessions. For example, if you have declared a component as static, it will be created once and shared across sessions (a short sketch of this anti-pattern follows after these steps).
2) If you cannot find it and want a workaround until you figure out the real problem, put all your addComponent calls in a try block and, in the catch, add the following code:
getUI().getConnectorTracker().markAllConnectorsDirty();
getUI().getConnectorTracker().markAllClientSidesUninitialized();
getPage().reload();
This will clear the old connectors and reload the page, but only when it fails. For me it was failing when I logged out and then logged back in.
Once you find the real problem you can fix it; until then, inform your customers about the reload.
Note: the only real solution is to remove the shared components; this is just a workaround.
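As a hypothetical illustration of point 1), this is the kind of sharing that triggers the error, sketched against the plain Vaadin 7 API (MyUI and the label fields are made-up names, not from the question):
import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.Label;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;

public class MyUI extends UI {
    // Anti-pattern: a static component is created once and ends up attached in every session,
    // which is what produces "A connector with id X is already registered".
    private static final Label sharedHeader = new Label("Welcome");

    // Fix: give each UI (and therefore each session) its own component instance.
    private final Label perSessionHeader = new Label("Welcome");

    @Override
    protected void init(VaadinRequest request) {
        VerticalLayout layout = new VerticalLayout();
        layout.addComponent(perSessionHeader);  // use the per-instance field, not the static one
        setContent(layout);
    }
}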
By running your application in debug mode (add ?debug at the end of the URL in the browser) you will be able to browse to the component, e.g.:
-UIConnector(0)
--VerticalLayoutConnector(1)
---...
---LabelConnector(22)
where 22 is the id from your stack trace. Find this component in your code and make sure that it is not static (yes, I have seen such examples).

Use "is" or "as" in this case?

I have these interfaces:
public interface IBaseInterface
{
    function Method():void;
}

public interface IExtendedInterface extends IBaseInterface
{
    function MethodTwo():void;
}
...and a vector of type "IBaseInterface" I need to iterate through:
var myVector:Vector.<IBaseInterface> = new Vector.<IBaseInterface>();
I need to perform an operation on objects that use IExtendedInterface. Which is the preferred option?
for each(var obj:IBaseInterface in myVector)
{
    // Option 1:
    var tmp:IExtendedInterface = obj as IExtendedInterface;
    if (tmp != null)
        tmp.MethodTwo();

    // Option 2:
    if (obj is IExtendedInterface)
        IExtendedInterface(obj).MethodTwo();
}
I'm sure the info I'm looking for is out there, it's just hard to search for "is" and "as"...Thanks in advance!
I tried a little test to find out which is faster, expecting the "as" variant to be slightly better, because a variable assignment and a check for null seem less complex than a type comparison (both options include a type cast), and I was proven right.
The difference is minimal, though: At 100000 iterations each, option 1 was consistently about 3 milliseconds(!) faster.
As weltraumpirat says, Option 1 is faster.
If you are working on your own code and no one else is ever going to touch it, then go for it.
But if it's a team collaboration and you are not going to run the operation 100,000 times with mission-critical, to-the-millisecond timing, then Option 2 is much easier for someone looking through your code to read.
That is especially important if you are handing your code over to a client for further development.
