I have developed an application using priorities, which I have verified with a set of input events using http://esper-epl-tryout.appspot.com/epltryout/mainform.html.
Running it in a Java environment with Esper 5.3, always with the same input data, I see two scenarios:
If I configure priorities using config.getEngineDefaults().getExecution().setPrioritized(true) and I subscribe to statement xxx, then every time I run the application it produces a different number of subscriber activations for statement xxx.
If I don't configure priorities, every time I run the application it produces the same number of subscriber activations.
In the first case, with priorities enabled, what could be the reason behind this non-deterministic behavior?
Any help would be much appreciated.
I'm trying to log data points for the same metric across multiple runs (wandb.init is called repeatedly between data points) and I'm unsure how to avoid the behavior seen in the attached screenshot...
Instead of getting a line chart with multiple points, I'm getting a single data point with associated statistics. In the attached example, the 1st data point was generated at step 1,470 and the 2nd at step 2,940...rather than seeing two points, I'm instead getting a single point that's the average and appears at step 2,205.
My hunch is that using the resume run feature may address my problem, but even testing out this hunch is proving to be cumbersome given the constraints of the system I'm working with...
Before I invest more time in my hypothesized solution, could someone confirm that the behavior I'm seeing is, indeed, the result of logging data to the same metric across separate runs without using the resume feature?
If this is the case, can you confirm or deny my conception of how to use resume?
Initial run:
1. run = wandb.init()
2. wandb_id = run.id
3. cache wandb_id for successive runs

Successive run:
1. retrieve wandb_id from cache
2. wandb.init(id=wandb_id, resume="must")

Is it also acceptable / preferable to replace 1. and 2. of the initial run with:
1. wandb_id = wandb.util.generate_id()
2. wandb.init(id=wandb_id)
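For concreteness, a minimal sketch of the flow described above, assuming a local text file as the cache (the file name and the logged metric are illustrative):

import os
import wandb

WANDB_ID_CACHE = "wandb_run_id.txt"  # illustrative cache location

if not os.path.exists(WANDB_ID_CACHE):
    # Initial run: create the run and cache its id for later invocations.
    run = wandb.init()
    with open(WANDB_ID_CACHE, "w") as f:
        f.write(run.id)
else:
    # Successive run: resume the cached run instead of starting a new one.
    with open(WANDB_ID_CACHE) as f:
        wandb_id = f.read().strip()
    run = wandb.init(id=wandb_id, resume="must")

run.log({"some_metric": 1.0})  # illustrative metric
wandb.finish()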
It looks like you're grouping runs, so that could be why it's appearing as an average across steps - this might not be the case, but it's worth trying. Turn off grouping by clicking the button in the centre above your runs table on the left - it's highlighted in purple in the image below.
Both of the ways you’re suggesting resuming runs seem fine.
"My hunch is that using the resume run feature may address my problem"
Indeed, providing a cached id in combination with resume="must" fixed the issue.
Corresponding snippet:
import wandb
# wandb run associated with evaluation after first N epochs of training.
wandb_id = wandb.util.generate_id()
wandb.init(id=wandb_id, project="alrichards", name="test-run-3/job-1", group="test-run-3")
wandb.log({"mean_evaluate_loss_epoch": 20}, step=1)
wandb.finish()
# wandb run associated with evaluation after second N epochs of training.
wandb.init(id=wandb_id, resume="must", project="alrichards", name="test-run-3/job-2", group="test-run-3")
wandb.log({"mean_evaluate_loss_epoch": 10}, step=5)
wandb.finish()
I have an UnboundedSource that generates N items (it's not in batch mode, it's a stream -- one that only generates a certain number of items and then stops emitting new items, but a stream nonetheless). Then I apply a certain PTransform to the collection I'm getting from that source. I also apply the Window.into(FixedWindows.of(...)) transform and then group the results by window using Combine. So it's kind of like this:
pipeline.apply(Read.from(new SomeUnboundedSource(...))) // extends UnboundedSource
    .apply(Window.into(FixedWindows.of(Duration.millis(5000))))
    .apply(new SomeTransform())
    .apply(Combine.globally(new SomeCombineFn()).withoutDefaults());
And I assumed that would mean new events are generated for 5 seconds, then SomeTransform is applied to the data in the 5-second window, then a new set of data is polled and therefore generated. Instead, all N events are generated first, and only after that is SomeTransform applied to the data (but the windowing works as expected).

Is it supposed to work like this? Do Beam and/or the runner (I'm using the Flink runner, but the Direct runner seems to exhibit the same behavior) have some sort of queue where they store items before passing them on to the next operator? Does that depend on what kind of UnboundedSource is used? In my case it's a generator of sorts. Is there a way to achieve the behavior that I expected, or is it unreasonable? I am very new to working with streaming pipelines in general, let alone Beam. I assume, however, it would be somewhat illogical to try to read everything from the source first, seeing as it's, you know, unbounded.
An important thing to note is that windows in Beam operate on event time, not processing time. Adding 5-second windows to your data does not prescribe how the data will be processed, only how the results of aggregations over that data are grouped. Further, windows only affect the data once an aggregation is reached, like your Combine.globally. Until that point in your pipeline, the windowing you applied has no effect.
As to whether it is supposed to work that way, the Beam model doesn't specify any particular processing behavior, so other runners may process elements slightly differently. However, this is still a correct implementation. It isn't trying to read everything from the source; generally, streaming sources in Beam will attempt to read all elements available before moving on and coming back to the source later. If you were to adjust your stream to emit elements slowly over a long period of time, you would likely see more processing in between reads from the source.
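To make the first point concrete, here is a minimal sketch using the Beam Python SDK (the Java pipeline above is analogous; the data and the transform are illustrative). The WindowInto step only tags each element with a window, the per-element transform is unaffected by it, and the windows only come into play once CombineGlobally aggregates elements per window.

import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create(range(20))                          # stand-in for the source
     | "Timestamps" >> beam.Map(lambda i: TimestampedValue(i, i))  # one element per "second" of event time
     | "Window" >> beam.WindowInto(FixedWindows(5))                # assigns 5 s windows, nothing more
     | "SomeTransform" >> beam.Map(lambda i: i * 2)                # runs per element, ignores windows
     | "Combine" >> beam.CombineGlobally(sum).without_defaults()   # windows only take effect here
     | "Print" >> beam.Map(print))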
I have been trying to set up a custom manipulation station with Kuka IIWA hardware in Drake. I got the hardware interface working. When running a joint teleoperation code (adapted from drake/examples/manipulation_station/joint_teleop.py), the robot jerks violently (all joints try to move to the 0 position) at first and then continues to operate normally. On digging deeper, I found that this is caused by the FirstOrderLowPassFilter system. While advancing the simulation a tiny bit (simulator.AdvanceTo(1e-6)) to evaluate the LCM messages and set the initial GUI sliders, filter initial output value, plant joint positions, etc., to match the hardware, the FirstOrderLowPassFilter outputs a momentary value of 0. This sets the IIWA_COMMAND position to zero for an instant and causes a jerk.
How can I avoid this behavior?
As a workaround, I am subscribing separately to the raw LCM message from the hardware before initializing the Drake systems, and setting the filter's initial output value before advancing the simulation. Is this the recommended way?
I think what you're doing (manually reading the LCM message) is fine.
As an alternative, note how DiscreteDerivative offers a suppress_initial_transient = true option. Perhaps we could add a similar option (via an unrestricted update event) to FirstOrderLowPassFilter so that the initial output value is sampled from the input at t == 0. But the event sequencing at startup may still be difficult. We essentially need to initialize the systems in their dataflow order, including refreshing output ports as events fire, which is not natively supported.
In another alternative, perhaps we could configure the IIWA_COMMAND publisher to not publish at t == 0, instead publishing only for t >= 0.005.
FirstOrderLowPassFilter has a method to set the initial value. https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_first_order_low_pass_filter.html#aaef7539cfbf1acfa0cf487c371bc5360
It is used in the example that you copied from:
https://github.com/RobotLocomotion/drake/blob/master/examples/manipulation_station/joint_teleop.py#L146
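For reference, a minimal sketch of the workaround described in the question, assuming the standard IIWA_STATUS channel, the lcmt_iiwa_status message type, and pydrake's FirstOrderLowPassFilter.set_initial_output_value (binding names may differ between Drake versions, so treat this as a sketch rather than the canonical solution):

import lcm
from drake import lcmt_iiwa_status

def wait_for_measured_iiwa_position(channel="IIWA_STATUS"):
    """Block until one status message arrives and return the measured joint positions."""
    measured = []

    def _on_status(_, data):
        msg = lcmt_iiwa_status.decode(data)
        measured[:] = msg.joint_position_measured

    lc = lcm.LCM()
    subscription = lc.subscribe(channel, _on_status)
    while not measured:
        lc.handle()  # blocks until a message arrives on the channel
    lc.unsubscribe(subscription)
    return measured

def seed_low_pass_filter(low_pass_filter, diagram, simulator, q_measured):
    """Make the filter's first output equal the measured positions instead of zero."""
    filter_context = diagram.GetMutableSubsystemContext(
        low_pass_filter, simulator.get_mutable_context())
    low_pass_filter.set_initial_output_value(filter_context, q_measured)

Calling these after building the diagram and simulator, but before simulator.AdvanceTo(...), seeds the filter with the measured joint positions so that the first IIWA_COMMAND matches the hardware instead of zero.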
I'm trying to figure out the best, or at least a reasonable, approach to defining alerts in InfluxDB. For example, I might use the CPU batch TICKscript that comes with Telegraf. This could be set up as a global monitor/alert for all hosts being monitored by Telegraf.
What is the approach when you want to deviate from the above setup for a host, i.e. instead of X% for a specific server we want to alert on Y%?
I'm happy that a distinct TICKscript could be created for the custom values, but how do I go about excluding the host from the original 'global' one?
This is a simple scenario but this needs to meet the needs of 10,000 hosts of which there will be 100s of exceptions and this will also encompass 10s/100s of global alert definitions.
I'm struggling to see how you could use the platform as the primary source of monitoring/alerting.
As said in the comments, you can use the sideload node to achieve that.
Say you want to ensure that your InfluxDB servers are not overloaded. You may want to allow 100 measurements by default. Only on one server, which happens to get a massive number of datapoints, you want to limit it to 10 (a value which is easily exceeded by the _internal database, but good for our example).
Given the following excerpt from a tick script
var data = stream
    |from()
        .database(db)
        .retentionPolicy(rp)
        .measurement(measurement)
        .groupBy(groupBy)
        .where(whereFilter)
    |eval(lambda: "numMeasurements")
        .as('value')

var customized = data
    |sideload()
        .source('file:///etc/kapacitor/customizations/demo/')
        .order('hosts/host-{{.hostname}}.yaml')
        .field('maxNumMeasurements', 100)
    |log()

var trigger = customized
    |alert()
        .crit(lambda: "value" > "maxNumMeasurements")
and the name of the server with the exception being influxdb, and the file /etc/kapacitor/customizations/demo/hosts/host-influxdb.yaml looking as follows:
maxNumMeasurements: 10
A critical alert will be triggered if value (and hence numMeasurements) exceeds 10 AND the hostname tag equals influxdb, OR if value exceeds 100 on any other host.
There is an example in the documentation handling scheduled downtimes using sideload.
Furthermore, I have created an example available on GitHub using docker-compose.
Note that there is a caveat with the example: the alert flaps because of a second, dynamically generated database. But it should be sufficient to show how to approach the problem.
What is the cost of using sideload nodes in terms of performance and computation if you have over 10 thousand servers?
Managing alerts manually and directly in Chronograf/Kapacitor is not feasible for a big number of custom alerts.
At AMMP Technologies we need to manage alerts per database, customer, and customer_objects. The number can go into the 1000s. We've opted for a custom solution where we keep a standard set of template tickscripts (not to be confused with Kapacitor templates), and we provide an interface to the user where we only expose the relevant variables. A service (written in Python) then combines the values for those variables with a tickscript and, using the Kapacitor API, deploys (updates, or deletes) the task on the Kapacitor server. This is automated so that data for new customers/objects is combined with the templates and automatically deployed to Kapacitor.
You obviously need to design your tasks to be specific enough so that they don't overlap and generic enough so that it's not too much work to create tasks for every little thing.
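To illustrate that kind of deployment service, here is a minimal sketch that renders a template and pushes it through Kapacitor's HTTP task API at /kapacitor/v1/tasks. The template, its placeholders, the task id, and the thresholds are all illustrative; only the endpoint and the request fields follow the Kapacitor task API documentation.

import requests

KAPACITOR_URL = "http://localhost:9092/kapacitor/v1/tasks"  # assumed local Kapacitor instance

# Illustrative template TICKscript; {host} and {threshold} are our own
# placeholders, rendered per customer/object before deployment.
TEMPLATE = """
stream
    |from()
        .measurement('cpu')
        .where(lambda: "host" == '{host}')
    |alert()
        .crit(lambda: "usage_user" > {threshold})
"""

def deploy_task(task_id, host, threshold, db="telegraf", rp="autogen"):
    """Create the task if it does not exist yet, otherwise update it in place."""
    body = {
        "id": task_id,
        "type": "stream",
        "dbrps": [{"db": db, "rp": rp}],
        "script": TEMPLATE.format(host=host, threshold=threshold),
        "status": "enabled",
    }
    # PATCH updates an existing task; fall back to POST to create a new one.
    resp = requests.patch(f"{KAPACITOR_URL}/{task_id}", json=body)
    if resp.status_code == 404:
        resp = requests.post(KAPACITOR_URL, json=body)
    resp.raise_for_status()

# e.g. a per-host exception to the global CPU alert:
deploy_task("cpu_alert_host7", host="host7", threshold=95)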
By default, Xcode's performance tests are run ten times, and my result is the average of those ten tests. The problem is that the averaged result varies considerably each time I run it, so I have to run the test at least five times to get a converged result. This is both tedious and time-consuming; is there a way to configure either Xcode or the unit test itself to run more than ten times?
A class dump of XCTestCase exposes this method:
- (void)_recordValues:(id)arg1 forPerformanceMetricID:(id)arg2 name:(id)arg3 unitsOfMeasurement:(id)arg4 baselineName:(id)arg5 baselineAverage:(id)arg6 maxPercentRegression:(id)arg7 maxPercentRelativeStandardDeviation:(id)arg8 maxRegression:(id)arg9 maxStandardDeviation:(id)arg10 file:(id)arg11 line:(unsigned long long)arg12;
When this method is swizzled, the first parameter (arg1) contains the 10 durations:
["0.003544568",
"0.003456569",
"0.003198263",
"0.003257955",
"0.003508724",
"0.003454298",
"0.003461192",
"0.00423787",
"0.003359195",
"0.003335757"]
I added 4 new values (1.0, 2.0, 3.0, 4.0) to the end of this list before passing it back to the original implementation, but unfortunately a different class that observes it, XCTestLog, has an internal sanity check that gets tripped:
Assertion failure in +[XCTestLog _messageForTest:didMeasureValues:forPerformanceMetricID:name:unitsOfMeasurement:baselineName:baselineAverage:maxPercentRegression:maxPercentRelativeStandardDeviation:maxRegression:maxStandardDeviation:file:line:]
caught "NSInternalInconsistencyException", "Performance Metrics must provide 10 measurements."
Once the XCTestLog method is also overridden so it doesn't assert, the additional 4 values can be added without any complaints. Unfortunately, the view still only shows the 10 results.
It does, however, update the total time and standard deviation values in the mini view.
Before Swizzling
After Swizzling and adding 4 values
In order to view more than 10 results, one would probably have to tweak the Xcode runtime to tell the table to show more items.
The short answer: No, there is no current interface exposed to allow a measure block to run more than ten times.
The longer answer: No, but there is an interface exposed to modify certain metrics of the measure block. The default metrics are in a String array returned from defaultPerformanceMetrics. There appears to only be one metric supported right now: XCTPerformanceMetric_WallClockTime. This only specifies a performance metric that records the time in seconds between calls to a performance test's startMeasuring() and stopMeasuring() methods, and not the number of times the block is run.
In the latest Xcode (11.0+) you don't need swizzling to change the iteration count. Use the following function:
func measure(options: XCTMeasureOptions, block: () -> Void)
This will allow you to specify XCTMeasureOptions, which has an iterationCount property.
Interesting note from docs:
A performance test runs its block iterationCount+1 times, ignoring the first iteration and recording metrics for the remaining iterations. The test ignores the first iteration to reduce measurement variance associated with “warming up” caches and other first-run behavior.
I typically expand the Standard Deviation value to include the range I'd accept.
One obvious workaround for you is to perform the operation 5 times within the measure block you are executing. (You'll want to change the expected time.)
measure {
    for _ in 1...5 {
        // ...
    }
}