Dropwizard Metrics has a meter type:
https://metrics.dropwizard.io/3.1.0/getting-started/#meters
It lets me measure the rate of events just by invoking the mark() method on the metric.
How can I do that in Micrometer?
I can use timers, but I don't want to pass a Timer.Sample object around to every place where I need to call the stop() method.
The other thing missing in Micrometer compared to Dropwizard is a metric that can contain a text message, like a gauge in Dropwizard.
Micrometer leverages the strengths of modern metrics backends, so the specific answer to your question depends on which one you are using. Take Prometheus, for example: the backend can calculate the rate for you.
If you are measuring how often something happens, you can determine that rate using a Counter. Take the logback_events_total counter as an example: it merely counts the number of log messages written.
When alerting or graphing you can then write a query like rate(logback_events_total[1m]) and you will see the rate at which logs have been written over a 1m window. You can change the window from 1m to 5m or 1h without changing the code.
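For instance, here is a minimal sketch of the counting side (assuming a plain SimpleMeterRegistry and a hypothetical my.events metric; in a real app the registry would come from your framework or be a PrometheusMeterRegistry):
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
public class EventRateExample {
public static void main(String[] args) {
MeterRegistry registry = new SimpleMeterRegistry();
// Rough equivalent of Dropwizard's meter.mark(): just count the events
// and let the backend (e.g. Prometheus rate()) derive the rate.
Counter events = Counter.builder("my.events")
.description("number of events processed")
.register(registry);
events.increment(); // one event
events.increment(5.0); // five events at once
}
}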
Regarding text-based metrics: those aren't useful for alerting (but can be useful when using a join clause). The typical solution in that case is to create a gauge with a value of 1 or 0 and make your text value a tag. For example:
registry.gauge("app.info", Tags.of("version", "1.0.beta3"), this, o -> 1.0);
We had the same problem. In Dropwizard we were able to use meters to get the rate of events per minute, but in Micrometer we could not find a built-in way that worked for us.
We needed rates for counters and percentiles for timers. The PrometheusMeterRegistry gave us percentiles, but no rates.
So we built our own Gauge that tracks a Counter. Every time getValue() is called, it fetches the value from the counter and adds it to the right bucket with the current timestamp. Then from all available measurements it can compute the rate over the last minute.
It looks like this:
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import java.util.LinkedList;
import java.util.function.Supplier;
public class OneMinuteRateGauge {
private static final int WINDOW_SECONDS = 60;
private final Supplier<Double> valueSupplier;
private final LinkedList<Bucket> buckets;
private final Clock clock;
public OneMinuteRateGauge(String name, Supplier<Double> valueSupplier, MeterRegistry meterRegistry) {
this(name, valueSupplier, meterRegistry, Clock.SYSTEM);
}
public OneMinuteRateGauge(String name, Supplier<Double> valueSupplier, MeterRegistry meterRegistry, Clock clock) {
this.valueSupplier = valueSupplier;
this.buckets = new LinkedList<>();
Gauge.builder(name, this::getValue).register(meterRegistry);
this.clock = clock;
// Collect one measurement so we have a faster start
getValue();
}
public synchronized double getValue() {
// Update the last bucket or create a new one
long now_millis = clock.monotonicTime() / 1_000_000;
long now_seconds = now_millis / 1_000;
short millis = (short) (now_millis - (now_seconds * 1000));
double value = valueSupplier.get();
if (buckets.size() != 0 && buckets.getLast().getSeconds() == now_seconds) {
buckets.getLast().updateValue(millis, value);
} else {
buckets.addLast(new Bucket(now_seconds, millis, value));
}
// Delete all buckets outside the window except one
while (2 < buckets.size() && buckets.get(1).getSeconds() + WINDOW_SECONDS < now_seconds) {
buckets.pollFirst();
}
if (buckets.size() == 1) {
// Not enough data
return 0;
} else if (now_seconds <= buckets.getFirst().getSeconds() + WINDOW_SECONDS) {
// First bucket is inside the window
return buckets.getLast().getValue() - buckets.getFirst().getValue();
} else {
// Find the weighted average between the first two points
Bucket p0 = buckets.get(0);
Bucket p1 = buckets.get(1);
double px = now_millis - (WINDOW_SECONDS * 1000);
double m = (p1.getValue() - p0.getValue()) / (p1.getTimestampInMillis() - p0.getTimestampInMillis());
double py = m * (px - p0.getTimestampInMillis()) + p0.getValue();
return value - py;
}
}
}
public class Bucket {
private long seconds; // Seconds since 1.1.1970, used as bucket ID
private short millis; // 0-999, used for a more exact calculation
private double value;
public Bucket(long seconds, short millis, double value) {
this.seconds = seconds;
this.millis = millis;
this.value = value;
}
public long getSeconds() {
return seconds;
}
public double getValue() {
return value;
}
public long getTimestampInMillis() {
return seconds * 1000 + millis;
}
public void updateValue(short millis, double value) {
this.millis = millis;
this.value = value;
}
}
An alternative would have been to use a CompositeMeterRegistry at the top level and add both a PrometheusMeterRegistry and a StepMeterRegistry: Prometheus reports percentiles and the step registry reports rates. Our monitoring system would then have had to query two endpoints.
This was a temporary solution until we modified our monitoring system to read the Prometheus endpoint and calculate its own rates.
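For reference, the composite setup might have looked roughly like this (a sketch only; StepMeterRegistry itself is abstract, so a concrete subclass such as a StatsD or Datadog registry would stand in for stepRegistry):
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.composite.CompositeMeterRegistry;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
CompositeMeterRegistry composite = new CompositeMeterRegistry();
// Prometheus side: scrapeable endpoint, can report percentiles
composite.add(new PrometheusMeterRegistry(PrometheusConfig.DEFAULT));
// Step side: a concrete StepMeterRegistry subclass that publishes rates
// composite.add(stepRegistry);
Metrics.addRegistry(composite);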
Related
I would like to set a Timer in Event time that fires based on the smallest timestamp seen in the elements within my DoFn.
For performance reasons the Timer API does not support a read() operation, which for the vast majority of use cases is not a required feature. In the small set of use cases where it is needed, for example when you need to set a Timer in EventTime based on the smallest timestamp seen in the elements within a DoFn, we can make use of a State object to keep track of the value.
Java (SDK 2.10.0)
// In this pattern, a Timer is set to fire based on the lowest timestamp seen in the DoFn.
public class SetEventTimeTimerBasedOnEarliestElementTime {
private static final Logger LOG = LoggerFactory
.getLogger(SetEventTimeTimerBasedOnEarliestElementTime.class);
public static void main(String[] args) {
// Create pipeline
PipelineOptions options = PipelineOptionsFactory.
fromArgs(args).withValidation().as(PipelineOptions.class);
// We will start our timer at a fixed point
Instant now = Instant.parse("2000-01-01T00:00:00Z");
// ----- Create some dummy data
// Create 3 elements, incrementing by 1 minute
TimestampedValue<KV<String, Integer>> time_1 = TimestampedValue.of(KV.of("Key_A", 1), now);
TimestampedValue<KV<String, Integer>> time_2 = TimestampedValue
.of(KV.of("Key_A", 2), now.plus(Duration.standardMinutes(1)));
TimestampedValue<KV<String, Integer>> time_3 = TimestampedValue
.of(KV.of("Key_A", 3), now.plus(Duration.standardMinutes(2)));
Pipeline p = Pipeline.create(options);
// Apply a fixed window of duration 10 min and Sum the results
p.apply(Create.timestamped(time_3, time_2, time_1)).apply(
Window.<KV<String, Integer>>into(FixedWindows.<Integer>of(Duration.standardMinutes(10))))
.apply(ParDo.of(new StatefulDoFnThatSetTimerBasedOnSmallestTimeStamp()));
p.run();
}
/**
* Set timer to the lowest value that we see in the stateful DoFn
*/
public static class StatefulDoFnThatSetTimerBasedOnSmallestTimeStamp
extends DoFn<KV<String, Integer>, KV<String, Integer>> {
// Due to performance considerations there is no read on a timer object.
// We make use of this Long value to keep track.
@StateId("currentTimerValue") private final StateSpec<ValueState<Long>> currentTimerValue =
StateSpecs.value(BigEndianLongCoder.of());
@TimerId("timer") private final TimerSpec timer = TimerSpecs.timer(TimeDomain.EVENT_TIME);
@ProcessElement public void process(ProcessContext c,
@StateId("currentTimerValue") ValueState<Long> currentTimerValue,
@TimerId("timer") Timer timer) {
Instant timeStampWeWantToSet = c.timestamp();
//*********** Set Timer
// If the timer has never been set then we set it.
// If the timer has been set but is larger than our current value then we set it.
if (currentTimerValue.read() == null || timeStampWeWantToSet.getMillis() < currentTimerValue
.read()) {
timer.set(timeStampWeWantToSet);
currentTimerValue.write(timeStampWeWantToSet.getMillis());
}
}
@OnTimer("timer") public void onMinTimer(OnTimerContext otc,
@StateId("currentTimerValue") ValueState<Long> currentTimerValue,
@TimerId("timer") Timer timer) {
// Reset the currentTimerValue
currentTimerValue.clear();
LOG.info("Timer @ {} fired", otc.timestamp());
}
}
}
I want to count the total number of rows in a file.
Please explain your code if possible.
String fileAbsolutePath = "gs://sourav_bucket_dataflow/" + fileName;
PCollection<String> data = p.apply("Reading Data From File", TextIO.read().from(fileAbsolutePath));
PCollection<Long> count = data.apply(Count.<String>globally());
Now I want to get the value.
There are a variety of sinks that you can use to get data out of your pipeline. https://beam.apache.org/documentation/io/built-in/ has a list of the current built in IO transforms.
It sort of depends on what you want to do with that number. Assuming you want to use it in your future transformations, you may want to convert it to a PCollectionView object and pass it as a side input to other transformations.
PCollection<String> data = p.apply("Reading Data From File", TextIO.read().from(fileAbsolutePath));
PCollection<Long> count = data.apply(Count.<String>globally());
final PCollectionView<Long> view = count.apply(View.asSingleton());
A quick example to show you how to use the value as a side input:
data.apply(ParDo.of(new FuncFn(view)).withSideInputs(view));
Where:
class FuncFn extends DoFn<String,String>
{
private final PCollectionView<Long> mySideInput;
public FuncFn(PCollectionView<Long> mySideInput) {
this.mySideInput = mySideInput;
}
@ProcessElement
public void processElement(ProcessContext c) throws IOException
{
Long count = c.sideInput(mySideInput);
//other stuff you may want to do
}
}
Hope that helps!
where "input" in line 1 is the input. This will work.
PCollection<Long> number = input.apply(Count.globally());
number.apply(MapElements.via(new SimpleFunction<Long, Long>()
{
public Long apply(Long total)
{
System.out.println("Length is: " + total);
return total;
}
}));
I want to implement a game loop (acting as a server) in Erlang, but I don't know how to deal with the lack of mutable variables.
What I want to do, described in Java code:
class Game {
int posX, posY;
int wall = 10;
int roof = 20;
public void newPos(int x, int y) {
if(!collision(x,y)) {
posX = x;
posY = y;
}
}
public boolean collision(int x, int y) {
if(x == wall || y == roof) {
// The player hit the wall.
return true;
}
return false;
}
public void sendStateToClient() {
// Send posX and posY back to client
}
public static void main(String[] args) {
Game game = new Game();
// The game loop
while(true) {
// Send current state to the client
game.sendStateToClient();
// Some delay here...
}
}
}
If the client moves, the newPos() function is called. This function updates the coordinate variables if a collision does not occur. The game loop runs forever, sending the current state back to the client so that the client can paint it.
Now I want to implement this logic in Erlang, but I don't know where to begin. I can't set the variables posX and posY the way I do here... My only thought is some kind of recursive loop where the coordinates are arguments, but I don't know if that is the right way to go either...
Your intuition is correct: a recursive loop with the state as a parameter is the standard Erlang approach.
This concept is often abstracted away by using one of the server behaviors in OTP.
Simple example, may contain errors:
game_loop(X, Y) ->
receive
{moveto, {NewX, NewY}} ->
notifyClient(NewX, NewY),
game_loop(NewX, NewY)
end.
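If it helps, here is a slightly fuller sketch along the same lines, adding the wall/roof collision check from the Java version (untested; notify_client/2 is a placeholder for however you report state to the client):
-module(game).
-export([start/0, game_loop/2]).
-define(WALL, 10).
-define(ROOF, 20).
start() ->
spawn(?MODULE, game_loop, [0, 0]).
game_loop(PosX, PosY) ->
receive
{moveto, {X, Y}} ->
case collision(X, Y) of
true -> game_loop(PosX, PosY); % hit the wall: keep the old position
false -> game_loop(X, Y) % "assignment" happens by recursing
end
after 100 ->
% no move for 100 ms: send the current state to the client
notify_client(PosX, PosY),
game_loop(PosX, PosY)
end.
collision(X, Y) ->
X =:= ?WALL orelse Y =:= ?ROOF.
notify_client(X, Y) ->
io:format("pos = {~p,~p}~n", [X, Y]).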
Is there a Streams equivalent to Observable.Throttle? If not -- is there any reasonably elegant way to achieve a similar effect?
There's no such method on streams for now. An enhancement request has been filed; you can star issue 8492.
However, you can do it with the where method. In the following example, I have defined a ThrottleFilter class to ignore events for a given duration:
import 'dart:async';
class ThrottleFilter<T> {
DateTime lastEventDateTime = null;
final Duration duration;
ThrottleFilter(this.duration);
bool call(T e) {
final now = new DateTime.now();
if (lastEventDateTime == null ||
now.difference(lastEventDateTime) > duration) {
lastEventDateTime = now;
return true;
}
return false;
}
}
main() {
final sc = new StreamController<int>();
final stream = sc.stream;
// filter stream with ThrottleFilter
stream.where(new ThrottleFilter<int>(const Duration(seconds: 10)).call)
.listen(print);
// send ints to stream every second, but ThrottleFilter will give only one int
// every 10 sec.
int i = 0;
new Timer.repeating(const Duration(seconds:1), (t) { sc.add(i++); });
}
The rate_limit package provides throttling and debouncing of Streams.
@Victor Savkin's answer is nice, but I always try to avoid reinventing the wheel, so unless you really only need that throttle I'd suggest using the RxDart package. Since you are dealing with Streams and other reactive objects, RxDart has a lot of nice goodies to offer besides throttling.
We can achieve a 500 millisecond throttle several ways:
throttleTime from the ThrottleExtensions<T> extension on Stream<T>: stream.throttleTime(Duration(milliseconds: 500)).listen(print);
Combining ThrottleStreamTransformer with TimerStream: stream.transform(ThrottleStreamTransformer((_) => TimerStream(true, const Duration(milliseconds: 500)))).listen(print);
Using Debounce Extensions / DebounceStreamTransformer: stream.debounceTime(Duration(milliseconds: 500)).listen(print);
There are some subtle differences regarding delays, but all of them throttle. For an example of throttleTime vs. debounceTime, see What is the difference between throttleTime vs debounceTime in RxJS and when to choose which?
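For completeness, a minimal runnable sketch of the throttleTime variant (assuming the rxdart package is declared in pubspec.yaml):
import 'package:rxdart/rxdart.dart';
void main() {
// Emits 0,1,2,... every 100 ms; throttleTime lets at most one event
// through per 500 ms window.
Stream.periodic(const Duration(milliseconds: 100), (i) => i)
.throttleTime(const Duration(milliseconds: 500))
.listen(print);
}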
The following version is closer to what Observable.Throttle does:
class Throttle extends StreamEventTransformer {
final duration;
Timer lastTimer;
Throttle(millis) :
duration = new Duration(milliseconds : millis);
void handleData(event, EventSink<int> sink) {
if(lastTimer != null){
lastTimer.cancel();
}
lastTimer = new Timer(duration, () => sink.add(event));
}
}
main(){
//...
stream.transform(new Throttle(500)).listen((_) => print(_));
//..
}
Let's say I have an ActivePivot cube with facts containing just Value and Currency.
Let's say my cube has Currency as a regular dimension.
We fill the cube with facts that have many currencies.
We have a forex service that takes a currency and a reference currency and returns a rate.
Now, Value.SUM doesn't make any sense: we would be adding up values in different currencies. So we want a post processor that converts all values to a reference currency, say USD, and then sums them. We write a post processor that extends ADynamicAggregationPostProcessor, specify Currency as a leaf-level dimension, use the forex service to do the conversion, and we are happy.
But let's say we don't want to convert just to USD; we want to convert to 10 different currencies and see the results next to each other on the screen.
So we create an Analysis dimension, say ReferenceCurrency, with 10 members.
My question is: how can I alter the above post processor to handle the Analysis dimension? The plain vanilla ADynamicAggregationPostProcessor does not handle Analysis dimensions, only the default member is visible to this post processor. Other post processors that handle Analysis dimensions, like DefaultAggregatePostProcessor do not have a means for specifying leaf levels, so I cannot get the aggregates by Currency, and so cannot do the forex conversion. How can I have my cake and eat it too?
It looks like you want to use two advanced features of ActivePivot at the same time (Analysis dimensions to expose several outcomes of the same aggregate, and dynamic aggregation to aggregate amounts expressed in different currencies).
Separately, each one is fairly easy to set up through configuration and a few lines of code to inject. But to interleave both you will need to understand the internals of post processor evaluation, and inject business logic at the right places.
Here is an example based on ActivePivot 4.3.3. It has been written in the open-source Sandbox Application so that you can run it quickly before adapting it to your own project.
First we need a simple analysis dimension to hold the possible reference currencies:
package com.quartetfs.pivot.sandbox.postprocessor.impl;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import com.quartetfs.biz.pivot.cube.hierarchy.axis.impl.AAnalysisDimension;
import com.quartetfs.fwk.QuartetExtendedPluginValue;
/**
*
* An analysis dimension bearing the
* list of possible reference currencies.
*
* @author Quartet FS
*
*/
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.cube.hierarchy.IDimension", key = ReferenceCurrencyDimension.TYPE)
public class ReferenceCurrencyDimension extends AAnalysisDimension {
/** serialVersionUID */
private static final long serialVersionUID = 42706811331081328L;
/** Default reference currency */
public static final String DEFAULT_CURRENCY = "EUR";
/** Static list of non-default possible reference currencies */
public static final List<Object[]> CURRENCIES;
static {
List<Object[]> currencies = new ArrayList<Object[]>();
currencies.add(new Object[] {"USD"});
currencies.add(new Object[] {"GBP"});
currencies.add(new Object[] {"JPY"});
CURRENCIES = Collections.unmodifiableList(currencies);
}
/** Plugin type */
public static final String TYPE = "REF_CCY";
/** Constructor */
public ReferenceCurrencyDimension(String name, int ordinal, Properties properties, Set<String> measureGroups) {
super(name, ordinal, properties, measureGroups);
}
@Override
public Object getDefaultDiscriminator(int levelOrdinal) { return DEFAULT_CURRENCY; }
@Override
public Collection<Object[]> buildDiscriminatorPaths() { return CURRENCIES; }
@Override
public int getLevelsCount() { return 1; }
@Override
public String getLevelName(int levelOrdinal) {
return levelOrdinal == 0 ? "Currency" : super.getLevelName(levelOrdinal);
}
@Override
public String getType() { return TYPE; }
}
Then the post processor itself, a customized dynamic aggregation post processor modified to handle the analysis dimension and output the same aggregate multiple times, one time per reference currency.
package com.quartetfs.pivot.sandbox.postprocessor.impl;
import java.util.List;
import java.util.Properties;
import com.quartetfs.biz.pivot.IActivePivot;
import com.quartetfs.biz.pivot.ILocation;
import com.quartetfs.biz.pivot.ILocationPattern;
import com.quartetfs.biz.pivot.aggfun.IAggregationFunction;
import com.quartetfs.biz.pivot.cellset.ICellSet;
import com.quartetfs.biz.pivot.cube.hierarchy.IDimension;
import com.quartetfs.biz.pivot.cube.hierarchy.axis.IAxisMember;
import com.quartetfs.biz.pivot.impl.Location;
import com.quartetfs.biz.pivot.postprocessing.impl.ADynamicAggregationPostProcessor;
import com.quartetfs.biz.pivot.postprocessing.impl.ADynamicAggregationProcedure;
import com.quartetfs.biz.pivot.query.IQueryCache;
import com.quartetfs.biz.pivot.query.aggregates.IAggregatesRetriever;
import com.quartetfs.biz.pivot.query.aggregates.RetrievalException;
import com.quartetfs.fwk.QuartetException;
import com.quartetfs.fwk.QuartetExtendedPluginValue;
import com.quartetfs.pivot.sandbox.service.impl.ForexService;
import com.quartetfs.tech.type.IDataType;
import com.quartetfs.tech.type.impl.DoubleDataType;
/**
* Forex post processor with two features:
* <ul>
* <li>Dynamically aggregates amounts in their native currencies into reference currency
* <li>Applies several reference currencies, exploded along an analysis dimension.
* </ul>
*
* @author Quartet FS
*/
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.postprocessing.IPostProcessor", key = ForexPostProcessor.TYPE)
public class ForexPostProcessor extends ADynamicAggregationPostProcessor<Double> {
/** serialVersionUID */
private static final long serialVersionUID = 15874126988574L;
/** post processor plugin type */
public final static String TYPE = "FOREX";
/** Post processor return type */
private static final IDataType<Double> DATA_TYPE = new DoubleDataType();
/** Ordinal of the native currency dimension */
protected int nativeCurrencyDimensionOrdinal;
/** Ordinal of the native currency level */
protected int nativeCurrencyLevelOrdinal;
/** Ordinal of the reference currencies dimension */
protected int referenceCurrenciesOrdinal;
/** forex service*/
private ForexService forexService;
/** constructor */
public ForexPostProcessor(String name, IActivePivot pivot) {
super(name, pivot);
}
/** Don't forget to inject the Forex service into the post processor */
public void setForexService(ForexService forexService) {
this.forexService = forexService;
}
/** post processor initialization */
@Override
public void init(Properties properties) throws QuartetException {
super.init(properties);
nativeCurrencyDimensionOrdinal = leafLevelsOrdinals.get(0)[0];
nativeCurrencyLevelOrdinal = leafLevelsOrdinals.get(0)[1];
IDimension referenceCurrenciesDimension = getDimension("ReferenceCurrencies");
referenceCurrenciesOrdinal = referenceCurrenciesDimension.getOrdinal();
}
/**
* Handling of the analysis dimension:<br>
* Before retrieving leaves, wildcard the reference currencies dimension.
*/
protected ICellSet retrieveLeaves(ILocation location, IAggregatesRetriever retriever) throws RetrievalException {
ILocation baseLocation = location;
if(location.getLevelDepth(referenceCurrenciesOrdinal-1) > 0) {
Object[][] array = location.arrayCopy();
array[referenceCurrenciesOrdinal-1][0] = null; // wildcard
baseLocation = new Location(array);
}
return super.retrieveLeaves(baseLocation, retriever);
}
/**
* Perform the evaluation of the post processor on a leaf (as defined in the properties).
* Here the leaf level is the UnderlierCurrency level in the Underlyings dimension.
*/
@Override
protected Double doLeafEvaluation(ILocation leafLocation, Object[] underlyingMeasures) throws QuartetException {
// Extract the native and reference currencies from the evaluated location
String currency = (String) leafLocation.getCoordinate(nativeCurrencyDimensionOrdinal-1, nativeCurrencyLevelOrdinal);
String refCurrency = (String) leafLocation.getCoordinate(referenceCurrenciesOrdinal-1, 0);
// Retrieve the measure in the native currency
double nativeAmount = (Double) underlyingMeasures[0];
// If currency is reference currency or measureNative is equal to 0.0 no need to convert
if ((currency.equals(refCurrency)) || (nativeAmount == .0) ) return nativeAmount;
// Retrieve the rate and rely on the IQueryCache
// in order to retrieve the same rate for the same currency for our query
IQueryCache queryCache = pivot.getContext().get(IQueryCache.class);
Double rate = (Double) queryCache.get(currency + "_" + refCurrency);
if(rate == null) {
Double rateRetrieved = forexService.retrieveQuotation(currency, refCurrency);
Double rateCached = (Double) queryCache.putIfAbsent(currency + "_" + refCurrency, rateRetrieved);
rate = rateCached == null ? rateRetrieved : rateCached;
}
// Compute equivalent in reference currency
return rate == null ? nativeAmount : nativeAmount * rate;
}
@Override
protected IDataType<Double> getDataType() { return DATA_TYPE; }
/** @return the type of this post processor, within the post processor extended plugin. */
@Override
public String getType() { return TYPE; }
/**
* @return our own custom dynamic aggregation procedure,
* so that we can inject our business logic.
*/
protected DynamicAggregationProcedure createProcedure(ICellSet cellSet, IAggregationFunction aggregationFunction, ILocationPattern pattern) {
return new DynamicAggregationProcedure(cellSet, aggregationFunction, pattern);
}
/**
* Custom dynamic aggregation procedure.<br>
* When the procedure is executed over a leaf location,
* we produce several aggregates instead of only one:
* one aggregate for each of the visible reference currencies.
*/
protected class DynamicAggregationProcedure extends ADynamicAggregationProcedure<Double> {
protected DynamicAggregationProcedure(ICellSet cellSet, IAggregationFunction aggregationFunction, ILocationPattern pattern) {
super(ForexPostProcessor.this, aggregationFunction, cellSet, pattern);
}
/**
* Execute the procedure over one row of the leaf cell set.
* We compute one aggregate for each of the reference currencies.
*/
@Override
public boolean execute(ILocation location, int rowId, Object[] measures) {
if(location.getLevelDepth(referenceCurrenciesOrdinal-1) > 0) {
// Lookup the visible reference currencies
IDimension referenceCurrenciesDimension = pivot.getDimensions().get(referenceCurrenciesOrdinal);
List<IAxisMember> referenceCurrencies = (List<IAxisMember>) referenceCurrenciesDimension.retrieveMembers(0);
for(IAxisMember member : referenceCurrencies) {
Object[][] array = location.arrayCopy();
array[referenceCurrenciesOrdinal-1][0] = member.getDiscriminator();
ILocation loc = new Location(array);
super.execute(loc, rowId, measures);
}
return true;
} else {
return super.execute(location, rowId, measures);
}
}
@Override
protected Double doLeafEvaluation(ILocation location, Object[] measures) throws QuartetException {
return ForexPostProcessor.this.doLeafEvaluation(location, measures);
}
}
}
In the description of your cube, the analysis dimension and the post processor would be exposed like this:
...
<dimension name="ReferenceCurrencies" pluginKey="REF_CCY" />
...
<measure name="cross" isIntrospectionMeasure="false">
<postProcessor pluginKey="FOREX">
<properties>
<entry key="id" value="pv.SUM" />
<entry key="underlyingMeasures" value="pv.SUM" />
<entry key="leafLevels" value="UnderlierCurrency#Underlyings" />
</properties>
</postProcessor>
</measure>
...
Analysis dimensions bring so much complexity that they should be considered separately from the other features of your cube. One way to handle your issue is then:
Add a first measure that expands properly along the analysis dimension. In your case it would simply copy the underlying measure along ReferenceCurrency, optionally doing the FX conversion. This measure may be used as the underlying of several measures.
Add a second measure, based on the usual dynamic aggregation. This second implementation is very simple, as it does not know there is an analysis dimension. A sketch of the corresponding cube configuration follows.
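In the cube description, that two-measure setup could be sketched like this (the plugin keys CCY_EXPLODE and DYN_AGG and the measure names are hypothetical; only the structure mirrors the configuration shown earlier):
<measure name="pv.byRefCcy" isIntrospectionMeasure="false">
<postProcessor pluginKey="CCY_EXPLODE">
<properties>
<entry key="underlyingMeasures" value="pv.SUM" />
</properties>
</postProcessor>
</measure>
<measure name="pv.converted" isIntrospectionMeasure="false">
<postProcessor pluginKey="DYN_AGG">
<properties>
<entry key="underlyingMeasures" value="pv.byRefCcy" />
<entry key="leafLevels" value="UnderlierCurrency@Underlyings" />
</properties>
</postProcessor>
</measure>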