ActivePivot leaf level aggregation and Analysis dimension - analysis

Let's say I have an ActivePivot cube with facts containing just Value and Currency.
Let's say my cube has Currency as a regular dimension.
We fill the cube with facts that have many currencies.
We have a forex service that takes the currency and reference currency to get a rate.
Now, Value.SUM doesn't make any sense: we would be adding up values in different currencies. So we want a post processor that converts all values to a reference currency, say USD, and then sums them. We write a post processor that extends ADynamicAggregationPostProcessor, specify Currency as a leaf level, use the forex service to do the conversion, and we are happy.
But let's say we don't want to convert just to USD; we want to convert to 10 different currencies and see the results next to each other on the screen.
So we create an Analysis dimension, say ReferenceCurrency, with 10 members.
My question is: how can I alter the above post processor to handle the Analysis dimension? The plain vanilla ADynamicAggregationPostProcessor does not handle Analysis dimensions: only the default member is visible to this post processor. Other post processors that handle Analysis dimensions, like DefaultAggregatePostProcessor, do not have a means of specifying leaf levels, so I cannot get the aggregates by Currency and therefore cannot do the forex conversion. How can I have my cake and eat it too?

It looks like you want to use two advanced features of ActivePivot at the same time (Analysis dimensions to expose several outcomes of the same aggregate, and dynamic aggregation to aggregate amounts expressed in different currencies).
Separately, each one is fairly easy to set up through configuration and a few lines of injected code. But to interleave both you will need to understand the internals of post processor evaluation and inject business logic at the right places.
Here is an example based on ActivePivot 4.3.3. It has been written in the open-source Sandbox Application so that you can run it quickly before adapting it to your own project.
First we need a simple analysis dimension to hold the possible reference currencies:
package com.quartetfs.pivot.sandbox.postprocessor.impl;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import com.quartetfs.biz.pivot.cube.hierarchy.axis.impl.AAnalysisDimension;
import com.quartetfs.fwk.QuartetExtendedPluginValue;
/**
*
* An analysis dimension bearing the
* list of possible reference currencies.
*
* @author Quartet FS
*
*/
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.cube.hierarchy.IDimension", key = ReferenceCurrencyDimension.TYPE)
public class ReferenceCurrencyDimension extends AAnalysisDimension {
/** serialVersionUID */
private static final long serialVersionUID = 42706811331081328L;
/** Default reference currency */
public static final String DEFAULT_CURRENCY = "EUR";
/** Static list of non-default possible reference currencies */
public static final List<Object[]> CURRENCIES;
static {
List<Object[]> currencies = new ArrayList<Object[]>();
currencies.add(new Object[] {"USD"});
currencies.add(new Object[] {"GBP"});
currencies.add(new Object[] {"JPY"});
CURRENCIES = Collections.unmodifiableList(currencies);
}
/** Plugin type */
public static final String TYPE = "REF_CCY";
/** Constructor */
public ReferenceCurrencyDimension(String name, int ordinal, Properties properties, Set<String> measureGroups) {
super(name, ordinal, properties, measureGroups);
}
@Override
public Object getDefaultDiscriminator(int levelOrdinal) { return DEFAULT_CURRENCY; }
@Override
public Collection<Object[]> buildDiscriminatorPaths() { return CURRENCIES; }
@Override
public int getLevelsCount() { return 1; }
@Override
public String getLevelName(int levelOrdinal) {
return levelOrdinal == 0 ? "Currency" : super.getLevelName(levelOrdinal);
}
@Override
public String getType() { return TYPE; }
}
Then the post processor itself: a customized dynamic aggregation post processor, modified to handle the analysis dimension and output the same aggregate several times, once per reference currency.
package com.quartetfs.pivot.sandbox.postprocessor.impl;
import java.util.List;
import java.util.Properties;
import com.quartetfs.biz.pivot.IActivePivot;
import com.quartetfs.biz.pivot.ILocation;
import com.quartetfs.biz.pivot.ILocationPattern;
import com.quartetfs.biz.pivot.aggfun.IAggregationFunction;
import com.quartetfs.biz.pivot.cellset.ICellSet;
import com.quartetfs.biz.pivot.cube.hierarchy.IDimension;
import com.quartetfs.biz.pivot.cube.hierarchy.axis.IAxisMember;
import com.quartetfs.biz.pivot.impl.Location;
import com.quartetfs.biz.pivot.postprocessing.impl.ADynamicAggregationPostProcessor;
import com.quartetfs.biz.pivot.postprocessing.impl.ADynamicAggregationProcedure;
import com.quartetfs.biz.pivot.query.IQueryCache;
import com.quartetfs.biz.pivot.query.aggregates.IAggregatesRetriever;
import com.quartetfs.biz.pivot.query.aggregates.RetrievalException;
import com.quartetfs.fwk.QuartetException;
import com.quartetfs.fwk.QuartetExtendedPluginValue;
import com.quartetfs.pivot.sandbox.service.impl.ForexService;
import com.quartetfs.tech.type.IDataType;
import com.quartetfs.tech.type.impl.DoubleDataType;
/**
* Forex post processor with two features:
* <ul>
* <li>Dynamically aggregates amounts in their native currencies into reference currency
* <li>Applies several reference currencies, exploded along an analysis dimension.
* </ul>
*
* @author Quartet FS
*/
@QuartetExtendedPluginValue(interfaceName = "com.quartetfs.biz.pivot.postprocessing.IPostProcessor", key = ForexPostProcessor.TYPE)
public class ForexPostProcessor extends ADynamicAggregationPostProcessor<Double> {
/** serialVersionUID */
private static final long serialVersionUID = 15874126988574L;
/** post processor plugin type */
public final static String TYPE = "FOREX";
/** Post processor return type */
private static final IDataType<Double> DATA_TYPE = new DoubleDataType();
/** Ordinal of the native currency dimension */
protected int nativeCurrencyDimensionOrdinal;
/** Ordinal of the native currency level */
protected int nativeCurrencyLevelOrdinal;
/** Ordinal of the reference currencies dimension */
protected int referenceCurrenciesOrdinal;
/** forex service*/
private ForexService forexService;
/** constructor */
public ForexPostProcessor(String name, IActivePivot pivot) {
super(name, pivot);
}
/** Don't forget to inject the Forex service into the post processor */
public void setForexService(ForexService forexService) {
this.forexService = forexService;
}
/** post processor initialization */
@Override
public void init(Properties properties) throws QuartetException {
super.init(properties);
nativeCurrencyDimensionOrdinal = leafLevelsOrdinals.get(0)[0];
nativeCurrencyLevelOrdinal = leafLevelsOrdinals.get(0)[1];
IDimension referenceCurrenciesDimension = getDimension("ReferenceCurrencies");
referenceCurrenciesOrdinal = referenceCurrenciesDimension.getOrdinal();
}
/**
* Handling of the analysis dimension:<br>
* Before retrieving leaves, wildcard the reference currencies dimension.
*/
protected ICellSet retrieveLeaves(ILocation location, IAggregatesRetriever retriever) throws RetrievalException {
ILocation baseLocation = location;
if(location.getLevelDepth(referenceCurrenciesOrdinal-1) > 0) {
Object[][] array = location.arrayCopy();
array[referenceCurrenciesOrdinal-1][0] = null; // wildcard
baseLocation = new Location(array);
}
return super.retrieveLeaves(baseLocation, retriever);
}
/**
* Perform the evaluation of the post processor on a leaf (as defined in the properties).
* Here the leaf level is the UnderlierCurrency level in the Underlyings dimension.
*/
@Override
protected Double doLeafEvaluation(ILocation leafLocation, Object[] underlyingMeasures) throws QuartetException {
// Extract the native and reference currencies from the evaluated location
String currency = (String) leafLocation.getCoordinate(nativeCurrencyDimensionOrdinal-1, nativeCurrencyLevelOrdinal);
String refCurrency = (String) leafLocation.getCoordinate(referenceCurrenciesOrdinal-1, 0);
// Retrieve the measure in the native currency
double nativeAmount = (Double) underlyingMeasures[0];
// If currency is reference currency or measureNative is equal to 0.0 no need to convert
if ((currency.equals(refCurrency)) || (nativeAmount == .0) ) return nativeAmount;
// Retrieve the rate and rely on the IQueryCache
// in order to retrieve the same rate for the same currency for our query
IQueryCache queryCache = pivot.getContext().get(IQueryCache.class);
Double rate = (Double) queryCache.get(currency + "_" + refCurrency);
if(rate == null) {
Double rateRetrieved = forexService.retrieveQuotation(currency, refCurrency);
Double rateCached = (Double) queryCache.putIfAbsent(currency + "_" + refCurrency, rateRetrieved);
rate = rateCached == null ? rateRetrieved : rateCached;
}
// Compute equivalent in reference currency
return rate == null ? nativeAmount : nativeAmount * rate;
}
@Override
protected IDataType<Double> getDataType() { return DATA_TYPE; }
/** @return the type of this post processor, within the post processor extended plugin. */
@Override
public String getType() { return TYPE; }
/**
* @return our own custom dynamic aggregation procedure,
* so that we can inject our business logic.
*/
protected DynamicAggregationProcedure createProcedure(ICellSet cellSet, IAggregationFunction aggregationFunction, ILocationPattern pattern) {
return new DynamicAggregationProcedure(cellSet, aggregationFunction, pattern);
}
/**
* Custom dynamic aggregation procedure.<br>
* When the procedure is executed over a leaf location,
* we produce several aggregates instead of only one:
* one aggregate for each of the visible reference currencies.
*/
protected class DynamicAggregationProcedure extends ADynamicAggregationProcedure<Double> {
protected DynamicAggregationProcedure(ICellSet cellSet, IAggregationFunction aggregationFunction, ILocationPattern pattern) {
super(ForexPostProcessor.this, aggregationFunction, cellSet, pattern);
}
/**
* Execute the procedure over one row of the leaf cell set.
* We compute one aggregate for each of the reference currencies.
*/
@Override
public boolean execute(ILocation location, int rowId, Object[] measures) {
if(location.getLevelDepth(referenceCurrenciesOrdinal-1) > 0) {
// Lookup the visible reference currencies
IDimension referenceCurrenciesDimension = pivot.getDimensions().get(referenceCurrenciesOrdinal);
List<IAxisMember> referenceCurrencies = (List<IAxisMember>) referenceCurrenciesDimension.retrieveMembers(0);
for(IAxisMember member : referenceCurrencies) {
Object[][] array = location.arrayCopy();
array[referenceCurrenciesOrdinal-1][0] = member.getDiscriminator();
ILocation loc = new Location(array);
super.execute(loc, rowId, measures);
}
return true;
} else {
return super.execute(location, rowId, measures);
}
}
@Override
protected Double doLeafEvaluation(ILocation location, Object[] measures) throws QuartetException {
return ForexPostProcessor.this.doLeafEvaluation(location, measures);
}
}
}
In the description of your cube, the analysis dimension and the post processor would be exposed like this:
...
<dimension name="ReferenceCurrencies" pluginKey="REF_CCY" />
...
<measure name="cross" isIntrospectionMeasure="false">
<postProcessor pluginKey="FOREX">
<properties>
<entry key="id" value="pv.SUM" />
<entry key="underlyingMeasures" value="pv.SUM" />
<entry key="leafLevels" value="UnderlierCurrency#Underlyings" />
</properties>
</postProcessor>
</measure>
...

Analysis dimensions bring so much complexity that they should be considered apart from the other features of your cube. One way to handle your issue is then:
Add a first measure expanding properly along the analysis dimension. In your case, it will simply copy the underlying measure along ReferenceCurrency, optionally doing the FX conversion. This measure may be used as the underlying of several measures.
Add a second measure, based on usual dynamic aggregation. This second implementation is very simple as it does not know there is an analysis dimension. A rough configuration sketch of this two-measure approach follows.
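As a sketch of what this could look like in the cube description (the plugin keys FX_EXPAND and DYNAMIC_FX_SUM, the measure names, and the property layout are made up for illustration and simply mirror the snippet above; they are not standard ActivePivot artifacts):
<measure name="pv.REF_CCY" isIntrospectionMeasure="false">
<postProcessor pluginKey="FX_EXPAND">
<properties>
<!-- copies pv.SUM along the ReferenceCurrencies analysis dimension -->
<entry key="underlyingMeasures" value="pv.SUM" />
</properties>
</postProcessor>
</measure>
<measure name="pv.REF_CCY.SUM" isIntrospectionMeasure="false">
<postProcessor pluginKey="DYNAMIC_FX_SUM">
<properties>
<!-- plain dynamic aggregation: converts each native currency into the reference currency of the current cell, then sums -->
<entry key="underlyingMeasures" value="pv.REF_CCY" />
<entry key="leafLevels" value="UnderlierCurrency@Underlyings" />
</properties>
</postProcessor>
</measure>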

Related

How to combine different mono and use the combined result with error handling?

I have a scenario where I need to use different Monos which could return errors, and I want to set the corresponding map values to null if an error is returned.
Ex:
Mono<A> a=Some api call;
Mono<A> b=Some api giving error;
Mono<A> c=Some api call;
Now I want to put the resulting responses into a map:
Map<String,A> m=new HashMap<>();
m.put("a",a);
m.put("b",null);
m.put("c",c);
Can anyone help with how to do all this in a reactive, non-blocking way?
I tried zip but it will not execute if any of the APIs returns an error, or if I use onErrorReturn(null).
Thanks in advance
To solve your problems, you will have to use some tricks. The problem is that:
Giving zip an empty Mono, or a Mono that ends in error, cancels the whole zip operation (source: Mono#zip javadoc)
Reactive streams do not allow null values (source: Reactive Streams spec, table 2: Subscribers, bullet 13)
Also, note that putting a null value in a hash map is the same as overwriting any previous value associated with the key (which matters in case you're updating an existing map).
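To make the first point concrete, here is a tiny illustration (not part of the original answer; the names and the exception are arbitrary). As soon as one source fails, the zipped Mono errors instead of emitting a partial tuple:
Mono<String> ok = Mono.just("A");
Mono<String> ko = Mono.error(new IllegalStateException("boom"));
Mono.zip(ok, ko)
    .doOnError(e -> System.out.println("zip failed: " + e.getMessage())) // prints "zip failed: boom"
    .onErrorResume(e -> Mono.empty())
    .blockOptional(); // Optional.empty(): no combined value was ever produced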
Now, to bypass your problem, you can add an abstraction layer, and wrap your values in domain objects.
You can have an object that represents a query, another that represents a valid result, and a last one that mirrors an error.
With that, you can design publishers that always succeed with non-null values.
That's a technique used a lot in functional programming: common errors are part of the (one possible) result value.
Now, let's see an example that creates a new Map from multiple Monos:
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.time.Duration;
import java.util.Map;
public class BypassMonoError {
/**
* An object identified by a key. It serves to track which key to associate to computed values
* @param <K> Type of the key
*/
static class Identified<K> {
protected final K id;
Identified(K id) {
this.id = id;
}
public K getId() {
return id;
}
}
/**
* Describe the result value of an operation, along with the key associated to it.
*
* @param <K> Type of the identifier of the result
* @param <V> Value type
*/
static abstract class Result<K, V> extends Identified<K> {
Result(K id) {
super(id);
}
/**
*
* @return Computed value on success, or null if the operation has failed. Note that here, we cannot tell
* a success returning a null value apart from an error
*/
abstract V getOrNull();
}
static final class Success<K, V> extends Result<K, V> {
private final V value;
Success(K id, V value) {
super(id);
this.value = value;
}
@Override
V getOrNull() {
return value;
}
}
static final class Error<K, V> extends Result<K, V> {
private final Exception error;
Error(K id, Exception error) {
super(id);
this.error = error;
}
@Override
V getOrNull() {
return null;
}
public Exception getError() {
return error;
}
}
/**
* A request that can asynchronously generate a result for the associated identifier.
*/
static class Query<K, V> extends Identified<K> {
private final Mono<V> worker;
Query(K id, Mono<V> worker) {
super(id);
this.worker = worker;
}
/**
* @return The operator that computes the result value. Note that any error is silently wrapped in an
* {@link Error empty result with error metadata}.
*/
public Mono<Result<K, V>> runCatching() {
return worker.<Result<K, V>>map(success -> new Success<>(id, success))
.onErrorResume(Exception.class, error -> Mono.just(new Error<K, V>(id, error)));
}
}
public static void main(String[] args) {
final Flux<Query<String, String>> queries = Flux.just(
new Query("a", Mono.just("A")),
new Query("b", Mono.error(new Exception("B"))),
new Query("c", Mono.delay(Duration.ofSeconds(1)).map(v -> "C"))
);
final Flux<Result<String, String>> results = queries.flatMap(query -> query.runCatching());
final Map<String, String> myMap = results.collectMap(Result::getId, Result::getOrNull)
.block();
for (Map.Entry<String, String> entry : myMap.entrySet()) {
System.out.printf("%s -> %s%n", entry.getKey(), entry.getValue());
}
}
}
Note: In the above example, we silently ignore any error that occurred. However, when consuming the flux, you can test whether a result is an error, and if it is, you are free to design your own error management (log, fail fast, send to another flux, etc.).
This outputs:
a -> A
b -> null
c -> C
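If you only have a small, fixed number of sources, a lighter variation on the same idea (a sketch, not part of the original answer) is to wrap each value in an Optional, so that zip never sees an error or an empty Mono:
import reactor.core.publisher.Mono;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ZipWithOptionals {
    // Wrap a Mono so it always succeeds: errors and empty completions become Optional.empty().
    static <T> Mono<Optional<T>> safe(Mono<T> source) {
        return source.map(Optional::of)
                .onErrorReturn(Optional.empty())
                .defaultIfEmpty(Optional.empty());
    }

    public static void main(String[] args) {
        Mono<String> a = Mono.just("A");
        Mono<String> b = Mono.error(new Exception("B"));
        Mono<String> c = Mono.just("C");

        Map<String, String> m = Mono.zip(safe(a), safe(b), safe(c))
                .map(tuple -> {
                    Map<String, String> map = new HashMap<>();
                    map.put("a", tuple.getT1().orElse(null));
                    map.put("b", tuple.getT2().orElse(null));
                    map.put("c", tuple.getT3().orElse(null));
                    return map;
                })
                .block();

        System.out.println(m); // typically prints {a=A, b=null, c=C}
    }
}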

Save a list in a property in an entity in xodus

I have not found how to save a general list of primitive types, e.g. ints, or strings, in a property of an entity. I might have missed something obvious...
https://github.com/JetBrains/xodus/wiki/Entity-Stores described that "only Java primitive types, Strings, and ComparableSet values can be used by default".
It seems it would not be hard to convert an Iterable into a ComparableSet. However, it is a Set, not a list.
I will take a look at PersistentEntityStore.registerCustomPropertyType() to see if that helps. It just feels wrong to do that just to save a list of integers.
Links seemed to be able to serve as a way of saving a list of Entity objects. But it seems there is no addProperty() counterpart to addLink().
I would appreciate it if someone could share a way or a workaround for this, or explain why this is not supported.
Thanks
As mentioned in the comments, one workaround I came up with was to create a ComparableList, by adapting code from ComparableSet.
The idea was to make a list that can be converted to and from an ArrayByteIterable, and to register it with .registerCustomPropertyType(). To do that, two classes are needed: ComparableList and ComparableListBinding. I'm sharing the iteration I used at the bottom. By the way, I made them immutable, in contrast to the mutable ComparableSet. The newly implemented type should be registered once, in a transaction on the store, before it is used.
That should allow you to store and retrieve a list. However, the items in a ComparableList would not get indexed as they would be when saved in a ComparableSet -- there is some special treatment for ComparableSet in the entity store implementation. So, without modifying the library, indexing would work only with hacks like creating another property just to index the values.
I was considering implementing a different entity store that could better support lists, on top of the Xodus key-value store, bypassing the Xodus entity store entirely. That might be a better solution to the list issue we are talking about here.
ComparableList:
#SuppressWarnings("unchecked")
public class ComparableList<T extends Comparable<T>> implements Comparable<ComparableList<T>>,
Iterable<T> {
#Nonnull
private final ImmutableList<T> list;
public ComparableList(#Nonnull final Iterable<T> iterable) {
list = ImmutableList.copyOf(iterable);
}
#Override
public int compareTo(#Nonnull final ComparableList<T> other) {
final Iterator<T> thisIt = list.iterator();
final Iterator<T> otherIt = other.list.iterator();
while (thisIt.hasNext() && otherIt.hasNext()) {
final int cmp = thisIt.next().compareTo(otherIt.next());
if (cmp != 0) {
return cmp;
}
}
if (thisIt.hasNext()) {
return 1;
}
if (otherIt.hasNext()) {
return -1;
}
return 0;
}
@NotNull
@Override
public Iterator<T> iterator() {
return list.iterator();
}
@Nullable
public Class<T> getItemClass() {
final Iterator<T> it = list.iterator();
return it.hasNext() ? (Class<T>) it.next().getClass() : null;
}
@Override
public String toString() {
return list.toString();
}
}
ComparableListBinding:
@SuppressWarnings({"unchecked", "rawtypes"})
public class ComparableListBinding extends ComparableBinding {
public static final ComparableListBinding INSTANCE = new ComparableListBinding();
private ComparableListBinding() {}
@Override
public ComparableList readObject(@NotNull final ByteArrayInputStream stream) {
final int valueTypeId = stream.read() ^ 0x80;
final ComparableBinding itemBinding = ComparableValueType.getPredefinedBinding(valueTypeId);
final ImmutableList.Builder<Comparable> builder = ImmutableList.builder();
while (stream.available() > 0) {
builder.add(itemBinding.readObject(stream));
}
return new ComparableList(builder.build());
}
@Override
public void writeObject(@NotNull final LightOutputStream output,
@NotNull final Comparable object) {
final ComparableList<? extends Comparable> list = (ComparableList) object;
final Class itemClass = list.getItemClass();
if (itemClass == null) {
throw new ExodusException("Attempt to write empty ComparableList");
}
final ComparableValueType type = ComparableValueType.getPredefinedType(itemClass);
output.writeByte(type.getTypeId());
final ComparableBinding itemBinding = type.getBinding();
list.forEach(o -> itemBinding.writeObject(output, o));
}
/**
* De-serializes {@linkplain ByteIterable} entry to a {@code ComparableList} value.
*
* @param entry {@linkplain ByteIterable} instance
* @return de-serialized value
*/
public static ComparableList entryToComparableList(@NotNull final ByteIterable entry) {
return (ComparableList) INSTANCE.entryToObject(entry);
}
/**
* Serializes {@code ComparableList} value to the {@linkplain ArrayByteIterable} entry.
*
* @param object value to serialize
* @return {@linkplain ArrayByteIterable} entry
*/
public static ArrayByteIterable comparableSetToEntry(@NotNull final ComparableList object) {
return INSTANCE.objectToEntry(object);
}
}
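For completeness, here is a rough sketch (not part of the original answer) of how the custom type might be registered and then used; the store path, entity type, and property name are made up, and ComparableList/ComparableListBinding are assumed to be the classes above:
import java.util.Arrays;
import jetbrains.exodus.entitystore.Entity;
import jetbrains.exodus.entitystore.PersistentEntityStore;
import jetbrains.exodus.entitystore.PersistentEntityStores;

public class ComparableListDemo {
    public static void main(String[] args) {
        PersistentEntityStore store = PersistentEntityStores.newInstance("/tmp/demo-store");
        try {
            // Register the custom property type once, inside a transaction, before using it.
            store.executeInTransaction(txn ->
                    store.registerCustomPropertyType(txn, ComparableList.class, ComparableListBinding.INSTANCE));

            // Write a list-valued property.
            store.executeInTransaction(txn -> {
                Entity user = txn.newEntity("User");
                user.setProperty("favoriteNumbers", new ComparableList<>(Arrays.asList(4, 8, 15)));
            });

            // Read it back.
            store.executeInReadonlyTransaction(txn -> {
                for (Entity user : txn.getAll("User")) {
                    System.out.println(user.getProperty("favoriteNumbers"));
                }
            });
        } finally {
            store.close();
        }
    }
}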

How to Batch By N Elements in Streaming Pipeline With Small Bundles?

I've implemented batching by N elements as described in this answer:
Can datastore input in google dataflow pipeline be processed in a batch of N entries at a time?
package com.example.dataflow.transform;
import com.example.dataflow.event.ClickEvent;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.windowing.GlobalWindow;
import org.joda.time.Instant;
import java.util.ArrayList;
import java.util.List;
public class ClickToClicksPack extends DoFn<ClickEvent, List<ClickEvent>> {
public static final int BATCH_SIZE = 10;
private List<ClickEvent> accumulator;
@StartBundle
public void startBundle() {
accumulator = new ArrayList<>(BATCH_SIZE);
}
@ProcessElement
public void processElement(ProcessContext c) {
ClickEvent clickEvent = c.element();
accumulator.add(clickEvent);
if (accumulator.size() >= BATCH_SIZE) {
c.output(accumulator);
accumulator = new ArrayList<>(BATCH_SIZE);
}
}
@FinishBundle
public void finishBundle(FinishBundleContext c) {
if (accumulator.size() > 0) {
ClickEvent clickEvent = accumulator.get(0);
long time = clickEvent.getClickTimestamp().getTime();
c.output(accumulator, new Instant(time), GlobalWindow.INSTANCE);
}
}
}
But when I run the pipeline in streaming mode there are a lot of batches with just 1 or 2 elements. As I understand it, that's because of the small bundle size. After running for a day, the average number of elements per batch is roughly 4. I really need it to be closer to 10 for better performance of the next steps.
Is there a way to control the bundle size?
Or should I use the "GroupIntoBatches" transform for this purpose? In this case it's not clear to me what should be selected as the key.
UPDATE:
Is it a good idea to use the Java thread id or the VM hostname as a key for the "GroupIntoBatches" transform?
I've ended up writing a composite transform with "GroupIntoBatches" inside.
The following answer contains recommendations regarding key selection:
https://stackoverflow.com/a/44956702/4888849
In my current implementation I'm using random keys to achieve parallelism, and I'm windowing events in order to emit results regularly even if there are fewer than BATCH_SIZE events for a single key.
package com.example.dataflow.transform;
import com.example.dataflow.event.ClickEvent;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;
import java.util.Random;
/**
* Batch clicks into packs of BATCH_SIZE size
*/
public class ClickToClicksPack extends PTransform<PCollection<ClickEvent>, PCollection<Iterable<ClickEvent>>> {
public static final int BATCH_SIZE = 10;
// Define window duration.
// After window's end - elements are emitted even if there are less then BATCH_SIZE elements
public static final int WINDOW_DURATION_SECONDS = 1;
private static final int DEFAULT_SHARDS_NUMBER = 20;
// Determine possible parallelism level
private int shardsNumber = DEFAULT_SHARDS_NUMBER;
public ClickToClicksPack() {
super();
}
public ClickToClicksPack(int shardsNumber) {
super();
this.shardsNumber = shardsNumber;
}
@Override
public PCollection<Iterable<ClickEvent>> expand(PCollection<ClickEvent> input) {
return input
// assign keys, as "GroupIntoBatches" works only with key-value pairs
.apply(ParDo.of(new AssignRandomKeys(shardsNumber)))
.apply(Window.into(FixedWindows.of(Duration.standardSeconds(WINDOW_DURATION_SECONDS))))
.apply(GroupIntoBatches.ofSize(BATCH_SIZE))
.apply(ParDo.of(new ExtractValues()));
}
/**
* Assigns to clicks random integer between zero and shardsNumber
*/
private static class AssignRandomKeys extends DoFn<ClickEvent, KV<Integer, ClickEvent>> {
private int shardsNumber;
private Random random;
AssignRandomKeys(int shardsNumber) {
super();
this.shardsNumber = shardsNumber;
}
@Setup
public void setup() {
random = new Random();
}
@ProcessElement
public void processElement(ProcessContext c) {
ClickEvent clickEvent = c.element();
KV<Integer, ClickEvent> kv = KV.of(random.nextInt(shardsNumber), clickEvent);
c.output(kv);
}
}
/**
* Extract values from KV
*/
private static class ExtractValues extends DoFn<KV<Integer, Iterable<ClickEvent>>, Iterable<ClickEvent>> {
@ProcessElement
public void processElement(ProcessContext c) {
KV<Integer, Iterable<ClickEvent>> kv = c.element();
c.output(kv.getValue());
}
}
}
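For reference, applying the composite transform could look roughly like this (a sketch only; the clicks collection and the downstream DoFn are assumed, not taken from the original post):
// "clicks" is assumed to exist already, e.g. read from Pub/Sub and parsed into ClickEvent objects.
PCollection<Iterable<ClickEvent>> batches =
        clicks.apply("BatchClicks", new ClickToClicksPack());

// Downstream step that works on whole batches instead of single events.
batches.apply("HandleBatch", ParDo.of(new DoFn<Iterable<ClickEvent>, Void>() {
    @ProcessElement
    public void processElement(ProcessContext c) {
        for (ClickEvent click : c.element()) {
            // ... bulk-write, or call an external service once per batch, etc.
        }
    }
}));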

Displaying Many Jira issues using Issue Navigator

My question is similar to "Displaying Jira issues using Issue Navigator", but my concern is that sometimes my list of issues is quite long, and providing the user with a link to the Issue Navigator only works if my link is shorter than the maximum URL length.
Is there another way? Cookies? POST data? Perhaps programmatically creating and sharing a filter on the fly, and returning a link to the Issue Navigator that uses this filter? (But at some point I'd want to delete these filters so I don't have so many lying around.)
I do have the JQL query for getting the same list of issues, but it takes a very (very) long time to run and my code has already done the work of finding out what the result is -- I don't want the user to wait twice for the same thing (once while I'm generating my snazzy graphical servlet view and a second time when they want to see the same results in the Issue Navigator).
The easiest way to implement this is to ensure that the list of issues is cached somewhere within one of your application components. Each separate list of issues should be identified by its own unique ID (the magicKey below) that you define yourself.
You would then write your own JQL function that looks up your pre-calculated list of issues by that magic key, which then converts the list of issues into the format that the Issue Navigator requires.
The buildUriEncodedJqlQuery() method can be used to create such an example JQL query dynamically. For example, if the magic key is 1234, it would yield this JQL query: issue in myJqlFunction(1234).
You'd then feed the user a URL that looks like this:
String url = "/secure/IssueNavigator.jspa?mode=hide&reset=true&jqlQuery=" + MyJqlFunction.buildUriEncodedJqlQuery(magicKey);
The end result is that the user will be placed in the Issue Navigator looking at exactly the list of issues your code has provided.
This code also specifically prevents the user from saving your JQL function as part of a saved filter, since it's assumed that the issue cache in your application will not be permanent. If that's not correct, you will want to empty out the sanitiseOperand part.
In atlassian-plugin.xml:
<jql-function key="myJqlFunction" name="My JQL Function"
class="com.mycompany.MyJqlFunction">
</jql-function>
JQL function:
import com.atlassian.crowd.embedded.api.User;
import com.atlassian.jira.JiraDataType;
import com.atlassian.jira.JiraDataTypes;
import com.atlassian.jira.jql.operand.QueryLiteral;
import com.atlassian.jira.jql.query.QueryCreationContext;
import com.atlassian.jira.plugin.jql.function.ClauseSanitisingJqlFunction;
import com.atlassian.jira.plugin.jql.function.JqlFunction;
import com.atlassian.jira.plugin.jql.function.JqlFunctionModuleDescriptor;
import com.atlassian.jira.util.MessageSet;
import com.atlassian.jira.util.MessageSetImpl;
import com.atlassian.jira.util.NotNull;
import com.atlassian.query.clause.TerminalClause;
import com.atlassian.query.operand.FunctionOperand;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class MyJqlFunction implements JqlFunction, ClauseSanitisingJqlFunction
{
private static final String JQL_FUNCTION_NAME = "myJqlFunctionName";
private static final int JQL_FUNCTION_MIN_ARG_COUNT = 1;
private static final int JQL_FUNCTION_MAGIC_KEY_INDEX = 0;
public MyJqlFunction()
{
// inject your app's other components here
}
@Override
public void init(@NotNull JqlFunctionModuleDescriptor moduleDescriptor)
{
}
@Override
public JiraDataType getDataType()
{
return JiraDataTypes.ISSUE;
}
@Override
public String getFunctionName()
{
return JQL_FUNCTION_NAME;
}
@Override
public int getMinimumNumberOfExpectedArguments()
{
return JQL_FUNCTION_MIN_ARG_COUNT;
}
@Override
public boolean isList()
{
return true;
}
/**
* This function generates a URL-escaped JQL query that corresponds to the supplied magic key.
*
* @param magicKey
* @return
*/
public static String buildUriEncodedJqlQuery(String magicKey)
{
return "issue%20in%20" + JQL_FUNCTION_NAME + "(%22"
+ magicKey + "%22%)";
}
@Override
public List<QueryLiteral> getValues(@NotNull QueryCreationContext queryCreationContext,
@NotNull FunctionOperand operand,
@NotNull TerminalClause terminalClause)
{
User searchUser = queryCreationContext.getUser();
MessageSet messages = new MessageSetImpl();
List<QueryLiteral> values = internalGetValues(searchUser,
operand,
terminalClause,
messages,
!queryCreationContext.isSecurityOverriden());
return values;
}
private List<QueryLiteral> internalGetValues(@NotNull User searchUser,
@NotNull FunctionOperand operand,
@NotNull TerminalClause terminalClause,
@NotNull MessageSet messages,
@NotNull Boolean checkSecurity)
{
List<QueryLiteral> result = new ArrayList<QueryLiteral>();
if (searchUser==null)
{
// handle anon user
}
List<String> args = operand.getArgs();
if (wasSanitised(args))
{
messages.addErrorMessage("this function can't be used as part of a saved filter etc");
return result;
}
if (args.size() < getMinimumNumberOfExpectedArguments())
{
messages.addErrorMessage("too few arguments, etc");
return result;
}
final String magicKey = args.get(JQL_FUNCTION_MAGIC_KEY_INDEX);
// You need to implement this part yourself! This is where you use the supplied
// magicKey to fetch a list of issues from your own internal data source.
List<String> myIssueKeys = myCache.get(magicKey);
for (String id : myIssueKeys)
{
result.add(new QueryLiteral(operand, id));
}
return result;
}
@Override
public MessageSet validate(User searcher, @NotNull FunctionOperand operand, @NotNull TerminalClause terminalClause)
{
MessageSet messages = new MessageSetImpl();
internalGetValues(searcher, operand, terminalClause, messages, true);
return messages;
}
@Override
public FunctionOperand sanitiseOperand(User paramUser, @NotNull FunctionOperand operand)
{
// prevent the user from saving this as a filter, since the results are presumed to be dynamic
return new FunctionOperand(operand.getName(), Arrays.asList(""));
}
private boolean wasSanitised(List<String> args)
{
return (args.size() == 0 || args.get(JQL_FUNCTION_MAGIC_KEY_INDEX).isEmpty());
}
}
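The myCache lookup above is intentionally left unimplemented in the answer. As a purely illustrative sketch (class and method names made up), the backing component could be as simple as an in-memory map keyed by the magic key; in a real plugin you would add expiry so stale entries don't pile up:
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical issue-list cache: the code that computes the issue list stores it here under a
 * freshly generated magic key, and MyJqlFunction reads it back by that key.
 */
public class IssueListCache {
    private final Map<String, List<String>> issuesByKey = new ConcurrentHashMap<>();

    public String put(List<String> issueIds) {
        String magicKey = UUID.randomUUID().toString();
        issuesByKey.put(magicKey, issueIds);
        return magicKey;
    }

    public List<String> get(String magicKey) {
        // Returning an empty list rather than null keeps the JQL function's loop simple.
        return issuesByKey.getOrDefault(magicKey, Collections.emptyList());
    }

    public void remove(String magicKey) {
        issuesByKey.remove(magicKey);
    }
}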

JSF 2.X UI Component use of ValueExpression

I am trying to build a simple component to understand how and why JSF 2.X works the way it does. I have been using the newer annotations and have been trying to piece together a clear example.
So I have built my component and deployed it in a xhtml file as follows:
<kal:day value="K.Day" title="Kalendar" model="#{kalendarDay}"/>
Then within the UIComponent I do the following:
ValueExpression ve = getValueExpression("model");
if (ve != null)
{
System.out.println("model expression "+ve.getExpressionString());
model = (KalendarDay) ve.getValue(getFacesContext().getELContext());
System.out.println("model "+model);
}
The expression "#{kalendarDay}" is correctly displayed indicating that the value has been successfully transmitted between the page and the component. However the evaluation of the expression results in "null".
This seems to indicate that the backing bean is unavailable at this point, although the page correctly validates and deploys. So I am 95% certain that the bean is there at run time.
So perhaps this is a phase thing? Should I be evaluating this in the decode of the renderer and setting the value in the attributes map? I am still a little confused about the combination of actual values and value expressions.
So my question is where should I fetch and evaluate the valueExpression for model and should I store the result of the evaluation in the UIComponent or should I simply evaluate it every time?
SSCCE files below. I think these are the only required files to demonstrate the problem.
Bean Interface -----
/**
*
*/
package com.istana.kalendar.fixture;
import java.util.Date;
/**
* @author User
*
*/
public interface KalendarDay
{
public Date getDate();
}
Bean Implementation ---
/**
*
*/
package com.istana.kalendar.session.wui;
import java.util.Calendar;
import java.util.Date;
import javax.ejb.Stateful;
import javax.inject.Named;
import com.istana.kalendar.fixture.KalendarDay;
/**
* @author User
*
*/
@Named ("kalendarDay")
@Stateful
public class KalKalendarDay
implements KalendarDay
{
private Calendar m_date = Calendar.getInstance();
/* (non-Javadoc)
* @see com.istana.kalendar.fixture.KalendarDay#getDate()
*/
@Override
public Date getDate()
{
return m_date.getTime();
}
}
UIComponent ---
/**
*
*/
package com.istana.kalendar.fixture.jsf;
import javax.el.ValueExpression;
import javax.faces.component.FacesComponent;
import javax.faces.component.UIOutput;
import com.istana.kalendar.fixture.KalendarDay;
/**
* @author User
*
*/
@FacesComponent (value=UIDay.COMPONENT_TYPE)
public class UIDay extends UIOutput
{
static final
public String COMPONENT_TYPE = "com.istana.kalendar.fixture.jsf.Day";
static final
public String COMPONENT_FAMILY = "com.istana.kalendar.fixture.jsf.Day";
private KalendarDay m_model;
private String m_title;
@Override
public String getRendererType()
{
return UIDayRenderer.RENDERER_TYPE;
}
@Override
public String getFamily()
{
return COMPONENT_FAMILY;
}
public KalendarDay getModel()
{
KalendarDay model = (KalendarDay) getStateHelper().eval("model");
System.out.println("model "+model);
return model;
}
public void setModel(KalendarDay model)
{
getStateHelper().put("model",model);
}
public String getTitle()
{
return (String) getStateHelper().eval("title");
}
public void setTitle(String title)
{
getStateHelper().put("title",title);
}
}
UIComponentRenderer ---
/**
*
*/
package com.istana.kalendar.fixture.jsf;
import java.io.IOException;
import javax.el.ValueExpression;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.context.ResponseWriter;
import javax.faces.render.FacesRenderer;
import javax.faces.render.Renderer;
import com.istana.kalendar.fixture.KalendarDay;
/**
* @author User
*
*/
@FacesRenderer (componentFamily = UIDay.COMPONENT_FAMILY
,rendererType = UIDayRenderer.RENDERER_TYPE
)
public class UIDayRenderer extends Renderer
{
static final
public String RENDERER_TYPE = "com.istana.kalendar.fixture.jsf.DayRenderer";
@Override
public void encodeBegin (FacesContext context,UIComponent component)
throws IOException
{
UIDay uic = (UIDay) component;
ResponseWriter writer = context.getResponseWriter();
writer.startElement("p", uic);
/*
* This is the call that triggers the println
*/
writer.write("Day - "+uic.getModel().getDate());
}
@Override
public void encodeEnd (FacesContext context,UIComponent component)
throws IOException
{
ResponseWriter writer = context.getResponseWriter();
writer.endElement("p");
writer.flush();
}
}
kalendar.taglib.xml ---
<facelet-taglib
id="kalendar"
xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-facelettaglibrary_2_0.xsd"
version="2.0"
>
<namespace>http://istana.com/kalendar</namespace>
<tag>
<tag-name>day</tag-name>
<component>
<component-type>com.istana.kalendar.fixture.jsf.Day</component-type>
</component>
</tag>
</facelet-taglib>
I'm not sure why it's null, but the symptoms indicate that #{kalendarDay} is being specified during view render time while you're trying to evaluate it during view build time.
So perhaps this is a phase thing? Should I be evaluating this in the decode of the renderer and setting the value in the attributes map? I am still a little confused about the combination of actual values and value expressions.
You should use the encodeXxx() methods of the component or the associated renderer (if any) to generate HTML based on the component's attributes/properties.
You should use the decode() method of the component or the associated renderer (if any) to set the component's attributes/properties based on HTTP request parameters which have been sent along with an HTML form submit.
So my question is where should I fetch and evaluate the valueExpression for model and should I store the result of the evaluation in the UIComponent or should I simply evaluate it every time?
Since JSF 2.x it's recommended to explicitly specify a getter and setter for component attributes which in turn delegate to UIComponent#getStateHelper().
public String getValue() {
return (String) getStateHelper().eval("value");
}
public void setValue(String value) {
getStateHelper().put("value", value);
}
public String getTitle() {
return (String) getStateHelper().eval("title");
}
public void setTitle(String title) {
getStateHelper().put("title", title);
}
public Object getModel() {
return getStateHelper().eval("model");
}
public void setModel(Object model) {
getStateHelper().put("model", model);
}
That's all you basically need (note that the getter and setter must exactly match the attribute name as per Javabeans specification). Then in the encodeXxx() method(s) just call getModel() to get (and evaluate) the value of the model attribute.
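To make that last step concrete, here is a minimal sketch of the renderer side (based on the UIDayRenderer posted in the question; only the surrounding class and imports are omitted):
@Override
public void encodeBegin(FacesContext context, UIComponent component) throws IOException {
    UIDay day = (UIDay) component;
    ResponseWriter writer = context.getResponseWriter();
    writer.startElement("p", day);

    // getModel() delegates to the StateHelper, which evaluates the "model" ValueExpression
    // against the current EL context at render time.
    KalendarDay model = (KalendarDay) day.getModel();
    writer.writeText("Day - " + (model != null ? model.getDate() : "no model"), null);
}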
