Grails limited table creation

I'd like to use the Grails feature for creating/updating database tables on a limited basis. Specifically, I'd like Grails to manage some tables, but not all.
Is there a way to limit the tables managed by Grails or is it an all or nothing proposition?

In general it's all or nothing since Grails uses Hibernate's HBM2DDL functionality. But you can intercept the process using a custom configuration subclass. Here's an example that extends GrailsAnnotationConfiguration and overrides all three SQL generation methods:
package com.yourcompany.yourapp;

import java.util.ArrayList;
import java.util.List;

import org.codehaus.groovy.grails.orm.hibernate.cfg.GrailsAnnotationConfiguration;
import org.hibernate.HibernateException;
import org.hibernate.dialect.Dialect;
import org.hibernate.tool.hbm2ddl.DatabaseMetadata;

public class MyConfiguration extends GrailsAnnotationConfiguration {

    private static final String[] IGNORE_NAMES = { "foo", "bar" };

    @Override
    public String[] generateDropSchemaScript(Dialect dialect) throws HibernateException {
        return prune(super.generateDropSchemaScript(dialect));
    }

    @Override
    public String[] generateSchemaCreationScript(Dialect dialect) throws HibernateException {
        return prune(super.generateSchemaCreationScript(dialect));
    }

    @Override
    public String[] generateSchemaUpdateScript(Dialect dialect, DatabaseMetadata databaseMetadata)
            throws HibernateException {
        return prune(super.generateSchemaUpdateScript(dialect, databaseMetadata));
    }

    private String[] prune(String[] script) {
        List<String> pruned = new ArrayList<String>();
        for (String command : script) {
            boolean ignore = false;
            for (String ignoreName : IGNORE_NAMES) {
                if (command.toLowerCase().contains(" table " + ignoreName.toLowerCase() + " ")) {
                    ignore = true;
                    break;
                }
            }
            if (!ignore) {
                pruned.add(command);
            }
        }
        return pruned.toArray(new String[pruned.size()]);
    }
}
You don't need to override all three, e.g. you could let creates go through but remove updates.
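For instance, a minimal sketch of a configuration that filters only the update script, leaving creates and drops untouched (same imports as the class above; the class name is just an example):
public class UpdateOnlyConfiguration extends GrailsAnnotationConfiguration {

    private static final String[] IGNORE_NAMES = { "foo", "bar" };

    // Creates and drops pass through unchanged; only the update script is filtered.
    @Override
    public String[] generateSchemaUpdateScript(Dialect dialect, DatabaseMetadata databaseMetadata)
            throws HibernateException {
        List<String> kept = new ArrayList<String>();
        for (String command : super.generateSchemaUpdateScript(dialect, databaseMetadata)) {
            boolean ignore = false;
            for (String ignoreName : IGNORE_NAMES) {
                if (command.toLowerCase().contains(" table " + ignoreName.toLowerCase() + " ")) {
                    ignore = true;
                    break;
                }
            }
            if (!ignore) {
                kept.add(command);
            }
        }
        return kept.toArray(new String[kept.size()]);
    }
}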
This gets registered in grails-app/conf/DataSource.groovy:
dataSource {
    pooled = true
    driverClassName = ...
    username = ...
    password = ...
    configClass = com.yourcompany.yourapp.MyConfiguration
}
Note that the class has to be written in Java because private methods in the base class clash with methods that Groovy adds to all Groovy classes.

Thanks for the fix, Steve. I'm reposting the fix above, formatted for Stack Overflow. When I first tried this I missed that the logic of the original if (!command.toLowerCase()...) is reversed in the fix to if (command.toLowerCase()...), which made it look like dbCreate = "create-drop" in my grails-app/conf/DataSource.groovy was having no effect. I even worked from the HTML source of this page to see the intended line breaks in the code, but still missed that vital bit of logic :-( Once I got it entered correctly it worked great.
private String[] prune(String[] script) {
    List<String> pruned = new ArrayList<String>();
    for (String command : script) {
        boolean ignore = false;
        for (String ignoreName : IGNORE_NAMES) {
            if (command.toLowerCase().contains(" table " + ignoreName.toLowerCase() + " ")) {
                ignore = true;
                break;
            }
        }
        if (!ignore) {
            pruned.add(command);
        }
    }
    return pruned.toArray(new String[pruned.size()]);
}

Related

How to configure Micronaut and Micrometer to write ILP directly to InfluxDB?

I have a Micronaut application that uses Micrometer to report metrics to InfluxDB with the micronaut-micrometer project. Currently it is using the Statsd Registry provided via the io.micronaut.configuration:micronaut-micrometer-registry-statsd dependency.
I would like to instead output metrics in Influx Line Protocol (ILP), but the micronaut-micrometer project does not offer an Influx Registry currently. I tried to work around this by importing the io.micrometer:micrometer-registry-influx dependency and configuring an InfluxMeterRegistry manually like this:
@Factory
public class MyMetricRegistryConfigurer implements MeterRegistryConfigurer {

    @Bean
    @Primary
    @Singleton
    public MeterRegistry getMeterRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String db() {
                return "metrics";
            }

            @Override
            public String get(String k) {
                return null; // accept the rest of the defaults
            }
        };
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }

    @Override
    public boolean supports(MeterRegistry meterRegistry) {
        return meterRegistry instanceof InfluxMeterRegistry;
    }
}
When the application runs, the metrics are exposed on my /metrics endpoint as I would expect, but nothing gets written to InfluxDB. I confirmed that my local InfluxDB accepts metrics at the expected localhost:8086/write?db=metrics endpoint using curl. Can anyone give me some pointers to get this working? I'm wondering if I need to manually define a reporter somewhere...
After playing around for a bit, I got this working with the following code:
@Factory
public class InfluxMeterRegistryFactory {

    @Bean
    @Singleton
    @Requires(property = MeterRegistryFactory.MICRONAUT_METRICS_ENABLED,
            value = StringUtils.TRUE, defaultValue = StringUtils.TRUE)
    @Requires(beans = CompositeMeterRegistry.class)
    public InfluxMeterRegistry getMeterRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String db() {
                return "metrics";
            }

            @Override
            public String get(String k) {
                return null; // accept the rest of the defaults
            }
        };
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }
}
I also noticed that an InfluxMeterRegistry will be available out of the box in micronaut-micrometer as of v1.2.0.

How to get JIRA-like change history using Hibernate Envers audit log?

I am trying to show JIRA like change history on UI. I am using Spring Data JPA and I have configured audit trail with Envers (v5.3.7). I can get list of all revisions using AuditQuery, for a particular entity by its primary key value.
Is there an easy way to calculate "delta" across revisions and identify properties that were changed? (With old and new value)
I have added the @Audited(withModifiedFlag = true) annotation to my entity class. It adds one more column to the <entity>_aud table for each property, indicating whether the property was changed. I am trying to figure out how to make use of these additional columns.
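For reference, when withModifiedFlag is enabled, Envers can filter on those extra columns directly via AuditEntity.property(...).hasChanged(). A rough sketch (the injected entityManager, the Ticket entity, and the "status" property are hypothetical names):
AuditReader reader = AuditReaderFactory.get(entityManager);

// Revisions of Ticket #42 in which the "status" property changed
List<?> revisionsWithStatusChange = reader.createQuery()
        .forRevisionsOfEntity(Ticket.class, false, true)
        .add(AuditEntity.id().eq(42L))
        .add(AuditEntity.property("status").hasChanged())
        .getResultList();
You would still need to load consecutive revisions and diff them yourself to show old and new values, which is essentially what the answers below do with an interceptor or event listener.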
If you need something like JIRA's change history, you have to build it on your own.
I would suggest that you use Hibernate Interceptors:
http://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#events
As you can see in the following example, you get the current and the previous state, so you can compute the delta and store it in your own changelog table:
public static class LoggingInterceptor extends EmptyInterceptor {
    @Override
    public boolean onFlushDirty(
            Object entity,
            Serializable id,
            Object[] currentState,
            Object[] previousState,
            String[] propertyNames,
            Type[] types) {
        LOGGER.debugv( "Entity {0}#{1} changed from {2} to {3}",
                entity.getClass().getSimpleName(),
                id,
                Arrays.toString( previousState ),
                Arrays.toString( currentState )
        );
        return super.onFlushDirty( entity, id, currentState,
                previousState, propertyNames, types
        );
    }
}
Here is my code
import java.util.Objects;

import javax.annotation.PostConstruct;
import javax.persistence.EntityManagerFactory;

import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.event.spi.PreUpdateEvent;
import org.hibernate.event.spi.PreUpdateEventListener;
import org.hibernate.internal.SessionFactoryImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class AuditListener implements PreUpdateEventListener {

    @Autowired
    private EntityManagerFactory entityManagerFactory;

    @PostConstruct
    private void init() {
        SessionFactoryImpl sessionFactory = entityManagerFactory.unwrap(SessionFactoryImpl.class);
        EventListenerRegistry registry = sessionFactory.getServiceRegistry().getService(EventListenerRegistry.class);
        // You can also add a listener for a specific entity type instead of the event group.
        // In my case I needed a global event listener.
        registry.getEventListenerGroup(EventType.PRE_UPDATE).appendListener(this);
    }

    @Override
    public boolean onPreUpdate(PreUpdateEvent event) {
        String[] propertyNames = event.getPersister().getPropertyNames();
        Object[] oldValues = event.getOldState();
        Object[] newValues = event.getState();
        for (int index = 0; index < propertyNames.length; index++) {
            String propertyName = propertyNames[index];
            Object oldValue = oldValues[index];
            Object newValue = newValues[index];
            // This is just sample code; use a null-safe comparison rather than reference equality
            boolean changed = !Objects.equals(oldValue, newValue);
            if (changed) {
                System.out.println("Audit log -> Property: " + propertyName + ", Old value: " + oldValue + ", New value: " + newValue);
                // Actual code that persists the audit log
                ...
                ...
            }
        }
        return false;
    }
}

Does Apache Beam support custom file names for its output?

While in a distributed processing environment it is common to use "part" file names such as "part-000", is it possible to write an extension of some sort to rename the individual output file names (such as a per window file name) of Apache Beam?
To do this, one might have to be able to assign a name for a window or infer a file name based on the window's content. I would like to know if such an approach is possible.
As to whether the solution should be streaming or batch, a streaming-mode example is preferable.
Yes, as suggested by jkff, you can achieve this using TextIO.write().to(FilenamePolicy).
Examples are below:
If you want to write output to a particular local file, you can use:
lines.apply(TextIO.write().to("/path/to/file.txt"));
Below is a simple way to write the output using a prefix. This example writes to Google Cloud Storage; you can use local or S3 paths instead.
public class MinimalWordCountJava8 {

    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.create();
        // In order to run your pipeline, you need to make following runner specific changes:
        //
        // CHANGE 1/3: Select a Beam runner, such as BlockingDataflowRunner
        // or FlinkRunner.
        // CHANGE 2/3: Specify runner-required options.
        // For BlockingDataflowRunner, set project and temp location as follows:
        //   DataflowPipelineOptions dataflowOptions = options.as(DataflowPipelineOptions.class);
        //   dataflowOptions.setRunner(BlockingDataflowRunner.class);
        //   dataflowOptions.setProject("SET_YOUR_PROJECT_ID_HERE");
        //   dataflowOptions.setTempLocation("gs://SET_YOUR_BUCKET_NAME_HERE/AND_TEMP_DIRECTORY");
        // For FlinkRunner, set the runner as follows. See {@code FlinkPipelineOptions}
        // for more details.
        //   options.as(FlinkPipelineOptions.class)
        //       .setRunner(FlinkRunner.class);

        Pipeline p = Pipeline.create(options);

        p.apply(TextIO.read().from("gs://apache-beam-samples/shakespeare/*"))
                .apply(FlatMapElements
                        .into(TypeDescriptors.strings())
                        .via((String word) -> Arrays.asList(word.split("[^\\p{L}]+"))))
                .apply(Filter.by((String word) -> !word.isEmpty()))
                .apply(Count.<String>perElement())
                .apply(MapElements
                        .into(TypeDescriptors.strings())
                        .via((KV<String, Long> wordCount) -> wordCount.getKey() + ": " + wordCount.getValue()))
                // CHANGE 3/3: The Google Cloud Storage path is required for outputting the results to.
                .apply(TextIO.write().to("gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX"));

        p.run().waitUntilFinish();
    }
}
This example code will give you more control on writing the output:
/**
 * A {@link FilenamePolicy} produces a base file name for a write based on metadata about the data
 * being written. This always includes the shard number and the total number of shards. For
 * windowed writes, it also includes the window and pane index (a sequence number assigned to each
 * trigger firing).
 */
protected static class PerWindowFiles extends FilenamePolicy {

    private final ResourceId prefix;

    public PerWindowFiles(ResourceId prefix) {
        this.prefix = prefix;
    }

    public String filenamePrefixForWindow(IntervalWindow window) {
        String filePrefix = prefix.isDirectory() ? "" : prefix.getFilename();
        // "formatter" is a date/time formatter for the window boundaries, defined elsewhere in the example
        return String.format(
                "%s-%s-%s", filePrefix, formatter.print(window.start()), formatter.print(window.end()));
    }

    @Override
    public ResourceId windowedFilename(int shardNumber,
                                       int numShards,
                                       BoundedWindow window,
                                       PaneInfo paneInfo,
                                       OutputFileHints outputFileHints) {
        IntervalWindow intervalWindow = (IntervalWindow) window;
        String filename =
                String.format(
                        "%s-%s-of-%s%s",
                        filenamePrefixForWindow(intervalWindow),
                        shardNumber,
                        numShards,
                        outputFileHints.getSuggestedFilenameSuffix());
        return prefix.getCurrentDirectory().resolve(filename, StandardResolveOptions.RESOLVE_FILE);
    }

    @Override
    public ResourceId unwindowedFilename(
            int shardNumber, int numShards, OutputFileHints outputFileHints) {
        throw new UnsupportedOperationException("Unsupported.");
    }
}
@Override
public PDone expand(PCollection<InputT> teamAndScore) {
    if (windowed) {
        teamAndScore
                .apply("ConvertToRow", ParDo.of(new BuildRowFn()))
                .apply(new WriteToText.WriteOneFilePerWindow(filenamePrefix));
    } else {
        teamAndScore
                .apply("ConvertToRow", ParDo.of(new BuildRowFn()))
                .apply(TextIO.write().to(filenamePrefix));
    }
    return PDone.in(teamAndScore.getPipeline());
}
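For completeness, wiring a policy like PerWindowFiles into TextIO looks roughly like this (an untested sketch: output stands for your PCollection<String>, the path is a placeholder, and a temp directory must be supplied when using a custom FilenamePolicy):
ResourceId prefix = FileBasedSink.convertToFileResourceIfPossible("gs://my-bucket/output/scores");

output.apply(TextIO.write()
        .to(new PerWindowFiles(prefix))
        .withTempDirectory(prefix.getCurrentDirectory())
        .withWindowedWrites()
        .withNumShards(3));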
Yes. Per documentation of TextIO:
If you want better control over how filenames are generated than the default policy allows, a custom FilenamePolicy can also be set using TextIO.Write.to(FilenamePolicy)
This is a perfectly valid example with Beam 2.1.0. You can call it on your data (a PCollection, e.g.):
import org.apache.beam.sdk.io.FileBasedSink.FilenamePolicy;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.fs.ResolveOptions.StandardResolveOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.transforms.display.DisplayData;

@SuppressWarnings("serial")
public class FilePolicyExample {

    public static void main(String[] args) {
        FilenamePolicy policy = new WindowedFilenamePolicy("somePrefix");

        // "data" is the PCollection you want to write, created earlier in your pipeline
        data.apply(TextIO.write().to("your_DIRECTORY")
                .withFilenamePolicy(policy)
                .withWindowedWrites()
                .withNumShards(4));
    }

    private static class WindowedFilenamePolicy extends FilenamePolicy {

        final String outputFilePrefix;

        WindowedFilenamePolicy(String outputFilePrefix) {
            this.outputFilePrefix = outputFilePrefix;
        }

        @Override
        public ResourceId windowedFilename(
                ResourceId outputDirectory, WindowedContext input, String extension) {
            String filename = String.format(
                    "%s-%s-%s-of-%s-pane-%s%s%s",
                    outputFilePrefix,
                    input.getWindow(),
                    input.getShardNumber(),
                    input.getNumShards() - 1,
                    input.getPaneInfo().getIndex(),
                    input.getPaneInfo().isLast() ? "-final" : "",
                    extension);
            return outputDirectory.resolve(filename, StandardResolveOptions.RESOLVE_FILE);
        }

        @Override
        public ResourceId unwindowedFilename(
                ResourceId outputDirectory, Context input, String extension) {
            throw new UnsupportedOperationException("Expecting windowed outputs only");
        }

        @Override
        public void populateDisplayData(DisplayData.Builder builder) {
            builder.add(DisplayData.item("fileNamePrefix", outputFilePrefix)
                    .withLabel("File Name Prefix"));
        }
    }
}
You can check https://beam.apache.org/releases/javadoc/2.3.0/org/apache/beam/sdk/io/FileIO.html for more information; look for the "File naming" section under "Writing files".
.apply(
        FileIO.<RootElement>write()
                .via(XmlIO
                        .sink(RootElement.class)
                        .withRootElement(ROOT_XML_ELEMENT)
                        .withCharset(StandardCharsets.UTF_8))
                .to(FILE_PATH)
                .withNaming((window, pane, numShards, shardIndex, compression) -> NEW_FILE_NAME));
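If a fixed prefix and suffix are enough, FileIO also ships a default naming helper; a minimal sketch (the bucket path is a placeholder and lines is assumed to be a PCollection<String>):
lines.apply(FileIO.<String>write()
        .via(TextIO.sink())
        .to("gs://my-bucket/output")
        .withNaming(FileIO.Write.defaultNaming("events", ".txt")));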

DynamicDestinations in Apache Beam

I have a PCollection<String>, say "X", that I need to dump into a BigQuery table.
The table destination and its schema are in a PCollection<TableRow>, say "Y".
How do I accomplish this in the simplest manner?
I tried extracting the table and schema from "Y" and saving them in static global variables (tableName and schema, respectively). But oddly, BigQueryIO.writeTableRows() always sees the variable tableName as null, although it does pick up the schema. I logged the values of those variables and can see that both are set.
Here is my pipeline code:
static String tableName;
static TableSchema schema;

PCollection<String> read = p.apply("Read from input file",
        TextIO.read().from(options.getInputFile()));

PCollection<TableRow> tableRows = p.apply(
        BigQueryIO.read().fromQuery(NestedValueProvider.of(
                options.getfilename(),
                new SerializableFunction<String, String>() {
                    @Override
                    public String apply(String filename) {
                        return "SELECT table,schema FROM `BigqueryTest.configuration` WHERE file='" + filename + "'";
                    }
                })).usingStandardSql().withoutValidation());

final PCollectionView<List<String>> dataView = read.apply(View.asList());

tableRows.apply("Convert data read from file to TableRow",
        ParDo.of(new DoFn<TableRow, TableRow>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                tableName = c.element().get("table").toString();
                String[] schemas = c.element().get("schema").toString().split(",");
                List<TableFieldSchema> fields = new ArrayList<>();
                for (int i = 0; i < schemas.length; i++) {
                    fields.add(new TableFieldSchema()
                            .setName(schemas[i].split(":")[0]).setType(schemas[i].split(":")[1]));
                }
                schema = new TableSchema().setFields(fields);
                // My code to convert data to TableRow format.
            }
        }).withSideInputs(dataView));

tableRows.apply("write to BigQuery",
        BigQueryIO.writeTableRows()
                .withSchema(schema)
                .to("ProjectID:DatasetID." + tableName)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Everything works fine; only the BigQueryIO.write operation fails with the error that TableId is null.
I also tried using a SerializableFunction and returning the value from there, but I still get null.
Here is the code that I tried for it:
tableRows.apply("write to BigQuery",
BigQueryIO.writeTableRows()
.withSchema(schema)
.to(new GetTable(tableName))
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
public static class GetTable implements SerializableFunction<String,String> {
String table;
public GetTable() {
this.table = tableName;
}
#Override
public String apply(String arg0) {
return "ProjectId:DatasetId."+table;
}
}
I also tried using DynamicDestinations but I get an error saying schema is not provided. Honestly I'm new to the concept of DynamicDestinations and I'm not sure that I'm doing it correctly.
Here is the code that I tried for it:
tableRows2.apply(BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, TableRow>() {
            private static final long serialVersionUID = 1L;

            @Override
            public TableDestination getTable(TableRow dest) {
                List<TableRow> list = sideInput(bqDataView); // bqDataView contains table and schema
                String table = list.get(0).get("table").toString();
                String tableSpec = "ProjectId:DatasetId." + table;
                String tableDescription = "";
                return new TableDestination(tableSpec, tableDescription);
            }

            public String getSideInputs(PCollectionView<List<TableRow>> bqDataView) {
                return null;
            }

            @Override
            public TableSchema getSchema(TableRow destination) {
                return schema; // schema is getting added from the global variable
            }

            @Override
            public TableRow getDestination(ValueInSingleWindow<TableRow> element) {
                return null;
            }
        }.getSideInputs(bqDataView)));
Please let me know what I'm doing wrong and which path I should take.
Thank You.
Part of the reason you're having trouble is the two stages of pipeline execution. First, the pipeline is constructed on your machine; this is when all of the applications of PTransforms occur. In your first example, this is when the following lines are executed:
BigQueryIO.writeTableRows()
        .withSchema(schema)
        .to("ProjectID:DatasetID." + tableName)
The code within a ParDo, however, runs when your pipeline executes, and it does so on many machines. So the following code runs much later than pipeline construction:
@ProcessElement
public void processElement(ProcessContext c) {
    tableName = c.element().get("table").toString();
    ...
    schema = new TableSchema().setFields(fields);
    ...
}
This means that neither the tableName nor the schema field will be set when the BigQueryIO sink is created.
Your idea to use DynamicDestinations is correct, but you need to move the code that actually generates the schema and the destination into that class, rather than relying on global variables that aren't available on all of the machines.
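A rough, untested sketch of what that could look like, reusing the bqDataView side input and the schema-parsing logic from the question (names and the project/dataset prefix are placeholders):
tableRows.apply("write to BigQuery",
        BigQueryIO.writeTableRows()
                .to(new DynamicDestinations<TableRow, String>() {

                    @Override
                    public List<PCollectionView<?>> getSideInputs() {
                        // Make the configuration side input available at execution time
                        return Collections.<PCollectionView<?>>singletonList(bqDataView);
                    }

                    @Override
                    public String getDestination(ValueInSingleWindow<TableRow> element) {
                        // Look the table name up when the pipeline runs, not when it is constructed
                        List<TableRow> config = sideInput(bqDataView);
                        return config.get(0).get("table").toString();
                    }

                    @Override
                    public TableDestination getTable(String table) {
                        return new TableDestination("ProjectId:DatasetId." + table, "");
                    }

                    @Override
                    public TableSchema getSchema(String table) {
                        List<TableRow> config = sideInput(bqDataView);
                        String[] schemas = config.get(0).get("schema").toString().split(",");
                        List<TableFieldSchema> fields = new ArrayList<>();
                        for (String s : schemas) {
                            fields.add(new TableFieldSchema()
                                    .setName(s.split(":")[0]).setType(s.split(":")[1]));
                        }
                        return new TableSchema().setFields(fields);
                    }
                })
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Note that with DynamicDestinations the schema comes from getSchema(), so .withSchema() is no longer needed.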

Displaying Many Jira issues using Issue Navigator

My question is similar to "Displaying Jira issues using Issue Navigator", but my concern is that sometimes my list of issues is quite long, and providing the user with a link to the Issue Navigator only works if my link is shorter than the maximum URL length.
Is there another way? Cookies? POST data? Perhaps programmatically creating and sharing a filter on the fly, and returning a link to the Issue Navigator that uses this filter? (But at some point I'd want to delete these filters so I don't have so many lying around.)
I do have the JQL query for getting the same list of issues, but it takes a very (very) long time to run and my code has already done the work of finding out what the result is -- I don't want the user to wait twice for the same thing (once while I'm generating my snazzy graphical servlet view and a second time when they want to see the same results in the Issue Navigator).
The easiest way to implement this is to ensure that the list of issues is cached somewhere within one of your application components. Each separate list of issues should be identified by its own unique ID (the magicKey below) that you define yourself.
You would then write your own JQL function that looks up your pre-calculated list of issues by that magic key, which then converts the list of issues into the format that the Issue Navigator requires.
The buildUriEncodedJqlQuery() method can be used to create such an example JQL query dynamically. For example, if the magic key is 1234, it would yield this JQL query: issue in myJqlFunction(1234).
You'd then feed the user a URL that looks like this:
String url = "/secure/IssueNavigator.jspa?mode=hide&reset=true&jqlQuery=" + MyJqlFunction.buildUriEncodedJqlQuery(magicKey);
The end result is that the user will be placed in the Issue Navigator looking at exactly the list of issues your code has provided.
This code also specifically prevents the user from saving your JQL function as part of a saved filter, since it's assumed that the issue cache in your application will not be permanent. If that's not correct, you will want to empty out the sanitiseOperand part.
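For instance, a pass-through sanitiseOperand that keeps the arguments (and therefore allows saved filters) could simply be:
@Override
public FunctionOperand sanitiseOperand(User paramUser, @NotNull FunctionOperand operand)
{
    // Keep the original arguments so the function can be saved as part of a filter
    return operand;
}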
In atlassian-plugin.xml:
<jql-function key="myJqlFunction" name="My JQL Function"
class="com.mycompany.MyJqlFunction">
</jql-function>
JQL function:
import com.atlassian.crowd.embedded.api.User;
import com.atlassian.jira.JiraDataType;
import com.atlassian.jira.JiraDataTypes;
import com.atlassian.jira.jql.operand.QueryLiteral;
import com.atlassian.jira.jql.query.QueryCreationContext;
import com.atlassian.jira.plugin.jql.function.ClauseSanitisingJqlFunction;
import com.atlassian.jira.plugin.jql.function.JqlFunction;
import com.atlassian.jira.plugin.jql.function.JqlFunctionModuleDescriptor;
import com.atlassian.jira.util.MessageSet;
import com.atlassian.jira.util.MessageSetImpl;
import com.atlassian.jira.util.NotNull;
import com.atlassian.query.clause.TerminalClause;
import com.atlassian.query.operand.FunctionOperand;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MyJqlFunction implements JqlFunction, ClauseSanitisingJqlFunction
{
    private static final String JQL_FUNCTION_NAME = "myJqlFunction";
    private static final int JQL_FUNCTION_MIN_ARG_COUNT = 1;
    private static final int JQL_FUNCTION_MAGIC_KEY_INDEX = 0;

    public MyJqlFunction()
    {
        // inject your app's other components here
    }

    @Override
    public void init(@NotNull JqlFunctionModuleDescriptor moduleDescriptor)
    {
    }

    @Override
    public JiraDataType getDataType()
    {
        return JiraDataTypes.ISSUE;
    }

    @Override
    public String getFunctionName()
    {
        return JQL_FUNCTION_NAME;
    }

    @Override
    public int getMinimumNumberOfExpectedArguments()
    {
        return JQL_FUNCTION_MIN_ARG_COUNT;
    }

    @Override
    public boolean isList()
    {
        return true;
    }

    /**
     * This function generates a URL-escaped JQL query that corresponds to the supplied magic key.
     *
     * @param magicKey
     * @return
     */
    public static String buildUriEncodedJqlQuery(String magicKey)
    {
        return "issue%20in%20" + JQL_FUNCTION_NAME + "(%22"
                + magicKey + "%22)";
    }

    @Override
    public List<QueryLiteral> getValues(@NotNull QueryCreationContext queryCreationContext,
                                        @NotNull FunctionOperand operand,
                                        @NotNull TerminalClause terminalClause)
    {
        User searchUser = queryCreationContext.getUser();
        MessageSet messages = new MessageSetImpl();
        List<QueryLiteral> values = internalGetValues(searchUser,
                operand,
                terminalClause,
                messages,
                !queryCreationContext.isSecurityOverriden());
        return values;
    }

    private List<QueryLiteral> internalGetValues(@NotNull User searchUser,
                                                 @NotNull FunctionOperand operand,
                                                 @NotNull TerminalClause terminalClause,
                                                 @NotNull MessageSet messages,
                                                 @NotNull Boolean checkSecurity)
    {
        List<QueryLiteral> result = new ArrayList<QueryLiteral>();
        if (searchUser == null)
        {
            // handle anon user
        }
        List<String> args = operand.getArgs();
        if (wasSanitised(args))
        {
            messages.addErrorMessage("this function can't be used as part of a saved filter etc");
            return result;
        }
        if (args.size() < getMinimumNumberOfExpectedArguments())
        {
            messages.addErrorMessage("too few arguments, etc");
            return result;
        }
        final String magicKey = args.get(JQL_FUNCTION_MAGIC_KEY_INDEX);

        // You need to implement this part yourself! This is where you use the supplied
        // magicKey to fetch a list of issues from your own internal data source.
        List<String> myIssueKeys = myCache.get(magicKey);

        for (String id : myIssueKeys)
        {
            result.add(new QueryLiteral(operand, id));
        }
        return result;
    }

    @Override
    public MessageSet validate(User searcher, @NotNull FunctionOperand operand, @NotNull TerminalClause terminalClause)
    {
        MessageSet messages = new MessageSetImpl();
        internalGetValues(searcher, operand, terminalClause, messages, true);
        return messages;
    }

    @Override
    public FunctionOperand sanitiseOperand(User paramUser, @NotNull FunctionOperand operand)
    {
        // prevent the user from saving this as a filter, since the results are presumed to be dynamic
        return new FunctionOperand(operand.getName(), Arrays.asList(""));
    }

    private boolean wasSanitised(List<String> args)
    {
        return (args.size() == 0 || args.get(JQL_FUNCTION_MAGIC_KEY_INDEX).isEmpty());
    }
}
