Iterating multiple reasoned literals from the same property - jena

The title may be a bit confusing but basically this is the problem: I am using Jena and a Pellet reasoner to produce property literals from a resource called Patient_Doug. The triple looks like this:
Patient_Doug-> hasSuggestion-> Literal inferred suggestion.
The problem is that the Protege Pellet reasoner comes up with three suggestions for Doug, because Doug is in a pretty bad way in hospital. The Protege reasoner suggests that Doug needs a Hi-Lo bed, an RFID band and a bed closer to the nurse's station. Unfortunately, in Jena, I can only get the Hi-Lo bed to print: only one of the three literals.
Here is some of the code.
OntModel model = ModelFactory.createOntologyModel( PelletReasonerFactory.THE_SPEC );
String ns = "http://altervista.org/owl/unit.owl#";
String inputFile = "c:\\jena\\acuity.owl";
InputStream in = FileManager.get().open(inputFile);
if (in == null) {
throw new IllegalArgumentException("File: " + inputFile + " not found");
}
model.read(in,"");
model.prepare();
//inf and reasoner won't run unless I use the HP libraries!
//asserted data properties
Individual ind = model.getIndividual(ns+"Patient_Doug");
OntProperty abcValue = model.getOntProperty("http://example.org/hasABCValue");
//inferred data properties
OntProperty suggestion = model.getOntProperty(ns+"hasSuggestion");
//print asserted data properties
System.out.println("Properties for patient "+ind.getLocalName().toString());
System.out.println( abcValue.getLocalName()+"= "+ind.getPropertyValue(abcValue).asLiteral().getInt());
//print inferenced data properties
StmtIterator it = ind.listProperties(suggestion);
//this iterator only prints one suggestion in an infinite loop
while (it.hasNext()) {
System.out.println("A posible suggestion= "+ind.getPropertyValue(suggestion).asLiteral().getString());
}
}
The code runs, but the iterator at the end prints only one suggestion, in an infinite loop.
I would be grateful for any suggestions.
Thanks.

This code works to iterate over and print the many inferred hasSuggestion values. The hasSuggestion SWRL rules are in the OWL ontology.
OntModel model = ModelFactory.createOntologyModel( PelletReasonerFactory.THE_SPEC );
String ns = "http://altervista.org/owl/unit.owl#";
String inputFile = "c:\\jena\\acuity.owl";
InputStream in = FileManager.get().open(inputFile);
if (in == null) {
throw new IllegalArgumentException("File: " + inputFile + " not found");
}
model.read(in,"");
model.prepare();
//inf and reasoner won't run unless I use the HP libraries!
//asserted data properties
Individual ind = model.getIndividual(ns+"Patient_Doug");
OntProperty abcValue = model.getOntProperty("http://example.org/hasABCValue");
//inferred data properties
OntProperty suggestion = model.getOntProperty(ns+"hasSuggestion");
//print asserted data properties
System.out.println("Properties for patient "+ind.getLocalName().toString());
System.out.println( abcValue.getLocalName()+"= "+ind.getPropertyValue(abcValue).asLiteral().getInt());
for (StmtIterator j = ind.listProperties(suggestion); j.hasNext(); ) {
Statement s = j.next();
//System.out.println( " " + s.getPredicate().getLocalName() + " -> " );
System.out.println( "A possible suggestion... " + s.getLiteral().getLexicalForm());
}
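For completeness: the original while loop never called it.next(), so the iterator never advanced (hence the infinite loop), and ind.getPropertyValue(suggestion) always returns just one arbitrary value. If you prefer to keep the while form, a minimal sketch of a corrected loop, using the same Statement API as above, would be:
StmtIterator it = ind.listProperties(suggestion);
while (it.hasNext()) {
    // advance the iterator and read the literal from the current statement
    Statement s = it.next();
    System.out.println("A possible suggestion= " + s.getLiteral().getLexicalForm());
}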

How to find the number of documents (and fraction) per topic using LDA?

I am trying to extract topics from 7 million Twitter posts. I treat each tweet as a document, so I stored all tweets in a file where each line (i.e., each tweet) is treated as a document. I used this file as the input file for the Mallet API.
public static void LDAModel(int numofK,int numbofIteration,int numberofThread,String outputDir,InstanceList instances) throws Exception
{
// Create a model with 100 topics, alpha_t = 0.01, beta_w = 0.01
// Note that the first parameter is passed as the sum over topics, while
// the second is the parameter for a single dimension of the Dirichlet prior.
int numTopics = numofK;
ParallelTopicModel model = new ParallelTopicModel(numTopics, 1.0, 0.01);
model.addInstances(instances);
// Use two parallel samplers, which each look at one half the corpus and combine
// statistics after every iteration.
model.setNumThreads(numberofThread);
// Run the model for 50 iterations and stop (this is for testing only,
// for real applications, use 1000 to 2000 iterations)
model.setNumIterations(numbofIteration);
model.estimate();
// Show the words and topics in the first instance
// The data alphabet maps word IDs to strings
Alphabet dataAlphabet = instances.getDataAlphabet();
FeatureSequence tokens = (FeatureSequence) model.getData().get(0).instance.getData();
LabelSequence topics = model.getData().get(0).topicSequence;
Formatter out = new Formatter(new StringBuilder(), Locale.US);
for (int position = 0; position < tokens.getLength(); position++) {
// out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
}
System.out.println(out);
// Estimate the topic distribution of the first instance,
// given the current Gibbs state.
double[] topicDistribution = model.getTopicProbabilities(0);
// Get an array of sorted sets of word ID/count pairs
ArrayList<TreeSet<IDSorter>> topicSortedWords = model.getSortedWords();
// Show top 10 words in topics with proportions for the first document
String topicsoutput="";
for (int topic = 0; topic < numTopics; topic++) {
Iterator<IDSorter> iterator = topicSortedWords.get(topic).iterator();
out = new Formatter(new StringBuilder(), Locale.US);
out.format("%d\t%.3f\t", topic, topicDistribution[topic]);
int rank = 0;
while (iterator.hasNext() && rank < 10) {
IDSorter idCountPair = iterator.next();
out.format("%s (%.0f) ", dataAlphabet.lookupObject(idCountPair.getID()), idCountPair.getWeight());
//out.format("%s ", dataAlphabet.lookupObject(idCountPair.getID()));
rank++;
}
System.out.println(out);
}
// Create a new instance with high probability of topic 0
StringBuilder topicZeroText = new StringBuilder();
Iterator<IDSorter> iterator = topicSortedWords.get(0).iterator();
int rank = 0;
while (iterator.hasNext() && rank < 10) {
IDSorter idCountPair = iterator.next();
topicZeroText.append(dataAlphabet.lookupObject(idCountPair.getID()) + " ");
rank++;
}
// Create a new instance named "test instance" with empty target and source fields.
InstanceList testing = new InstanceList(instances.getPipe());
testing.addThruPipe(new Instance(topicZeroText.toString(), null, "test instance", null));
TopicInferencer inferencer = model.getInferencer();
double[] testProbabilities = inferencer.getSampledDistribution(testing.get(0), 10, 1, 5);
System.out.println("0\t" + testProbabilities[0]);
File pathDir = new File(outputDir + File.separator+ "NumofTopics"+numTopics); //FIXME replace all strings with constants
pathDir.mkdir();
String DirPath = pathDir.getPath();
String stateFile = DirPath+File.separator+"output_state.gz";
String outputDocTopicsFile = DirPath+File.separator+"output_doc_topics.txt";
String topicKeysFile = DirPath+File.separator+"output_topic_keys";
PrintWriter writer=null;
String topicKeysFile_fromProgram = DirPath+File.separator+"output_topic";
try {
writer = new PrintWriter(topicKeysFile_fromProgram, "UTF-8");
writer.print(topicsoutput);
writer.close();
} catch (Exception e) {
e.printStackTrace();
}
model.printTopWords(new File(topicKeysFile), 11, false);
model.printDocumentTopics(new File (outputDocTopicsFile));
model.printState(new File (stateFile));
}
public static void main(String[] args) throws Exception{
// Begin by importing documents from text to feature sequences
ArrayList<Pipe> pipeList = new ArrayList<Pipe>();
// Pipes: lowercase, tokenize, remove stopwords, map to features
pipeList.add( new CharSequenceLowercase() );
pipeList.add( new CharSequence2TokenSequence(Pattern.compile("\\p{L}[\\p{L}\\p{P}]+\\p{L}")) );
pipeList.add( new TokenSequenceRemoveStopwords(new File("H:\\Data\\stoplists\\en.txt"), "UTF-8", false, false, false) );
pipeList.add( new TokenSequence2FeatureSequence() );
InstanceList instances = new InstanceList (new SerialPipes(pipeList));
Reader fileReader = new InputStreamReader(new FileInputStream(new File("E:\\Thesis Data\\DataForLDA\\freshnewData\\cleanTweets.txt")), "UTF-8");
instances.addThruPipe(new CsvIterator (fileReader, Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"),
3, 2, 1)); // data, label, name fields
int numberofTopic=5;
int numberofIteration=50;
int numberofThread=6;
String outputDir="J:\\Topics\\";
//int numberofTopic=5;
LDAModel(numberofTopic,numberofIteration,numberofThread,outputDir,instances);
TimeUnit.SECONDS.sleep(30);
numberofTopic=10;
}
I have got three files from the above program.
1. state file
2. topic proportion file
3. key topic list
I would like to find out the number of documents allocated per topic.
For example I got the following output from key topic list file
0.004 obama (5471) canada (5283) woman (5152) vote (4879) police(3965)
where the first column is the topic serial number, the second column is the topic weight, and the third column onwards lists the words under this topic (with their word counts).
Here I get the number of words under each topic, but I would also like to show the number of documents in which each topic occurs. It would be helpful to write this output to a separate file, for example:
Topic 1: doc1(80%) doc2(70%) .......
Could anyone please give some idea or any source code for this?
Thanks.
The information you are looking for is contained in the file "2. topic proportion" you mentioned. Note that every document contains each topic with some percentage (although the percentage may be large for one topic and extremely small for others). You will have to decide what you want to extract from the file: the dominant topic (it is in column 3), or the dominant topic only when its percentage is at least 50% (sometimes two topics have almost the same percentage), and so on.
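As a concrete illustration (this sketch is not from the original post), documents per dominant topic can be counted by reading the output_doc_topics.txt file written by printDocumentTopics() above. It assumes the older MALLET layout where each line is "doc# name topic proportion topic proportion ..." with the pairs sorted by proportion, so the dominant topic is in column 3; adjust the indices if your MALLET version prints one proportion per topic instead:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;
public class TopicDocumentCounter {
    public static void main(String[] args) throws Exception {
        Map<Integer, Integer> docsPerTopic = new HashMap<Integer, Integer>();
        BufferedReader reader = new BufferedReader(new FileReader("output_doc_topics.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.startsWith("#") || line.trim().isEmpty()) continue; // skip header line
            String[] fields = line.trim().split("\\s+");
            int dominantTopic = Integer.parseInt(fields[2]);   // column 3: dominant topic
            double proportion = Double.parseDouble(fields[3]); // its proportion
            if (proportion >= 0.5) { // optional threshold, as suggested above
                Integer count = docsPerTopic.get(dominantTopic);
                docsPerTopic.put(dominantTopic, count == null ? 1 : count + 1);
            }
        }
        reader.close();
        for (Map.Entry<Integer, Integer> entry : docsPerTopic.entrySet()) {
            System.out.println("Topic " + entry.getKey() + ": " + entry.getValue() + " documents");
        }
    }
}
Collecting the document names and proportions per topic instead of bare counts would give output closer to the requested "Topic 1: doc1(80%) doc2(70%) ..." format.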

parseDeftemplate in Jess application. Can't provide the JessTokenStream

I'm implementing a method in my application that uses the Jesp parser class to open a file and get the deftemplates and deffacts inside it. The problem is that when I try to store the result in an object variable, the constructor asks for a JessTokenStream. I tried to pass a JessToken, but then it complains about the type. I searched the Jess documentation but didn't find an explanation of the constructor arguments, only its syntax.
Can anyone help?
Thanks in advance!
The class JessTokenStream is not public, so you can't actually call those parseXXX() methods. They are public for historical reasons but aren't actually usable by clients. They should actually be removed from the public interface.
Instead, use the two-argument form of parseExpression(), and then test the returned object to determine its type. Then you can do what you want with the returned object:
Rete engine = ...
Jesp jesp = ...
Object o = jesp.parseExpression(engine.getGlobalContext(), false);
if (o instanceof Deffacts) {
Deffacts d = (Deffacts) o;
for (int i = 0; i<d.getNFacts(); ++i) {
Fact f = d.getFact(i);
Deftemplate t = f.getDeftemplate();
System.out.println("Fact name is " + f.getName();
System.out.println("Fact name is " + f.getName();
for (String name: t.getSlotNames())
System.out.println("Slot " + name + " contains " + f.getSlotValue(name));
}
}
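If the parsed expression is a deftemplate rather than deffacts, the same pattern applies; here is a small sketch that reuses only the Deftemplate accessor already shown above:
if (o instanceof Deftemplate) {
    Deftemplate t = (Deftemplate) o;
    // list the slots declared by the template
    for (String name: t.getSlotNames())
        System.out.println("Slot declared: " + name);
}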

SQL CLR User Defined Function (C#) adds null character (\0) in between every existing character in String being returned

This one has kept me stumped for a couple of days now.
It's my first dabble with CLR & UDF ...
I have created a user defined function that takes a multiline String as input, scans it and replaces a certain line in the string with an alternative if found. If it is not found, it simply appends the desired line at the end. (See code)
The problem, it seems, comes when the final String (or StringBuilder) is converted to an SqlString or SqlChars. The converted, returned string always contains the null character as every second character (viewed via console output, they are displayed as spaces).
I'm probably missing something fundamental on UDF and/or CLR.
Please Help!!
Code (I left in the commented StringBuilder, which was my initial attempt... changed to a normal String in a desperate attempt to find the issue):
[Microsoft.SqlServer.Server.SqlFunction]
[return: SqlFacet(MaxSize = -1, IsFixedLength = false)]
//public static SqlString udf_OmaChangeJob(String omaIn, SqlInt32 jobNumber) {
public static SqlChars udf_OmaChangeJob(String omaIn, SqlInt32 jobNumber) {
if (omaIn == null || omaIn.ToString().Length <= 0) return new SqlChars("");
String[] lines = Regex.Split(omaIn.ToString(), "\r\n");
Regex JobTag = new Regex(@"^JOB=.+$");
//StringBuilder buffer = new StringBuilder();
String buffer = String.Empty;
bool matched = false;
foreach (var line in lines) {
if (!JobTag.IsMatch(line))
//buffer.AppendLine(line);
buffer += line + "\r\n";
else {
//buffer.AppendLine("JOB=" + jobNumber);
buffer += ("JOB=" + jobNumber + "\r\n");
matched = true;
}
}
if (!matched) //buffer.AppendLine("JOB=" + jobNumber);
buffer += ("JOB=" + jobNumber) + "\r\n";
//return new SqlString(buffer.ToString().Replace("\0",String.Empty)) + "blablabla";
// buffer = buffer.Replace("\0", "|");
return new SqlChars(buffer + "\r\nTheEnd");
}
In my experience, the omaIn parameter should be of type SqlString, and when you go to collect and process its value, set a local variable:
string omaString = omaIn.IsNull ? string.Empty : omaIn.Value;
Then when you return on any code path, to rewrap the string in C#, you'd need to set
return omaString == string.Empty ? SqlString.Null : new SqlString(omaString);
I have had some fun wrestling matches learning the intricate hand-off between local and outbound types, especially with CLR TVFs.
Hope that can help!

ANTLR best way to include meta-data in lexing/parsing (custom objects, kind of annotation)

I plan to include text metadata (like bold, font-size, etc.) in the process of parsing to achieve better recognition.
For instance, I have a given structure where a word on its own line (word\r\n) which is bold and sized 24px is the title of some article. In order to get better recognition results, I want to take the metadata into account as well as the characters. In terms of ANTLR I'm not sure how this could best be done. I'd like to do something like:
1. Wrap each character of the original text in a custom object with fields for the metadata and pass that to ANTLR.
2. Preprocess the text and insert annotations for the metadata at specific places, which the grammar then takes into account.
I would really like to take option 1, but I'm not sure which parts of ANTLR I need to subclass. Do I have to start at the ANTLRInputStream object in order to get a proper stream for a subclassed Lexer, which produces custom Tokens for a subclassed Parser, etc.? Is there a more elegant way, especially for querying the tokens while parsing with actions in a {} block?
If anyone has some hints and/or experiences this would be great!
EDIT:
Here is a more specific, simple example: I have a file which includes the encoding of the metadata, which I parse beforehand. The actual text, including newlines, looks like the following:
entryOne
Here is some content one.
entryTwo
Here is some content two.
Here the titles entryOne and entryTwo originally have a font size of 24px and the content has a font size of 12px (as example values). Char by char, I create a new instance of a custom object encapsulating the character as a String together with the font size.
I initialize such an object for each character with its font size, e.g. for the first letter of entryOne:
MyChar aTitelChar = new MyChar("e", 24);
For the content, like the second line Here is some content one. I create instances of MyChar like:
MyChar aContentChar= new MyChar("H", 12);
All characters of the text are wrapped in instances of the MyChar class below and added to a List<MyChar> in order to produce a new input for ANTLR.
Below is the Java class for the characters:
public class MyChar {
private int fontSizePx;
private String text;
public MyChar(String text, int fontSizePx) {
this.text = text;
this.fontSizePx = fontSizePx;
}
public int getFontSizePx() {
return fontSizePx;
}
public String getText() {
return text;
}
}
I want my grammar to match the above two entries (or more, formatted this way), each of which consists of a title and a content terminated by a full stop. The grammar could look like this:
rule: entry+ NEWLINE
;
entry:
title
content
;
title:
letters NEWLINE
;
content:
(letters)+ '.' NEWLINE
;
letters:
LETTERS
;
LETTERS:
('a'..'z' | 'A'..'Z')+
;
WS:
(' ' | '\t' | '\f' ) + {$channel = HIDDEN;};
NEWLINE:'\r'? '\n';
Now, for instance, what I want to do is find out whether something really is the title of an entry by checking the font size of all letters making up the title token before the title rule returns. If the input conforms to the grammar but is actually a mistake (the original metadata-encoded file starts with something that conforms to the title rule but is actually content), the author of the grammar could sort that out, knowing that the original font size for titles is 24, by checking it: if one of the letter tokens doesn't have font size 24, throw an exception, don't return, or do something else appropriate.
The thing I'm pondering is where to plug in the List<MyChar> to provide this functionality (to query this kind of metadata while parsing in the context of ANTLR). I'm experimenting with ANTLR's classes, but as I'm new to ANTLR I thought some experienced users could point me in the right direction: where would a good insertion point for custom objects be? Should I start by implementing CharStream and overriding some methods? Perhaps ANTLR already provides something that I haven't found yet?
Here's one way to accomplish what I think you're going for, using the parser to manage matching input to metadata. Note that I made whitespace significant because it's part of the content and can't be skipped. I also made periods part of content to simplify the example, rather than using them as a marker.
SysEx.g
grammar SysEx;
@header {
import java.util.List;
}
@parser::members {
private List<MyChar> metadata;
private int curpos;
private boolean isTitleInput(String input) {
return isFontSizeInput(input, 24);
}
private boolean isContentInput(String input){
return isFontSizeInput(input, 12);
}
private boolean isFontSizeInput(String input, int fontSize){
List<MyChar> sublist = metadata.subList(curpos, curpos + input.length());
System.out.println(String.format("Testing metadata for input=\%s, font-size=\%d", input, fontSize));
int start = curpos;
//move our metadata pointer forward.
skipInput(input);
for (int i = 0, count = input.length(); i < count; ++i){
MyChar chardata = sublist.get(i);
char c = input.charAt(i);
if (chardata.getText().charAt(0) != c){
//This character doesn't match the metadata (ERROR!)
System.out.println(String.format("Content mismatch at metadata position \%d: metadata=(\%s,\%d); input=\%c", start + i, chardata.getText(), chardata.getFontSizePx(), c));
return false;
} else if (chardata.getFontSizePx() != fontSize){
//The font is wrong.
System.out.println(String.format("Format mismatch at metadata position \%d: metadata=(\%s,\%d); input=\%c", start + i, chardata.getText(), chardata.getFontSizePx(), c));
return false;
}
}
//All characters check out.
return true;
}
private void skipInput(String str){
curpos += str.length();
System.out.println("\t\tMoving metadata pointer ahead by " + str.length() + " to " + curpos);
}
}
rule[List<MyChar> metadata]
@init {
this.metadata = metadata;
}
: entry+ EOF
;
entry
: title content
{System.out.println("Finished reading entry.");}
;
title
: line {isTitleInput($line.text)}? newline {System.out.println("Finished reading title " + $line.text);}
;
content
: line {isContentInput($line.text)}? newline {System.out.println("Finished reading content " + $line.text);}
;
newline
: (NEWLINE{skipInput($NEWLINE.text);})+
;
line returns [String text]
@init {
StringBuilder builder = new StringBuilder();
}
@after {
$text = builder.toString();
}
: (ANY{builder.append($ANY.text);})+
;
NEWLINE:'\r'? '\n';
ANY: .; //whitespace can't be skipped because it's content.
A title is a line that matches the title metadata (size 24 font) followed by one or more newline characters.
A content is a line that matches the content metadata (size 12 font) followed by one or more newline characters. As mentioned above, I removed the check for a period for simplification.
A line is a sequence of characters that does not include newline characters.
A validating semantic predicate (the {...}? after line) is used to validate that the line matches the metadata.
Here is the code I used to test the grammar (minus imports, for brevity):
SysExGrammar.java
public class SysExGrammar {
public static void main(String[] args) throws Exception {
//Create some metadata that matches our input.
List<MyChar> matchingMetadata = new ArrayList<MyChar>();
appendMetadata(matchingMetadata, "entryOne\r\n", 24);
appendMetadata(matchingMetadata, "Here is some content one.\r\n", 12);
appendMetadata(matchingMetadata, "entryTwo\r\n", 24);
appendMetadata(matchingMetadata, "Here is some content two.\r\n", 12);
parseInput(matchingMetadata);
System.out.println("Finished example #1");
//Create some metadata that doesn't match our input (negative test).
List<MyChar> mismatchingMetadata = new ArrayList<MyChar>();
appendMetadata(mismatchingMetadata, "entryOne\r\n", 24);
appendMetadata(mismatchingMetadata, "Here is some content one.\r\n", 12);
appendMetadata(mismatchingMetadata, "entryTwo\r\n", 12); //content font size!
appendMetadata(mismatchingMetadata, "Here is some content two.\r\n", 12);
parseInput(mismatchingMetadata);
System.out.println("Finished example #2");
}
private static void parseInput(List<MyChar> metadata) throws Exception {
//Test setup
InputStream resource = SysExGrammar.class.getResourceAsStream("SysExTest.txt");
CharStream input = new ANTLRInputStream(resource);
resource.close();
SysExLexer lexer = new SysExLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);
SysExParser parser = new SysExParser(tokens);
parser.rule(metadata);
System.out.println("Parsing encountered " + parser.getNumberOfSyntaxErrors() + " syntax errors");
}
private static void appendMetadata(List<MyChar> metadata, String string,
int fontSize) {
for (int i = 0, count = string.length(); i < count; ++i){
metadata.add(new MyChar(string.charAt(i) + "", fontSize));
}
}
}
SysExTest.txt (note this uses Windows newlines, \r\n):
entryOne
Here is some content one.
entryTwo
Here is some content two.
Test output (trimmed; the second example has deliberately-mismatched metadata):
Parsing encountered 0 syntax errors
Finished example #1
Parsing encountered 2 syntax errors
Finished example #2
This solution requires that each MyChar corresponds to a character in the input (including newline characters, although you can remove that limitation if you like -- I would remove it if I didn't already have this answer written up ;) ).
As you can see, it's possible to tie the metadata to the parser and everything works as expected. I hope this helps.

JQL performance - natural sort for custom text field

I am struggling with a JQL query.
We have a custom field called 'Build Reported' which is a text field. It has values like '4.7.323H', '5.1.123L', '3.1.456E', etc.
I need to write a simple query that will give me all issues reported after the user-specified version.
JQL function prototype: searchIssues('Build Integrated', '>', '4.7.323B')
To do this, I am firing a JQL query that gives me the Build Reported value for all issues; I then iterate through each issue and perform a char-by-char comparison to determine whether the Build Reported version of the current issue is greater than the one specified by the user. This takes too long to execute since I have to retrieve all the issues from the Jira database.
Is there a faster way to achieve this? Here is what I have so far:
// Get all the arguments
java.util.List args = operand.getArgs();
CustomField cf = customFieldManager.getCustomFieldObjectByName((String)args.get(0));
Long cfID = cf.getIdAsLong();
String operator = (String)args.get(1);
String userVersion = (String)args.get(2);
String jiraVersion = "";
java.util.List issues;
Iterator issuesIterator;
Issue issue;
issues = getAllIssues(user, cfID);
issuesIterator = issues.iterator();
// Iterate over all the issues
while(issuesIterator.hasNext())
{
issue = (Issue)issuesIterator.next();
// Get the Build reported value
jiraVersion = (String)issue.getCustomFieldValue(cf);
if(jiraVersion != null &&
!jiraVersion.equals(""))
{
// Compare user-specified version to the one retrieved from database
if(compareVersions(jiraVersion, userVersion, operator))
{
// Add the issue to the result set
literals.add(new QueryLiteral(operand, issue.getId()));
}
}
}
// cfID is the ID for the custom field Build Reported
private java.util.List getAllIssues(User user, Long cfID) throws SearchException, ParseException
{
JqlQueryBuilder builder = JqlQueryBuilder.newBuilder();
builder.where().project("SDEV").and().customField(cfID).isNotEmpty();
Query query = builder.buildQuery();
SearchResults results = searchService.search(user, query, PagerFilter.getUnlimitedFilter());
return results.getIssues();
}
Please note that I do not have any other filters that I could use for the JQL Query Builder to help me reduce the size of the result set.
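For reference, the compareVersions helper used above is not shown in the question; a purely illustrative sketch of such a segment-by-segment comparison (the original implementation may differ) could look like this:
// Illustrative sketch: splits versions like "4.7.323H" on dots, compares the numeric
// prefix of each segment numerically and any trailing letters lexicographically,
// then applies the requested operator.
private boolean compareVersions(String issueVersion, String userVersion, String operator)
{
    int cmp = 0;
    String[] a = issueVersion.split("\\.");
    String[] b = userVersion.split("\\.");
    for (int i = 0; cmp == 0 && i < Math.max(a.length, b.length); i++)
    {
        String x = i < a.length ? a[i] : "0";
        String y = i < b.length ? b[i] : "0";
        String xNum = x.replaceAll("[^0-9].*$", ""); // numeric prefix, e.g. "323" from "323H"
        String yNum = y.replaceAll("[^0-9].*$", "");
        cmp = Integer.valueOf(xNum.isEmpty() ? "0" : xNum)
                .compareTo(Integer.valueOf(yNum.isEmpty() ? "0" : yNum));
        if (cmp == 0)
        {
            // compare any trailing letters, e.g. 'H' vs 'L'
            cmp = x.substring(xNum.length()).compareTo(y.substring(yNum.length()));
        }
    }
    if (">".equals(operator))  return cmp > 0;
    if ("<".equals(operator))  return cmp < 0;
    if (">=".equals(operator)) return cmp >= 0;
    if ("<=".equals(operator)) return cmp <= 0;
    return cmp == 0;
}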
I found an alternative to the issue described in my question. Instead of using JQL, I ended up firing a direct SELECT, which turned out to be much quicker. The function below is part of the JQL plugin; this is what I ended up doing:
public java.util.List getValues(@NotNull QueryCreationContext queryCreationContext, @NotNull FunctionOperand operand, @NotNull TerminalClause terminalClause)
{
try
{
// User
User user = queryCreationContext.getUser();
// Args
java.util.List args = operand.getArgs();
CustomField cf = customFieldManager.getCustomFieldObjectByName((String)args.get(0));
Long cfID = cf.getIdAsLong();
String operator = (String)args.get(1);
String userVersion = (String)args.get(2);
// Locals
java.util.List literals = new java.util.LinkedList();
MutableIssue issue = null;
String issueId = "";
String jiraVersion = "";
// DB
Connection conn = null;
String url = "jdbc:jtds:sqlserver://*****:*****/jiradb";
String driver = "net.sourceforge.jtds.jdbc.Driver";
String userName = "*******";
String password = "*******";
String sqlQuery = null;
Statement statement = null;
ResultSet resultSet = null;
Class.forName(driver).newInstance();
conn = DriverManager.getConnection(url, userName, password);
// Get all the issues that has the custom field set
sqlQuery = " SELECT t2.id AS IssueId, t1.stringvalue AS JiraVersion " + "\n" +
" FROM jiradb.jiraschema.customfieldvalue t1 " + "\n" +
" INNER JOIN jiradb.jiraschema.jiraissue t2 " + "\n" +
" ON t1.issue = t2.id " + "\n" +
" WHERE t1.customfield = " + Long.toString(cfID) + " " + "\n" +
" AND t1.stringvalue IS NOT NULL " + "\n" +
" AND t1.stringvalue != '' " + "\n" +
" AND t2.pkey LIKE 'SDEV-%' ";
// Iterate over the result set
statement = conn.createStatement();
resultSet = statement.executeQuery(sqlQuery);
while (resultSet.next())
{
issueId = resultSet.getString("IssueId");
jiraVersion = resultSet.getString("JiraVersion");
// Compare the version from jira with the user provided version
// This is my own function that does char-by-char comparison
if(compareVersions(jiraVersion, userVersion, operator))
{
// Get the issue object to add to the result
issue = ComponentManager.getInstance().getIssueManager().getIssueObject(Long.parseLong(issueId));
// Add the issue to the result
literals.add(new QueryLiteral(operand, issue.getId()));
}
}
// Return all the matching issues here
return literals;
}
catch(Exception e)
{
// Exception handling
}
return null;
}
And this is how the plugin is used:
issue in searchIssues('Build Reported', '>', '5.1.104');
