I'm trying to figure out if there is a way to do a wildcard registry search using Log Parser 2.2. A sample of what I'm trying to do:
RegRecordSet rs = null;
try
{
    LogQuery qry = new LogQuery();
    RegistryInputFormat registryFormat = new RegistryInputFormat();
    string query = @"SELECT Path FROM \HKCU\Software WHERE Value='%keyword%'";
    rs = qry.Execute(query, registryFormat);
    for (; !rs.atEnd(); rs.moveNext())
        listBox1.Items.Add(rs.getRecord().toNativeString(","));
}
finally
{
    if (rs != null)
        rs.close();
}
WHERE Value='%keyword%' does not work the way I hoped: the = comparison is literal, so it searches for the exact string %keyword% rather than treating the percent signs as wildcards.
Okay, never mind, I got it figured out: the comparison needs the LIKE operator for the percent signs to act as wildcards:
RegRecordSet rs = null;
try
{
    LogQuery qry = new LogQuery();
    RegistryInputFormat registryFormat = new RegistryInputFormat();
    string query = @"SELECT Path FROM \HKCU\Software WHERE Value LIKE '%keyword%'";
    rs = qry.Execute(query, registryFormat);
    for (; !rs.atEnd(); rs.moveNext())
        listBox1.Items.Add(rs.getRecord().toNativeString(","));
}
finally
{
    if (rs != null)
        rs.close();
}
Good afternoon all,
I am facing an issue and can't figure out what I am doing wrong in my MySQL/PHP while/foreach loop.
The loop seems to be duplicating results.
//////////////////////// Check which sites have this app ///////////////////
$query_t = "SELECT * FROM site WHERE app_id='2'";
$result_t = mysql_query($query_t) or die(mysql_error());
$rows = array();
while ($r_t = mysql_fetch_array($result_t))
    $rows[] = $r_t;

foreach ($rows as $r_t) {
    $this_areport_site_id = $r_t['site_id'];
    ////////////// Search for user emails that have access to this app ////////
    $query_t4 = "SELECT mail FROM $user_tbl WHERE arep_kitchen='1' AND site_id='$this_areport_site_id' ORDER BY id ASC";
    $result_t4 = mysql_query($query_t4);
    while ($r_t4 = mysql_fetch_array($result_t4)) {
        $areport_kitchen_email .= $r_t4['mail'] . ','; // append a comma after each value added to the string
    }
    echo 'Here: ' . $this_areport_site_id . ' - ' . $areport_kitchen_email . '<br />';
}
Now it returns me the following:
Here: AHROW - person1@email.com,
Here: AHROW - person1@email.com,person2@email.com,
Here: AHALEX - person1@email.com,person2@email.com,person3@email.com,
And I was expecting
Here: AHROW - person1@email.com,person2@email.com,
Here: AHLANG - No records here
Here: AHALEX - person3@email.com,
I would appreciate a suggestion as to what I am doing wrong there, as I have been sitting on this all morning.
You should empty the string $areport_kitchen_email before appending emails to it:
$areport_kitchen_email = ''; // empty the emails container string at the start of each foreach iteration
while ($r_t4 = mysql_fetch_array($result_t4)) {
    $areport_kitchen_email .= $r_t4['mail'] . ','; // append a comma after each value added to the string
}
This will avoid duplicates.
I have fixed the issue by nesting the two loops and dropping the comma-separated string, which seems to have been the problem in the first post.
Bozzy, thank you for the effort.
$query_t7 = "SELECT * FROM site WHERE app_id='2'";
$result_t7 = mysql_query($query_t7) or die(mysql_error());
while ($r_t7 = mysql_fetch_array($result_t7)) {
    $this_areport_site_id = $r_t7['site_id'];
    $query_t8 = "SELECT * FROM admins WHERE arep_kitchen='1' AND site_id='$this_areport_site_id' ORDER BY id ASC";
    $result_t8 = mysql_query($query_t8) or die(mysql_error());
    while ($r_t8 = mysql_fetch_array($result_t8)) {
        $areport_kitchen_email = $r_t8['mail'];
    }
}
I read the documentation (https://api.dartlang.org/stable/1.21.1/dart-core/RegExp-class.html) but could not find what I was looking for. Either I didn't understand it or I overlooked something.
I am trying to replicate the following JavaScript in Google Dart:
var regex = /foo_(\d+)/g,
    str = "text foo_123 more text foo_456 foo_789 end text",
    match = null;

while (match = regex.exec(str)) {
    console.log(match);           // matched capture groups
    console.log(match.index);     // index of where the match starts in the string
    console.log(regex.lastIndex); // index of where the match ends in the string
}
I also created a jsfiddle: https://jsfiddle.net/h3z88udz/
Does Dart have something like regex exec()?
RegExp.allMatches looks like it does what you want.
var regex = new RegExp(r"foo_(\d+)");
var str = "text foo_123 more text foo_456 foo_789 end text";

void main() {
  for (var match in regex.allMatches(str)) {
    print(match);       // the Match object (use match.group(1) for the captured digits)
    print(match.start); // index where the match starts
    print(match.end);   // index where the match ends
  }
}
https://dartpad.dartlang.org/dd1c136fa49ada4f2ad4ffc0659aab51
As per this FileHelpers 3.1 example, you can automatically detect a CSV file format using the FileHelpers.Detection.SmartFormatDetector class.
But the example goes no further. How do you use this information to dynamically parse a CSV file? It must have something to do with the DelimitedFileEngine but I cannot see how.
Update:
I figured out a possible way but had to resort to using reflection (which does not feel right). Is there another/better way? Maybe using System.Dynamic? Anyway, here is the code I have so far, it ain't pretty but it works:
// follows on from smart detector example
FileHelpers.Detection.RecordFormatInfo lDetectedFormat = formats[0];
Type lDetectedClass = lDetectedFormat.ClassBuilderAsDelimited.CreateRecordClass();

List<FieldInfo> lFieldInfoList = new List<FieldInfo>(lDetectedFormat.ClassBuilderAsDelimited.FieldCount);
foreach (FileHelpers.Dynamic.DelimitedFieldBuilder lField in lDetectedFormat.ClassBuilderAsDelimited.Fields)
    lFieldInfoList.Add(lDetectedClass.GetField(lField.FieldName));

FileHelperAsyncEngine lFileEngine = new FileHelperAsyncEngine(lDetectedClass);
int lRecNo = 0;
lFileEngine.BeginReadFile(cReadingsFile);
try
{
    while (true)
    {
        object lRec = lFileEngine.ReadNext();
        if (lRec == null)
            break;
        Trace.WriteLine("Record " + lRecNo);
        lFieldInfoList.ForEach(f => Trace.WriteLine("  " + f.Name + " = " + f.GetValue(lRec)));
        lRecNo++;
    }
}
finally
{
    lFileEngine.Close();
}
Since I use the SmartFormatDetector to determine the exact format of the incoming delimited files, you can use the following approach:
private DelimitedClassBuilder GetFormat(string file)
{
    var detector = new FileHelpers.Detection.SmartFormatDetector();
    var format = detector.DetectFileFormat(file);
    return format.First().ClassBuilderAsDelimited;
}

private List<T> ConvertFile2Objects<T>(string file, out DelimitedFileEngine engine)
{
    var format = GetFormat(file); // get the detected format here
    engine = new DelimitedFileEngine(typeof(T)); // define your DelimitedFileEngine
    // set whichever properties of the engine you need
    engine.ErrorMode = ErrorMode.SaveAndContinue; // optional
    engine.Options.Delimiter = format.Delimiter;
    engine.Options.IgnoreFirstLines = format.IgnoreFirstLines;
    engine.Options.IgnoreLastLines = format.IgnoreLastLines;
    // process
    var ret = engine.ReadFileAsList(file);
    this.errorCount = engine.ErrorManager.ErrorCount;
    var err = engine.ErrorManager.Errors;
    engine.ErrorManager.SaveErrors("errors.out");
    // return the records; do here what you need
    return ret.Cast<T>().ToList();
}
This is an approach I use in a project where I only know that I have to process delimited files of multiple types.
Attention: I noticed that with the files I received, the SmartFormatDetector has a problem with the tab delimiter. Maybe this should be taken into consideration.
Disclaimer: this code is not perfected, but it is in a usable state. Modification and/or refactoring is advised.
I'm very new to neo4j. I've read this question (Cypher Query not finding Node), but it does not work for me: I'm getting the error that the auto_node_index was not found. Perhaps that is because I'm using the BatchInserter?
For my experiment I'm using neo4j 1.8.2 and Java with the embedded database.
I want to put some data into the database using the BatchInserter and BatchInserterIndex, as explained at http://docs.neo4j.org/chunked/milestone/batchinsert.html:
BatchInserter myInserter = BatchInserters.inserter(DB_PATH);
BatchInserterIndexProvider indexProvider =
new LuceneBatchInserterIndexProvider( myInserter );
BatchInserterIndex persons =
indexProvider.nodeIndex( "persons", MapUtil.stringMap( "type", "exact" ) );
persons.setCacheCapacity( "name", 10000 );
First I read the data from a TGF file, create the nodes, and hand them to the inserter like this:
properties = MapUtil.map("name", actualNodeName, "birthday", birthdayValue);
long node = myInserter.createNode(properties);
nodes.add(node);
persons.flush();
The insert works fine, but when I search for a node with Cypher, the result is empty:
ExecutionEngine engine = new ExecutionEngine( db );
String query =
"start n=node:persons(name='nameToSearch') "
+ " match n-[:KNOWS]->m "
+ " return n.id, m ";
ExecutionResult result = engine.execute( query );
System.out.println(result);
On the other hand, when I use the Traverser class and start the search at the root node, I do receive the nodes which are connected to the node with the name "nameToSearch".
Can anybody explain to me why I can't get the nodes with Cypher?
Here is the complete method for the batch insert:
public long batchImport() throws IOException{
String actualLine;
ArrayList<Long> nodes = new ArrayList<Long>();
Map<String,Object> properties = new HashMap<String,Object>();
//delete all nodes and edges in the database
FileUtils.deleteRecursively(new File(DB_PATH ));
BatchInserter myInserter = BatchInserters.inserter(DB_PATH);
BatchInserterIndexProvider indexProvider =
new LuceneBatchInserterIndexProvider( myInserter );
BatchInserterIndex persons =
indexProvider.nodeIndex( "persons", MapUtil.stringMap( "type", "exact" ) );
persons.setCacheCapacity( "name", 10000 );
long execTime = 0;
try{
//Get the file which contains the graph informations
FileReader inputFile = new FileReader(UtilFunctions.searchFile(new File(PATH_OUTPUT_MERGED_FILES), "nodesAndEdges").get(0));
LineNumberReader inputLine = new LineNumberReader(inputFile);
// Read nodes up to symbol #
execTime = System.nanoTime();
while ((actualLine=inputLine.readLine()).charAt(0) != '#'){
StringTokenizer myTokenizer = new StringTokenizer(actualLine);
// Read node number
String actualNodeNumber = myTokenizer.nextToken();
// Read node name
String actualNodeName = myTokenizer.nextToken() + " " + myTokenizer.nextToken();
//Read property
myTokenizer.nextToken();
String actualNodePropertyKey = BIRTHDAY_KEY;
String actualNodePropertyValue = myTokenizer.nextToken();
actualNodePropertyValue = actualNodePropertyValue.substring(1, actualNodePropertyValue.length()-1);
// Insert node information
properties = MapUtil.map("name", actualNodeName, "birthday", actualNodePropertyValue, "id", actualNodeNumber);
long node = myInserter.createNode(properties);
nodes.add(node);
persons.flush();
}
// Read edges up to end of file
int countEdges = 0;
while ((actualLine=inputLine.readLine()) != null){
StringTokenizer myTokenizer = new StringTokenizer(actualLine);
// Read start node number
String actualStartNodeNumber = myTokenizer.nextToken();
// Read destination node number
String actualDestinationNodeNumber = myTokenizer.nextToken();
// Read relationship type
String actualRelType = myTokenizer.nextToken();
// Insert node information into ArrayList
int positionStartNode = Integer.parseInt(actualStartNodeNumber);
int positionDestinationNode = Integer.parseInt(actualDestinationNodeNumber);
properties.clear();
if (countEdges == 0) {
myInserter.createRelationship(0, nodes.get(positionStartNode-1), RelType.ROOT, properties);
myInserter.createRelationship(nodes.get(positionStartNode-1), nodes.get(positionDestinationNode-1), RelType.KNOWS, properties);
}
else
{
myInserter.createRelationship(nodes.get(positionStartNode-1), nodes.get(positionDestinationNode-1), RelType.KNOWS, properties);
}
countEdges++;
}
indexProvider.shutdown();
myInserter.shutdown();
execTime = System.nanoTime() - execTime;
// Close input file
inputLine.close();
inputFile.close();
}
catch (Throwable e){
System.out.println(e.getMessage());
e.printStackTrace();
}
return execTime;
}
Your code lacks a call to persons.add(node, <indexProperties>), so you are never actually adding anything to the index (see the sketch below).
It is also crucial that code using the BatchInserter API calls shutdown() on both the BatchInserterIndexProvider and the BatchInserter. Maybe you've missed this in your code.
If this does not solve the problem, please post your code.
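For illustration, here is a minimal, self-contained sketch of where the index add() call belongs relative to createNode(), flush() and the shutdown calls. It assumes the neo4j 1.8 org.neo4j.unsafe.batchinsert API used in the question; the database path and the property values are placeholders.

import java.util.Map;

import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserterIndex;
import org.neo4j.unsafe.batchinsert.BatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserters;
import org.neo4j.unsafe.batchinsert.LuceneBatchInserterIndexProvider;

public class BatchIndexSketch {
    public static void main(String[] args) {
        BatchInserter myInserter = BatchInserters.inserter("target/batchinsert-db"); // placeholder path
        BatchInserterIndexProvider indexProvider = new LuceneBatchInserterIndexProvider(myInserter);
        BatchInserterIndex persons = indexProvider.nodeIndex("persons", MapUtil.stringMap("type", "exact"));
        persons.setCacheCapacity("name", 10000);

        // create the node, then add the same properties to the "persons" index;
        // without the add() call, node:persons(name=...) in Cypher finds nothing
        Map<String, Object> properties = MapUtil.map("name", "nameToSearch", "birthday", "1980-01-01");
        long node = myInserter.createNode(properties);
        persons.add(node, properties);

        // flush once after all nodes have been created (flushing per node is expensive),
        // then shut down the index provider before the inserter
        persons.flush();
        indexProvider.shutdown();
        myInserter.shutdown();
    }
}

With the nodes actually present in the persons index, the start n=node:persons(name='nameToSearch') clause in your query should return results.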
I am struggling with a JQL query.
We have a custom field called 'Build Reported' which is a text field. It has values like '4.7.323H', '5.1.123L', '3.1.456E', etc.
I need to write a simple query that will give me all issues reported after the user-specified version.
JQL function prototype: searchIssues('Build Reported', '>', '4.7.323B')
To do this, I am firing a JQL query that gives me the Build Reported value for all issues; I then iterate through each issue and perform a char-by-char comparison (a sketch of such a comparison appears after the code below) to determine whether the Build Reported version of the current issue is greater than the one specified by the user. This seems to take too long to execute, since I have to retrieve all the issues from the JIRA database.
Is there a faster way to achieve this? Here is what I have so far:
// Get all the arguments
java.util.List args = operand.getArgs();
CustomField cf = customFieldManager.getCustomFieldObjectByName((String)args.get(0));
Long cfID = cf.getIdAsLong();
String operator = (String)args.get(1);
String userVersion = (String)args.get(2);
String jiraVersion = "";
java.util.List issues;
Iterator issuesIterator;
Issue issue;
issues = getAllIssues(user, cfID);
issuesIterator = issues.iterator();
// Iterate over all the issues
while(issuesIterator.hasNext())
{
issue = (Issue)issuesIterator.next();
// Get the Build reported value
jiraVersion = (String)issue.getCustomFieldValue(cf);
if(jiraVersion != null &&
!jiraVersion.equals(""))
{
// Compare user-specified version to the one retrieved from database
if(compareVersions(jiraVersion, userVersion, operator))
{
// Add the issue to the result set
literals.add(new QueryLiteral(operand, issue.getId()));
}
}
}
// cfID is the ID for the custom field Build Reported
private java.util.List getAllIssues(User user, Long cfID) throws SearchException, ParseException
{
JqlQueryBuilder builder = JqlQueryBuilder.newBuilder();
builder.where().project("SDEV").and().customField(cfID).isNotEmpty();
Query query = builder.buildQuery();
SearchResults results = searchService.search(user, query, PagerFilter.getUnlimitedFilter());
return results.getIssues();
}
Please note that I do not have any other filters that I could use for the JQL Query Builder to help me reduce the size of the result set.
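(The compareVersions helper used above is not shown in the post. Purely as an illustration, a segment-by-segment comparison of build strings like '4.7.323H' might look like the hypothetical sketch below; the real helper may behave differently.)

// Hypothetical sketch of a compareVersions helper; the actual implementation in the plugin may differ.
// Compares build strings of the form "major.minor.buildLetter", e.g. "5.1.123L" vs "4.7.323B".
private boolean compareVersions(String jiraVersion, String userVersion, String operator)
{
    String[] left = jiraVersion.split("\\.");
    String[] right = userVersion.split("\\.");
    int cmp = 0;
    for (int i = 0; i < Math.max(left.length, right.length) && cmp == 0; i++)
    {
        String l = i < left.length ? left[i] : "";
        String r = i < right.length ? right[i] : "";
        // compare the numeric part first, then any trailing letter(s)
        String lDigits = l.replaceAll("\\D", "");
        String rDigits = r.replaceAll("\\D", "");
        long lNum = lDigits.isEmpty() ? 0 : Long.parseLong(lDigits);
        long rNum = rDigits.isEmpty() ? 0 : Long.parseLong(rDigits);
        cmp = lNum < rNum ? -1 : (lNum > rNum ? 1 : 0);
        if (cmp == 0)
            cmp = l.replaceAll("\\d", "").compareTo(r.replaceAll("\\d", ""));
    }
    if (">".equals(operator))  return cmp > 0;
    if ("<".equals(operator))  return cmp < 0;
    if (">=".equals(operator)) return cmp >= 0;
    if ("<=".equals(operator)) return cmp <= 0;
    return cmp == 0; // treat any other operator as equality
}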
I found an alternative to the approach I described in my question. Instead of using JQL, I ended up firing a direct SELECT, and this turned out to be way quicker. The function below is part of the JQL plugin; this is what I ended up doing:
public java.util.List getValues(@NotNull QueryCreationContext queryCreationContext, @NotNull FunctionOperand operand, @NotNull TerminalClause terminalClause)
{
try
{
// User
User user = queryCreationContext.getUser();
// Args
java.util.List args = operand.getArgs();
CustomField cf = customFieldManager.getCustomFieldObjectByName((String)args.get(0));
Long cfID = cf.getIdAsLong();
String operator = (String)args.get(1);
String userVersion = (String)args.get(2);
// Locals
java.util.List literals = new java.util.LinkedList();
MutableIssue issue = null;
String issueId = "";
String jiraVersion = "";
// DB
Connection conn = null;
String url = "jdbc:jtds:sqlserver://*****:*****/jiradb";
String driver = "net.sourceforge.jtds.jdbc.Driver";
String userName = "*******";
String password = "*******";
String sqlQuery = null;
Statement statement = null;
ResultSet resultSet = null;
Class.forName(driver).newInstance();
conn = DriverManager.getConnection(url, userName, password);
// Get all the issues that has the custom field set
sqlQuery = " SELECT t2.id AS IssueId, t1.stringvalue AS JiraVersion " + "\n" +
" FROM jiradb.jiraschema.customfieldvalue t1 " + "\n" +
" INNER JOIN jiradb.jiraschema.jiraissue t2 " + "\n" +
" ON t1.issue = t2.id " + "\n" +
" WHERE t1.customfield = " + Long.toString(cfID) + " " + "\n" +
" AND t1.stringvalue IS NOT NULL " + "\n" +
" AND t1.stringvalue != '' " + "\n" +
" AND t2.pkey LIKE 'SDEV-%' ";
// Iterate over the result set
statement = conn.createStatement();
resultSet = statement.executeQuery(sqlQuery);
while (resultSet.next())
{
issueId = resultSet.getString("IssueId");
jiraVersion = resultSet.getString("JiraVersion");
// Compare the version from jira with the user provided version
// This is my own function that does char-by-char comparison
if(compareVersions(jiraVersion, userVersion, operator))
{
// Get the issue object to add to the result
issue = ComponentManager.getInstance().getIssueManager().getIssueObject(Long.parseLong(issueId));
// Add the issue to the result
literals.add(new QueryLiteral(operand, issue.getId()));
}
}
// Return all the matching issues here
return literals;
}
catch(Exception e)
{
// Exception handling
}
return null;
}
And this is how the plugin is used:
issue in searchIssues('Build Reported', '>', '5.1.104');