How to use ASTMatcher on a project - clang

I have written an ASTMatcher to find all global variables in a project. I can find all the global variables in a single file (.cpp or .h), but I have no idea how to run my ASTMatcher over a whole project: it cannot find variables in the files included as instructed by the CMakeLists. How can I solve this? Should I use scan-build, or pass extra command-line arguments to my ASTMatcher? (I'm sorry about my English...)
/**********************************************************************/
I used my ASTMatcher to check its own source file, and it found all the global variables defined in it. The output looks like this:
{
  "GlobalVar" : [
    {
      "name" : "MyToolCategory",
      "type" : "int",
      "loc" : "D:\\LLVM\\llvm\\tools\\clang\\tools\\extra\\match-global-variable\\MatchGlobalVariable.cpp:22:33"
    },
    {
      "name" : "MoreHelp",
      "type" : "int",
      "loc" : "D:\\LLVM\\llvm\\tools\\clang\\tools\\extra\\match-global-variable\\MatchGlobalVariable.cpp:30:22"
    },
    {
      "name" : "GlobalVarMatcher",
      "type" : "int",
      "loc" : "D:\\LLVM\\llvm\\tools\\clang\\tools\\extra\\match-global-variable\\MatchGlobalVariable.cpp:51:20"
    },
    {
      "name" : "GlobalVar",
      "type" : "int",
      "loc" : "D:\\LLVM\\llvm\\tools\\clang\\tools\\extra\\match-global-variable\\MatchGlobalVariable.cpp:55:12"
    }
  ]
}
But when I run it on the project, errors occur (see the screenshot of the error output).
I wrote my ASTMatcher following the tutorial at http://bcain-llvm.readthedocs.io/projects/clang/en/latest/LibASTMatchersTutorial/
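For reference, clang tools built on CommonOptionsParser can read a compilation database: running CMake with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON produces a compile_commands.json in the build directory, which records the exact compiler invocation (including include paths) for every source file. An entry in that file looks roughly like this (the paths here are illustrative, not from the question):

```json
[
  {
    "directory": "D:/project/build",
    "command": "clang++ -ID:/project/include -c D:/project/src/main.cpp",
    "file": "D:/project/src/main.cpp"
  }
]
```

Such a tool is then invoked as e.g. `match-global-variable -p D:/project/build D:/project/src/main.cpp`, and the `-p` flag lets it resolve the same includes the real build uses, so scan-build shouldn't be necessary.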

Related

Creating fields in MongoDB

I would like to move my database from SQLite to MongoDB, since Mongo is schemaless. In the SQL database I had to create multiple rows for each attribute of the same SKU (a product), and n columns, since each attribute has different specifications. In Mongo I plan to create only one document (row) per SKU, under a single id. To achieve this I would like to create a field (column) for specifications such as html, pdf, description, etc. The last field is for the attributes, which have different values; I would like to store them as a hash (key-value pairs). Does it make sense to store all the attributes in a single field? Am I going in the right direction? Please suggest.
EDIT:
I want something like this.
My question is: in SQL I was creating columns for each attribute, like attribute 1 name/value and attribute 2 name/value, which keeps extending the row. Now I want to store all the attributes in hash format (as shown in the image), since MongoDB is schemaless. Is it possible? Does it make sense? Is there a better option?
Ultimately, how you store the data should be influenced by how you intend to access or update it, but one approach would be to create an embedded attributes object within each sku/product that holds all attributes for that sku/product. Based on your example:
{
  "sku_id" : 14628109,
  "product_name" : "TE Connectivity",
  "attributes" : {
    "Width" : [ "4", "mm" ],
    "Height" : [ "56", "cm" ],
    "Strain_Relief_Body_Orientation" : "straight",
    "Minimum_Operating_Temperature" : [ "40", "C" ]
  }
},
{
  "sku_id" : 14628110,
  "product_name" : "Tuning Holder",
  "attributes" : {
    "Width" : [ "7", "mm" ],
    "diameter" : [ "78", "cm" ],
    "Strain_Relief_Body_Orientation" : "straight",
    "Minimum_Operating_Temperature" : [ "4", "C" ]
  }
},
{
  "sku_id" : 14628111,
  "product_name" : "Facing Holder",
  "attributes" : {
    "size" : [ "56", "nos" ],
    "Height" : [ "89", "cm" ],
    "Strain_Relief_Body_Orientation" : "straight",
    "Minimum_Operating_Temperature" : [ "56", "C" ]
  }
}
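Because each document only carries the attributes it actually has, queries can reach into the embedded object by key (in MongoDB itself, e.g. `db.skus.find({"attributes.Height": {$exists: true}})`). A minimal sketch of this behaviour using plain JavaScript objects as a stand-in for the collection (field names follow the example above):

```javascript
// Illustrative in-memory stand-in for the SKU documents above.
const skus = [
  { sku_id: 14628109, attributes: { Height: ["56", "cm"], Minimum_Operating_Temperature: ["40", "C"] } },
  { sku_id: 14628110, attributes: { diameter: ["78", "cm"] } },
  { sku_id: 14628111, attributes: { Height: ["89", "cm"], size: ["56", "nos"] } }
];

// Each document carries only the attributes it actually has, so
// "find all SKUs that specify a Height" is a simple key check.
const withHeight = skus.filter(s => "Height" in s.attributes);
```

The trade-off of this layout is that attribute names become data, so there is no schema to enforce consistent spelling across SKUs; that is usually acceptable when attributes genuinely differ per product.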

Multiple Joins In CouchDB

I am currently trying to figure out if CouchDB is suitable for my use-case and if so, how. I have a situation similar to the following:
First set of documents (let's call them companies):
{
  "_id" : 1,
  "name" : "Foo"
}
{
  "_id" : 2,
  "name" : "Bar"
}
{
  "_id" : 3,
  "name" : "Baz"
}
Second set of documents (let's call them projects):
{
  "_id" : 4,
  "name" : "FooProject1",
  "company" : 1
}
{
  "_id" : 5,
  "name" : "FooProject2",
  "company" : 1
}
...
{
  "_id" : 100,
  "name" : "BazProject2",
  "company" : 3
}
Third set of documents (let's call them incidents):
{
  "_id" : "300",
  "project" : 4,
  "description" : "...",
  "cost" : 200
}
{
  "_id" : "301",
  "project" : 4,
  "description" : "...",
  "cost" : 400
}
{
  "_id" : "302",
  "project" : 4,
  "description" : "...",
  "cost" : 500
}
...
So in short, every company has multiple projects, and every project can have multiple incidents. One reason I describe how I model the data is that I come mainly from a SQL background, so the modelling may be completely unsuitable. The second reason is that I would like to add new incidents very easily, just by using the REST API provided by CouchDB, so the incidents have to be individual documents.
However, I now would like a view that allows me to calculate the total cost for each company. I can easily define a view, using map/reduce and linked documents, that gets me the total amount per project. But once I am at the project level, I cannot get any further up to the company level.
Is this possible at all with CouchDB? This kind of summarising sounds like a perfect use case for map/reduce. In SQL I would just do a three-table join, but it seems that in CouchDB the best I can get is a two-table join.
As mentioned, you cannot do joins in CouchDB, but this isn't a limitation; it's an invitation to think about your problem and approach it differently. The idiomatic way to do this in CouchDB is to define a data structure, called for example IncidentReference, composed of:
The project id
And the company id
That way your data would look like:
{
  "_id" : "301",
  "project" : 4,
  "description" : "...",
  "cost" : 400,
  "reference" : {
    "projectId" : 4,
    "companyId" : 1
  }
}
This is just fine. Once you have that, you can play with Map/Reduce to achieve whatever you want easily. Generally speaking, you need to think about the way you are going to query your data.
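Concretely, with the companyId denormalised onto each incident, a map function can emit the company as the key and the cost as the value, and the built-in `_sum` reduce then gives the total per company (query the view with `group=true`). A sketch, exercised here with a stub `emit` so it runs outside CouchDB; the document ids and reference values are illustrative:

```javascript
// Map function as it would appear in a CouchDB design document:
// key = companyId, value = cost of the incident.
function mapIncident(doc) {
  if (doc.cost !== undefined && doc.reference) {
    emit(doc.reference.companyId, doc.cost);
  }
}
// The reduce function would simply be the built-in "_sum".

// Stub harness simulating what the _sum reduce produces per key:
const totals = {};
function emit(key, value) { totals[key] = (totals[key] || 0) + value; }

[
  { _id: "300", cost: 200, reference: { projectId: 4, companyId: 1 } },
  { _id: "301", cost: 400, reference: { projectId: 4, companyId: 1 } },
  { _id: "302", cost: 500, reference: { projectId: 5, companyId: 2 } }
].forEach(mapIncident);
```

Emitting `[companyId, projectId]` as a compound key instead would let the same view answer both "total per company" (group_level=1) and "total per project" (group_level=2).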

CloudFormation Join in Tags

I'm running a CloudFormation template that uses the following snippet to tag various resources (this is an ELB tag, but others also exhibit this problem). I would expect this to produce a Name tag of stackName-asgElb, but it actually produces names such as olive-asg-asgElb-16GSCPHUFSWEN.
The stack name in this case was olive-asg, so I was expecting olive-asg-asgElb, without the -16GSCPHUFSWEN on the end.
Does anybody know where the seemingly random string on the end comes from?
CF template snippet:
Tags: [
  {
    Key: "Name",
    Value: {
      "Fn::Join": [
        "-",
        [
          {
            Ref: "AWS::StackName"
          },
          "asgElb"
        ]
      ]
    }
  }
]
That's interesting, I just tried it and I'm not able to reproduce the same results that you're seeing. It seems to be working as expected.
Here's the snippet I'm using in its entirety:
"ElasticLoadBalancer" : {
  "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties" : {
    "AvailabilityZones" : { "Fn::GetAZs" : "" },
    "CrossZone" : "true",
    "Listeners" : [ {
      "LoadBalancerPort" : "80",
      "InstancePort" : "80",
      "Protocol" : "HTTP"
    } ],
    "Tags" : [
      {
        "Key" : "Name",
        "Value" : { "Fn::Join" : [ "-", [ { "Ref" : "AWS::StackName" }, "MyELB" ] ] }
      }
    ]
  }
},
The one noticeable difference I see in yours is that you're missing some of the quotes around the Tag stanza.
I feel foolish: the Name tags are set correctly; I was looking at the physical IDs, not the Name tags. The docs explaining how to control physical IDs are here.
Thanks to @alanwill for testing, and for forcing me to go back through all the steps carefully!
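To make the distinction concrete: Fn::Join is deterministic; it just joins the resolved values with the delimiter, so the random suffix can only come from the auto-generated physical ID, never from the tag. A tiny model of what the template's intrinsic evaluates to (the stack name is taken from the question):

```javascript
// Minimal model of Fn::Join semantics: join the list with the
// delimiter after resolving references (AWS::StackName -> "olive-asg").
function fnJoin(delimiter, parts) {
  return parts.join(delimiter);
}

const nameTag = fnJoin("-", ["olive-asg", "asgElb"]);
// The Name *tag* is exactly this value; a suffix like -16GSCPHUFSWEN
// appears only in the resource's auto-generated physical ID.
```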

Is there a way to see what dates/times tickets changed Status via the JIRA API?

I am trying to run queries against the JIRA API and get results in which I can see the dates and times at which each issue went through a status change.
E.g.: Run a query to grab all issues with a certain assignee and see, along with the rest of the information, timestamps for when each issue changed from "Open" to "Resolved".
Is this possible?
EDIT: I have tried expanding the changelog, but while that tells me what status changes a ticket went through (e.g., that the particular ticket transitioned from "Open" to "Resolved" and then from "Resolved" to "Closed"), it doesn't tell me WHEN these transitions occurred.
It turns out that each of the transition objects showing the status changes has a "created" field containing the time and date the transition occurred, which I feel is a bit of a misnomer, but there it is. An example object inside the "histories" array of the expanded changelog:
{
  "author" : {
    "active" : true,
    "avatarUrls" : {
      "16x16" : "https://company.jira.com/secure/useravatar?size=xsmall&avatarId=10072",
      "24x24" : "https://company.jira.com/secure/useravatar?size=small&avatarId=10072",
      "32x32" : "https://company.jira.com/secure/useravatar?size=medium&avatarId=10072",
      "48x48" : "https://company.jira.com/secure/useravatar?avatarId=10072"
    },
    "displayName" : "First Last",
    "emailAddress" : "first.last@company.com",
    "name" : "first.last",
    "self" : "https://company.jira.com/rest/api/2/user?username=first.last"
  },
  "created" : "2013-04-17T16:21:13.540-0400",
  "id" : "24451",
  "items" : [
    {
      "field" : "status",
      "fieldtype" : "jira",
      "from" : "5",
      "fromString" : "Resolved",
      "to" : "6",
      "toString" : "Closed"
    },
    {
      "field" : "assignee",
      "fieldtype" : "jira",
      "from" : "old.assignee",
      "fromString" : "Old Assignee",
      "to" : "first.last",
      "toString" : "First Last"
    }
  ]
}
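Given the changelog returned by `GET /rest/api/2/issue/KEY?expand=changelog`, extracting the status transitions with their timestamps is then just a matter of filtering each history's items for field === "status" and pairing them with that history's created date. A sketch over the shape of the object shown above:

```javascript
// Extract {when, from, to} records for status changes
// from a changelog's "histories" array.
function statusChanges(histories) {
  const changes = [];
  for (const h of histories) {
    for (const item of h.items) {
      if (item.field === "status") {
        changes.push({ when: h.created, from: item.fromString, to: item.toString });
      }
    }
  }
  return changes;
}

// Trimmed-down copy of the example history object above:
const histories = [{
  created: "2013-04-17T16:21:13.540-0400",
  items: [
    { field: "status", fromString: "Resolved", toString: "Closed" },
    { field: "assignee", fromString: "Old Assignee", toString: "First Last" }
  ]
}];

const changes = statusChanges(histories);
```

Note that non-status items (like the assignee change above) are skipped, so each record is one status transition with its timestamp.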

Can neo4j find the shortest n paths by traversal with an index?

I read the REST API docs at http://docs.neo4j.org/chunked/snapshot/rest-api-traverse.html and checked my code. I can find the shortest n paths by traversal, and I can find nodes or relationships with an index. But my project has 300M nodes, and when I search for the shortest n paths by traversal, e.g. filtering on a name property containing 'hi', neo4j's filter method is really slow. I want to use an index (I created one!). My code looks like:
{
  "order" : "breadth_first",
  "return_filter" : {
    "body" : "position.endNode().getProperty('name').toLowerCase().contains('t')",
    "language" : "javascript"
  },
  "prune_evaluator" : {
    "body" : "position.length() > 10",
    "language" : "javascript"
  },
  "uniqueness" : "node_global",
  "relationships" : [ {
    "direction" : "all",
    "type" : "knows"
  }, {
    "direction" : "all",
    "type" : "loves"
  } ],
  "max_depth" : 3
}
What I want is something like:
{
  "order" : "breadth_first",
  "return_filter" : {
    "body" : "position.endNode().name:*hi*",
    "language" : "javascript"
  },
  "prune_evaluator" : {
    "body" : "position.length() > 10",
    "language" : "javascript"
  },
  "uniqueness" : "node_global",
  "relationships" : [ {
    "direction" : "all",
    "type" : "knows"
  }, {
    "direction" : "all",
    "type" : "loves"
  } ],
  "max_depth" : 3
}
Can someone help me?
Is it slow the first time, or are consecutive requests equally slow? Properties are loaded per node/relationship the first time any property is requested for that node/relationship, so maybe you're seeing that performance hit.
I think that using an index would help for nodes that haven't been loaded yet, but not otherwise. Doing this over REST could be tricky, in that you'd have to do the index lookup beforehand and pass that list into the evaluator, but that doesn't scale. Could you instead write a server extension for this?
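The "index lookup beforehand" idea can be sketched as follows: query the index first, collect the matching node ids, and bake them into the traversal's return filter before POSTing the payload. This is only a stopgap (as noted, it doesn't scale to large result sets), and the `position.endNode().getId()` call in the evaluator body is an assumption about the server-side scripting API, not taken from the question:

```javascript
// Build a traverse payload whose return filter only accepts nodes
// whose ids came back from a prior index lookup (ids are illustrative).
function buildTraversePayload(matchingIds) {
  return {
    order: "breadth_first",
    return_filter: {
      // The id set from the index query is inlined into the evaluator body.
      body: "[" + matchingIds.join(",") + "].indexOf(position.endNode().getId()) !== -1",
      language: "javascript"
    },
    prune_evaluator: { body: "position.length() > 10", language: "javascript" },
    uniqueness: "node_global",
    relationships: [
      { direction: "all", type: "knows" },
      { direction: "all", type: "loves" }
    ],
    max_depth: 3
  };
}

const payload = buildTraversePayload([12, 34, 56]);
```

A server extension avoids this entirely by doing the index lookup and the traversal in one round trip on the server.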
