I'm trying to access the keyword search terms report and query it by the "Add/Exclude" column, cost, etc.
I couldn't find it in the docs. Is there any way to get the report?
Thanks
Edit:
There is an existing option to save the search report and schedule it, so if there is also a way to access the saved reports section, that would be great as well.
Take a look at this solution.
You can pull the data from the search query report with an AWQL query. Here is the example used in the linked solution:
// The thresholds come from the linked solution; the values here are examples.
var IMPRESSIONS_THRESHOLD = 50;
var AVERAGE_CPC_THRESHOLD = 1.0;

var report = AdWordsApp.report(
  "SELECT Query,Clicks,Cost,Ctr,ConversionRate,CostPerConversion,Conversions,CampaignId,AdGroupId " +
  "FROM SEARCH_QUERY_PERFORMANCE_REPORT " +
  "WHERE Conversions > 0" +
  " AND Impressions > " + IMPRESSIONS_THRESHOLD +
  " AND AverageCpc > " + AVERAGE_CPC_THRESHOLD +
  " DURING LAST_7_DAYS");
I have a query and pattern which I used to run with different parameter values to identify and create nodes. I wanted to make running the query simpler, so I put it in a stored procedure, compiled the jar, and began my processing.
While it was easier to call the stored procedure and pass the parameters, execution was MUCH slower (around 10 times slower), and got progressively worse as I loaded more and more data into the graph. When I switched back to running the raw queries (and more copy/paste), my execution time dropped back down.
It feels as if the database is recompiling and/or replanning the query every time the stored procedure is called.
Is there a way to cache the query from the stored procedure?
As far as I can tell, my Cypher is identical inside and outside the stored procedure. The stored procedure runs, just very, very slowly compared to running the Cypher directly.
Below is my raw cypher query
with ['register'] as verbs match (e:Entity {type:'PRODUCT', graphId: $graphId})
USING INDEX e:Entity(graphId)
with e, verbs
match (e)-[:REFERS]->(eWord:Word {graphId:$graphId})<-[:OBJ|OBL|NMOD]-(vb:Word {graphId: $graphId})-[]->(notWord:Word {graphId: $graphId})
USING INDEX vb:Word(graphId)
USING INDEX notWord:Word(graphId)
where vb.lemma in verbs
create (event:Event {graphId: $graphId, type: 'registerFail'})
with event, e, vb, notWord
merge (event)-[:TRIGGER]->(vb)
merge (event)-[:TRIGGER]->(notWord)
merge (event)-[:RELATED_PRODUCT]->(e)
with event
match (event)-[:TRIGGER]->(word:Word {graphId: $graphId})-[:COMPOUND|COMPOUND_PRT]->(compWord:Word {graphId: $graphId})
USING INDEX word:Word(graphId)
USING INDEX compWord:Word(graphId)
merge (event)-[:TRIGGER]->(compWord);
Here is the code to my stored procedure
@Context
public Transaction tx; // injected by Neo4j; used by tx.execute below

@Procedure(name = "ie.createInabilityCypher", mode = Mode.WRITE)
public void createInabilityFromProduct(@Name("listOfVerbs") List<String> verbs, @Name("inabilityType") String inabilityType, @Name("graphId") String graphId) {
String cypherQuery = "" +
"with $verbsList as verbs " +
"match (e:Entity {type:'PRODUCT', graphId: $graphId}) " +
"USING INDEX e:Entity(graphId) " +
"with e, verbs " +
"match (e)-[:REFERS]->(eWord:Word {graphId:$graphId})<-[:OBJ|OBL|NMOD]-(vb:Word {graphId: '" + graphId +"'})-[]->(notWord:Word {graphId: $graphId}) " +
"USING INDEX vb:Word(graphId) " +
"USING INDEX notWord:Word(graphId) " +
"where vb.lemma in verbs " +
"create (event:Event {graphId: $graphId, type: $inabilityType}) " +
"with event, e, vb, notWord " +
"merge (event)-[:TRIGGER]->(vb) " +
"merge (event)-[:TRIGGER]->(notWord) " +
"merge (event)-[:RELATED_PRODUCT]->(e) " +
"with event " +
"match (event)-[:TRIGGER]->(word:Word {graphId: $graphId})-[:COMPOUND|COMPOUND_PRT]->(compWord:Word {graphId: $graphId}) " +
"USING INDEX word:Word(graphId) " +
"USING INDEX compWord:Word(graphId) " +
"merge (event)-[:TRIGGER]->(compWord)";
Map<String, Object> params = new HashMap<>();
params.put("graphId", graphId);
params.put("verbsList", verbs);
params.put("inabilityType", inabilityType);
tx.execute(cypherQuery, params);
}
You missed one place where the $graphId parameter should be used. That is why the Cypher code is being "recompiled" every time.
Try replacing this snippet:
(vb:Word {graphId: '" + graphId +"'})
with this:
(vb:Word {graphId: $graphId})
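To see why that one spot matters: Cypher caches execution plans keyed by the query text, so a value spliced into the string produces a brand-new text (and a fresh compile) on every call, while a $parameter keeps the text constant. A language-neutral sketch of the effect (in Ruby, with made-up query strings):

```ruby
# Plan caches key on the query text. Splicing a value into the string
# yields a different text per value; a $parameter keeps it constant.
def interpolated(graph_id)
  "MATCH (vb:Word {graphId: '#{graph_id}'}) RETURN vb"
end

PARAMETERIZED = "MATCH (vb:Word {graphId: $graphId}) RETURN vb"

interpolated_texts = %w[g1 g2 g3].map { |id| interpolated(id) }
puts interpolated_texts.uniq.size   # 3 distinct texts => 3 plan compilations
parameterized_texts = %w[g1 g2 g3].map { PARAMETERIZED }
puts parameterized_texts.uniq.size  # 1 text => 1 cached plan, reused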
We are planning to use PetaPoco in our project. It is a tiny ORM with performance benefits, but one thing that worries me a lot is hard-coded queries.
Because of that, whenever a column changes (is added/removed), we have trouble finding and updating it in all the queries.
Can we generate column names in a .tt (T4 template) file and use them in place of hard-coded strings, something like:
"Select * from " + Tables.Customer + " Where " + CustomerTable.CustomerId + " = 1"
I have a use case where I create a new relationship every time a user sees a photo like this:
var dateParams = new { Date = DateTime.Now.ToString() };
graphClient.Cypher
.Match("(user:User), (photo:Photo)")
.Where((UserEntity user) => user.Id == userId)
.AndWhere((PhotoEntity photo) => photo.Id == photoId)
.CreateUnique("user-[:USER_SEEN_PHOTO {params}]->photo")
.WithParam("params", dateParams)
.ExecuteWithoutResults();
With many concurrent users this will happen very often, so I need to be able to queue a number of write operations and execute them together at once. Unfortunately I haven't been able to find good info on how to do this efficiently with Neo4jClient, so all suggestions would be appreciated :)
--- UPDATE ---
So I tried different combinations, but I still haven't found anything that works. The query below gives me a "PatternException: Unbound pattern!":
var query = graphClient.Cypher;
for (int i = 0; i < seenPhotosList.Count; i++)
{
query = query.CreateUnique("(user" + i + ":User {Id : {userId" + i + "} })-[:USER_SEEN_PHOTO]->(photo" + i + ":Photo {Id : {photoId" + i + "} })")
.WithParam("userId" + i, seenPhotosList[i].UserId)
.WithParam("photoId" + i, seenPhotosList[i].PhotoId);
}
query.ExecuteWithoutResults();
I also tried changing CreateUnique to Merge. That query executes without an exception, but it creates new nodes instead of connecting the existing ones:
var query = graphClient.Cypher;
for (int i = 0; i < seenPhotosList.Count; i++)
{
query = query.Merge("(user" + i + ":User {Id : {userId" + i + "} })-[:USER_SEEN_PHOTO]->(photo" + i + ":Photo {Id : {photoId" + i + "} })")
.WithParam("userId" + i, seenPhotosList[i].UserId)
.WithParam("photoId" + i, seenPhotosList[i].PhotoId);
}
query.ExecuteWithoutResults();
I set up 5 types of relationships using Batch Insert. It runs extremely fast, but I'm not sure how you'd manage the interrupt in a multiuser environment. You need to know the node IDs in advance and then create a string for the API request that looks like this:
[{"method":"POST","to":"/node/222/relationships","id":222,"body":{"to":"26045","type":"mother"}},
{"method":"POST","to":"/node/291/relationships","id":291,"body":{"to":"26046","type":"mother"}},
{"method":"POST","to":"/node/389/relationships","id":389,"body":{"to":"26047","type":"mother"}},
{"method":"POST","to":"/node/1031/relationships","id":1031,"body":{"to":"1030","type":"wife"}},
{"method":"POST","to":"/node/1030/relationships","id":1030,"body":{"to":"1031","type":"husband"}},
{"method":"POST","to":"/node/1034/relationships","id":1034,"body":{"to":"26841","type":"father"}},
{"method":"POST","to":"/node/34980/relationships","id":34980,"body":{"to":"26042","type":"child"}}]
I also broke this down into reasonably sized iterative requests to avoid memory challenges, but the iterations for building the strings run very fast. Getting node IDs also required iteration, because Neo4j limits the number of nodes returned to 1000. This is a shortcoming of Neo4j that was designed around visualization concerns (who can study a picture with 10000 nodes?) rather than coding issues such as those we are discussing.
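For what it's worth, building that request string programmatically is straightforward. A sketch (in Ruby, with made-up node IDs, using the same field layout as the payload above):

```ruby
require "json"

# Each entry becomes one POST in the batch request; field names match
# the REST batch payload shown above. Node IDs here are made up.
relationships = [
  { from: 222,  to: 26045, type: "mother" },
  { from: 1031, to: 1030,  type: "wife"   },
]

batch = relationships.map do |r|
  {
    "method" => "POST",
    "to"     => "/node/#{r[:from]}/relationships",
    "id"     => r[:from],
    "body"   => { "to" => r[:to].to_s, "type" => r[:type] },
  }
end

payload = JSON.generate(batch)
puts payload
```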
I do not believe Neo4j/cypher has any built-in way of performing what you are asking for. What you could do is build something that does this for you with a queueing system. Here's a blog post on doing scalable writes in Ruby which is something that you could implement in your language to handle doing batch inserts/updates.
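A minimal in-process version of that queueing idea might look like the sketch below; the flush callback is where you would issue one batched Cypher/REST call instead of N single writes. The class name and batch size are made up for illustration.

```ruby
# Hypothetical batcher: collects write operations and flushes them in
# groups, so many concurrent "user saw photo" events become one bulk write.
class WriteBatcher
  def initialize(batch_size: 100, &flush)
    @batch_size = batch_size
    @flush = flush          # called with an array of queued operations
    @queue = Queue.new      # thread-safe FIFO from the Ruby stdlib
  end

  def enqueue(op)
    @queue << op
    flush! if @queue.size >= @batch_size
  end

  def flush!
    batch = []
    batch << @queue.pop until @queue.empty?
    @flush.call(batch) unless batch.empty?
  end
end

flushed = []
batcher = WriteBatcher.new(batch_size: 3) { |batch| flushed << batch }
5.times { |i| batcher.enqueue(user_id: i, photo_id: i * 10) }
batcher.flush!                 # drain whatever is left
puts flushed.map(&:size).inspect  # [3, 2]
```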
I'm using neo4j, storing a simple "content has-many tags" data structure.
I'd like to find out "what tags co-exist with what other tags the most?"
I've got around 500K content-to-tag relationships, so unfortunately that works out to 0.5M^2 possible coexist relationships, and then you need to count how many times each pairing occurs! Or do you? Am I doing this the long way?
It never seems to return, and my CPU is pegged out for quite some time now.
final ExecutionResult result = engine.execute(
"START metag=node(*)\n"
+ "MATCH metag<-[:HAS_TAG]-content-[:HAS_TAG]->othertag\n"
+ "WHERE metag.name>othertag.name\n"
+ "RETURN metag.name, othertag.name, count(content)\n"
+ "ORDER BY count(content) DESC");
for (Map<String, Object> row : result) {
System.out.println(row.get("metag.name") + "\t" + row.get("othertag.name") + "\t" + row.get("count(content)"));
}
You should try to decrease your bound points to make the traversal faster. I assume your graph will always have more tags than content so you should make the content your bound points. Something like
start
content = node:node_auto_index(' type:"CONTENT" ')
match
metatag<-[:HAS_TAG]-content-[:HAS_TAG]->othertag
where
metatag<>othertag
return
metatag.name, othertag.name, count(content)
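The key point is that the work only grows with the number of tag pairs per content item, not with all 0.5M^2 tag pairs. A toy sketch of the same counting in plain Ruby (made-up data):

```ruby
# For each content item, count each unordered pair of its tags once.
# This is what the MATCH + count(content) aggregation computes.
content_tags = {
  "post1" => %w[graph neo4j ruby],
  "post2" => %w[neo4j ruby],
  "post3" => %w[graph neo4j],
}

pair_counts = Hash.new(0)
content_tags.each_value do |tags|
  tags.sort.combination(2) { |pair| pair_counts[pair] += 1 }
end

pair_counts.sort_by { |_, n| -n }.each do |(a, b), n|
  puts "#{a}\t#{b}\t#{n}"
end
```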
The actual problem I was originally working on was converting a hash of arrays into an array of hashes. If anyone has any comments on doing the conversion, that's fine, but the actual question I have is why the order of the hash keys changes after editing them.
I'm fully aware of this question, but this is not a duplicate. In fact, I'm just having specific trouble with the order they are coming out in.
I have one array and one hash.
The array (@headers) contains a list of keys. @contents is a Hash filled with arrays. As explained, my task is to get an Array of hashes. So here's my code, pretty straightforward.
@headers = params[:headers]
puts "ORIGINAL PARAMS"
puts YAML::dump(@headers)
@contentsArray = [] # The purpose of this is to contain hashes of each object
@contents.each_with_index do |page, contentIndex|
  @currentPage = Hash.new
  @headers.each_with_index do |key, headerIndex|
    @currentPage[key] = "testing"
  end
  puts YAML::dump(@currentPage)
  @contentsArray[contentIndex] = @currentPage
end
puts "UPDATED CONTENTS"
puts YAML::dump(@contentsArray[0])
Here's the bit I can't wrap my head around: the keys of the original params are in a different order than in the updated output.
Note the puts "ORIGINAL PARAMS" and puts "UPDATED CONTENTS" parts. This is their output:
ORIGINAL PARAMS
---
- " Page Title "
- " WWW "
- " Description "
- " Keywords "
- " Internal Links "
- " External Links "
- " Content files "
- " Notes "
---
UPDATED CONTENTS
---
" WWW ": page
" Internal Links ": testing
" External Links ": testing
" Description ": testing
" Notes ": testing
" Content files ": testing
" Page Title ": testing
" Keywords ": testing
Why is this?
For the record, printing @currentPage after the header loop gives this:
" WWW ": page
" Internal Links ": page
" External Links ": page
" Description ": page
" Notes ": page
" Content files ": page
" Page Title ": page
" Keywords ": page
So it must be the way the values and keys are assigned to @currentPage, and not when it goes into the array.
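As for the conversion itself, it can be written without explicit index bookkeeping. A sketch, assuming (made-up data below) that the hash maps each header to a parallel column of per-page values:

```ruby
# Made-up data: each header maps to a parallel array of per-page values.
columns = {
  "Page Title" => ["Home", "About"],
  "WWW"        => ["a.com", "b.com"],
}

# transpose turns columns into rows; zip pairs each row with the headers.
rows = columns.values.transpose.map { |row| columns.keys.zip(row).to_h }
# rows[0] is { "Page Title" => "Home", "WWW" => "a.com" }
```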
In Ruby 1.8, Hashes are UNORDERED:
The order in which you traverse a hash by either key or value may seem arbitrary, and will generally not be in the insertion order.
In Ruby 1.9+, they keep the order in which you insert items:
Hashes enumerate their values in the order that the corresponding keys were inserted.
http://apidock.com/ruby/v1_9_2_180/Hash
This is because Ruby 1.8's Hash type uses a hash table data structure internally, and hash tables do not keep track of the order of their elements.
Ruby 1.9's Hash uses a linked hash table, which does keep track of the order of its elements.
So in Ruby 1.8 hashes are unordered, and in Ruby 1.9 they preserve insertion order.
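A quick demonstration of the 1.9+ behaviour (any modern Ruby):

```ruby
h = {}
h["b"] = 1
h["a"] = 2
h["c"] = 3
puts h.keys.inspect  # ["b", "a", "c"], insertion order, not sorted

h.delete("a")
h["a"] = 4
puts h.keys.inspect  # ["b", "c", "a"], re-inserted keys go to the end
```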