I want to find all removed ad groups and campaigns. My query is as follows:
var report = AdWordsApp.report(
'SELECT Query, Clicks, Cost, Ctr, ConversionRate,' +
' CampaignStatus, AdGroupStatus, CostPerConversion, Conversions, CampaignId, CampaignName, AdGroupId, AdGroupName' +
' FROM SEARCH_QUERY_PERFORMANCE_REPORT' +
' WHERE ' +
'Impressions > 2' +
'AND AdGroupStatus = "removed"' +
' DURING LAST_7_DAYS', REPORTING_OPTIONS);
When I comment out the line 'AND AdGroupStatus = "removed"' + the query works.
I also tried 'AND AdGroupStatus CONTAINS "removed"' + without success.
Can somebody help me with this?
According to the documentation, I should be able to filter on this field: https://developers.google.com/adwords/api/docs/appendix/reports/search-query-performance-report
I believe you have to use the status in capital letters, so you should write AND AdGroupStatus = "REMOVED" (with =, not CONTAINS).
https://developers.google.com/adwords/api/docs/guides/awql
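Putting the answer together, here is a minimal sketch of the corrected query string (shown as a plain Python string for illustration; the surrounding AdWordsApp.report call stays as in the question). Note the uppercase REMOVED and the explicit spaces before AND:

```python
# Sketch of the corrected AWQL string for the
# SEARCH_QUERY_PERFORMANCE_REPORT; predicate enum values
# such as "REMOVED" must be uppercase.
query = (
    'SELECT Query, Clicks, Cost, Ctr, ConversionRate,'
    ' CampaignStatus, AdGroupStatus, CostPerConversion, Conversions,'
    ' CampaignId, CampaignName, AdGroupId, AdGroupName'
    ' FROM SEARCH_QUERY_PERFORMANCE_REPORT'
    ' WHERE Impressions > 2'
    ' AND AdGroupStatus = "REMOVED"'
    ' DURING LAST_7_DAYS'
)

print(query)
```

Pasting this string (with the pieces re-split across lines) back into the AdWordsApp.report call keeps each fragment starting with a space, which also avoids the "2AND" concatenation glitch in the original.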
I'm trying to implement an event time temporal join in Flink. Here's the first join table:
tEnv.executeSql("CREATE TABLE AggregatedTrafficData_Kafka (" +
"`timestamp` TIMESTAMP_LTZ(3)," +
"`area` STRING," +
"`networkEdge` STRING," +
"`vehiclesNumber` BIGINT," +
"`averageSpeed` INTEGER," +
"WATERMARK FOR `timestamp` AS `timestamp`" +
") WITH (" +
"'connector' = 'kafka'," +
"'topic' = 'seneca.trafficdata.aggregated'," +
"'properties.bootstrap.servers' = 'localhost:9092'," +
"'properties.group.id' = 'traffic-data-aggregation-job'," +
"'format' = 'json'," +
"'json.timestamp-format.standard' = 'ISO-8601'" +
")");
The table is used as a sink for the following query:
Table aggregatedTrafficData = trafficData
.window(Slide.over(lit(30).seconds())
.every(lit(15).seconds())
.on($("timestamp"))
.as("w"))
.groupBy($("w"), $("networkEdge"), $("area"))
.select(
$("w").end().as("timestamp"),
$("area"),
$("networkEdge"),
$("plate").count().as("vehiclesNumber"),
$("speed").avg().as("averageSpeed")
);
Here's the other join table. I use Debezium to stream a Postgres table into Kafka:
tEnv.executeSql("CREATE TABLE TransportNetworkEdge_Kafka (" +
"`timestamp` TIMESTAMP_LTZ(3) METADATA FROM 'value.source.timestamp' VIRTUAL," +
"`urn` STRING," +
"`flow_rate` INTEGER," +
"PRIMARY KEY(`urn`) NOT ENFORCED," +
"WATERMARK FOR `timestamp` AS `timestamp`" +
") WITH (" +
"'connector' = 'kafka'," +
"'topic' = 'seneca.network.transport_network_edge'," +
"'scan.startup.mode' = 'latest-offset'," +
"'properties.bootstrap.servers' = 'localhost:9092'," +
"'properties.group.id' = 'traffic-data-aggregation-job'," +
"'format' = 'debezium-json'," +
"'debezium-json.schema-include' = 'true'" +
")");
Finally here's the temporal join:
Table transportNetworkCongestion = tEnv.sqlQuery("SELECT AggregatedTrafficData_Kafka.`timestamp`, `networkEdge`, " +
"congestion(`vehiclesNumber`, `flow_rate`) AS `congestion` FROM AggregatedTrafficData_Kafka " +
"JOIN TransportNetworkEdge_Kafka FOR SYSTEM_TIME AS OF AggregatedTrafficData_Kafka.`timestamp` " +
"ON AggregatedTrafficData_Kafka.`networkEdge` = TransportNetworkEdge_Kafka.`urn`");
The problem I'm having is that the join works only for the first few seconds (after an update in the Postgres table), but I need to continuously join the first table with the Debezium one. Am I doing something wrong?
Thanks
euks
Temporal joins using the AS OF syntax you're using require:
an append-only table with a valid event-time attribute
an updating table with a primary key and a valid event-time attribute
an equality predicate on the primary key
When Flink SQL's temporal operators are applied to event time streams, watermarks play a critical role in determining when results are produced, and when the state is cleared.
When performing a temporal join:
rows from the append-only table are buffered in Flink state until the current watermark of the join operator reaches their timestamps
for the versioned table, for each key the latest version whose timestamp precedes the join operator's current watermark is kept in state, plus any versions from after the current watermark
whenever the join operator's watermark advances, new results are produced, and state that's no longer relevant is cleared
The join operator tracks the watermarks it receives from its input channels, and its current watermark is always the minimum of these two watermarks. This is why your join stalls, and only makes progress when the flow_rate is updated.
One way to fix this would be to set the watermark for the TransportNetworkEdge_Kafka table like this:
"WATERMARK FOR `timestamp` AS " + Watermark.MAX_WATERMARK
This will set the watermark for this table/stream to the largest possible value, which will have the effect of making the watermarks from this stream irrelevant -- this stream's watermarks will never be the smallest.
This will, however, have the drawback of making the join results non-deterministic.
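The min-of-inputs rule described above can be illustrated with a tiny simulation (plain Python, not Flink; the timestamps are made up):

```python
def operator_watermark(left_wm, right_wm):
    """A two-input join operator's watermark is the minimum of the
    watermarks received on its two input channels."""
    return min(left_wm, right_wm)

# The append-only traffic stream advances steadily (timestamps in
# seconds); the Debezium-backed stream only emits a new watermark
# when a row in the Postgres table is updated.
traffic_wms = [15, 30, 45, 60]
debezium_wm = 15  # last update to the Postgres table

# The join's watermark stays stuck at 15 no matter how far the
# traffic stream advances, so no new results are emitted.
wms = [operator_watermark(t, debezium_wm) for t in traffic_wms]
```

Setting the versioned table's watermark to the maximum possible value makes the second argument effectively infinite, so the minimum always tracks the traffic stream.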
I have a table that contains run numbers. It is linked to a second table that contains the serial numbers of the panels that go through each run. Is it possible to design a query that gives, for each run number, its serial numbers?
I would like it in a table like:
Run Number1, First Serial Number for 1, Second Serial Number for 1, etc.
Run Number2, First Serial Number for 2, Second Serial Number for 2, etc.
I can get it in the form:
Run Number1, First Serial Number for 1
Run Number1, Second Serial Number for 1
Run Number2, First Serial Number for 2
Run Number2, Second Serial Number for 2
Is there a way to set this up?
You can use my DJoin function as this will accept SQL as the source, thus you won't need additional saved queries:
' Returns the joined (concatenated) values from a field of records having the same key.
' The joined values are stored in a collection which speeds up browsing a query or form
' as all joined values will be retrieved once only from the table or query.
' Null values and zero-length strings are ignored.
'
' If no values are found, Null is returned.
'
' The default separator of the joined values is a space.
' Optionally, any other separator can be specified.
'
' Syntax is held close to that of the native domain functions, DLookup, DCount, etc.
'
' Typical usage in a select query using a table (or query) as source:
'
' Select
' KeyField,
' DJoin("[ValueField]", "[Table]", "[KeyField] = " & [KeyField] & "") As Values
' From
' Table
' Group By
' KeyField
'
' The source can also be an SQL Select string:
'
' Select
' KeyField,
' DJoin("[ValueField]", "Select ValueField From SomeTable Order By SomeField", "[KeyField] = " & [KeyField] & "") As Values
' From
' Table
' Group By
' KeyField
'
' To clear the collection (cache), call DJoin with no arguments:
'
' DJoin
'
' Requires:
' CollectValues
'
' 2019-06-24, Cactus Data ApS, Gustav Brock
'
Public Function DJoin( _
Optional ByVal Expression As String, _
Optional ByVal Domain As String, _
Optional ByVal Criteria As String, _
Optional ByVal Delimiter As String = " ") _
As Variant
' Expected error codes to accept.
Const CannotAddKey As Long = 457
Const CannotReadKey As Long = 5
' SQL.
Const SqlMask As String = "Select {0} From {1} {2}"
Const SqlLead As String = "Select "
Const SubMask As String = "({0}) As T"
Const FilterMask As String = "Where {0}"
Static Values As New Collection
Dim Records As DAO.Recordset
Dim Sql As String
Dim SqlSub As String
Dim Filter As String
Dim Result As Variant
On Error GoTo Err_DJoin
If Expression = "" Then
' Erase the collection of keys.
Set Values = Nothing
Result = Null
Else
' Get the values.
' This will fail if the current criteria hasn't been added
' leaving Result empty.
Result = Values.Item(Criteria)
'
If IsEmpty(Result) Then
' The current criteria hasn't been added to the collection.
' Build SQL to lookup values.
If InStr(1, LTrim(Domain), SqlLead, vbTextCompare) = 1 Then
' Domain is an SQL expression.
SqlSub = Replace(SubMask, "{0}", Domain)
Else
' Domain is a table or query name.
SqlSub = Domain
End If
If Trim(Criteria) <> "" Then
' Build Where clause.
Filter = Replace(FilterMask, "{0}", Criteria)
End If
' Build final SQL.
Sql = Replace(Replace(Replace(SqlMask, "{0}", Expression), "{1}", SqlSub), "{2}", Filter)
' Look up the values to join.
Set Records = CurrentDb.OpenRecordset(Sql, dbOpenSnapshot)
CollectValues Records, Delimiter, Result
' Add the key and its joined values to the collection.
Values.Add Result, Criteria
End If
End If
' Return the joined values (or Null if none was found).
DJoin = Result
Exit_DJoin:
Exit Function
Err_DJoin:
Select Case Err
Case CannotAddKey
' Key is present, thus cannot be added again.
Resume Next
Case CannotReadKey
' Key is not present, thus cannot be read.
Resume Next
Case Else
' Some other error. Ignore.
Resume Exit_DJoin
End Select
End Function
' To be called from DJoin.
'
' Joins the content of the first field of a recordset to one string
' with a space as delimiter or an optional delimiter, returned by
' reference in parameter Result.
'
' 2019-06-11, Cactus Data ApS, Gustav Brock
'
Private Sub CollectValues( _
ByRef Records As DAO.Recordset, _
ByVal Delimiter As String, _
ByRef Result As Variant)
Dim SubRecords As DAO.Recordset
Dim Value As Variant
If Records.RecordCount > 0 Then
While Not Records.EOF
Value = Records.Fields(0).Value
If Records.Fields(0).IsComplex Then
' Multi-value field (or attachment field).
Set SubRecords = Records.Fields(0).Value
CollectValues SubRecords, Delimiter, Result
ElseIf Nz(Value) = "" Then
' Ignore Null values and zero-length strings.
ElseIf IsEmpty(Result) Then
' First value found.
Result = Value
Else
' Join subsequent values.
Result = Result & Delimiter & Value
End If
Records.MoveNext
Wend
Else
' No records found with the current criteria.
Result = Null
End If
Records.Close
End Sub
Full documentation can be found in my article:
Join (concat) values from one field from a table or query
Code is also on GitHub: VBA.DJoin
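For readers outside Access, the grouping-and-joining that DJoin performs can be sketched in plain Python (the run/serial data below is hypothetical):

```python
from collections import defaultdict

def join_by_key(rows, delimiter=", "):
    """Group the second column by the first and join the values into
    one string per key, mirroring what DJoin does for each criteria.
    Nulls (None) and zero-length strings are ignored, as in DJoin."""
    groups = defaultdict(list)
    for key, value in rows:
        if value not in (None, ""):
            groups[key].append(str(value))
    return {key: delimiter.join(vals) for key, vals in groups.items()}

# Hypothetical run-number / serial-number rows.
rows = [(1, "SN-001"), (1, "SN-002"), (2, "SN-101"), (2, None)]
result = join_by_key(rows)
# result: {1: "SN-001, SN-002", 2: "SN-101"}
```

DJoin additionally caches each joined string in a collection keyed by the criteria, so browsing a query or form retrieves each group only once; the sketch above omits that caching for brevity.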
I want to fetch some results from a table, and for that I have written a stored procedure as shown below.
create proc GetData
(
@tableName nvarchar(max),
@groupLetter nvarchar(max)
)
as
begin
EXEC('Select * from ' + @tableName + 'where LastName LIKE'''+'%'+@groupLetter+'%'+'''ORDER BY LastName')
end
I am passing the table name and the search text to this SP.
The SP is created successfully, but it gives an error when executed.
This is how I execute the SP:
EXEC GetData Employees,ab
and I am getting the error below:
Incorrect syntax near the keyword 'LIKE'.
It's just a syntax error in your string (not enough spaces). It should be:
begin
EXEC('Select * from ' + @tableName + ' where LastName LIKE '''+'%'+@groupLetter+'%'+''' ORDER BY LastName')
end
Your code should look like this:
declare @sql nvarchar(max) = N'Select * from ' + @tableName + N' where LastName LIKE '''+N'%'+@groupLetter+N'%'+N''' ORDER BY LastName'
EXEC(#sql);
The first error was the missing spaces between the keywords of your query. Additionally, EXEC(...) does not accept arbitrary expressions (such as function calls) as its argument, so it is more robust to construct the query in a variable first and then pass that variable to EXEC.
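Printing what the two concatenations actually produce makes the problem obvious. A small sketch in Python (the table name and search text are just examples):

```python
table, letter = "Employees", "ab"

# The original concatenation: no space before 'where', none after
# 'LIKE', none before 'ORDER BY'.
broken = ("Select * from " + table + "where LastName LIKE'"
          + "%" + letter + "%" + "'ORDER BY LastName")

# The fixed concatenation: a space on each side of every keyword.
fixed = ("Select * from " + table + " where LastName LIKE '"
         + "%" + letter + "%" + "' ORDER BY LastName")

print(broken)  # note 'Employeeswhere' run together
print(fixed)
```

The broken string contains "Employeeswhere", which is why SQL Server reports a syntax error near LIKE rather than near the table name.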
Can I do a JOIN of two queries with YQL? I have two queries:
select *
from yahoo.finance.historicaldata
where symbol in ('YHOO')
and startDate='" + startDate + "'
and endDate='" + endDate + "'&format=json&diagnostics=true&env=store://datatables.org/alltableswithkeys&callback="
and
select symbol,
Earnings_per_Share,
Dividend_Yield,
week_Low,
Week_High,
Last_Trade_Date,
open,
low,
high,
volume,
Last_Trade
from csv where url="http://download.finance.yahoo.com/d/quotes.csv?s=YHOO,GOOG&f=seyjkd1oghvl1&e=.csv"
and columns="symbol,Earnings_per_Share,Dividend_Yield,Last_Trade_Date,week_Low,Week_High,open,low,high,volume,Last_Trade"
I need to combine these two queries into one. How can I do this?
I'm using Delphi XE2 with AnyDAC components and Advantage Database 10.
In my code I'm using parameterized queries like this:
q.SQL.Text := 'SELECT * FROM Table1 ' +
'LEFT JOIN Table2 ON (Table1.ID = Table1.ID_Table2) ' +
'WHERE ' +
':0 BETWEEN Table1.StartAm AND Table1.EndeAm ' +
'AND Table2 = :1';
q.Params[0].Value := AStartDateTime;
q.Params[1].Value := AIDRessourcenGruppe;
q.Open;
This ends up in an exception (the message wrapper was originally in German):
An exception of class EADSNativeException occurred with the message
'[AnyDAC][Phys][ADS] Error 7200: AQE Error: State = 22018;
NativeError = 2112; [iAnywhere Solutions][Advantage SQL
Engine]Assignment error'.
Of course, AStartDateTime is a valid Delphi TDateTime value and AIDRessourcenGruppe is an Integer value.
Interestingly, these two variants work:
q.SQL.Text := 'SELECT * FROM Table1 ' +
'LEFT JOIN Table2 ON (Table1.ID = Table1.ID_Table2) ' +
'WHERE ' +
':0 BETWEEN Table1.StartAm AND Table1.EndeAm ' +
'AND Table2 = :1';
q.Params[0].AsDateTime:= AStartDateTime;
q.Params[1].AsInteger:= AIDRessourcenGruppe;
q.Open;
-
q.SQL.Text := 'SELECT * FROM Table1 ' +
'LEFT JOIN Table2 ON (Table1.ID = Table1.ID_Table2) ' +
'WHERE ' +
':SomeDate BETWEEN Table1.StartAm AND Table1.EndeAm ' +
'AND Table2 = :ID_PT_Ressourcengruppe';
q.ParamByName('SomeDate').Value := AStartDateTime;
q.ParamByName('ID_PT_Ressourcengruppe').Value := AIDRessourcenGruppe;
q.Open;
Am I missing something? Thanks for any help!
Answer - just a guess:
I would say the indexed write access to the Value property of the parameters doesn't determine the data type of the parameters, whilst the named access does. If that's correct, then you're in trouble, because you're passing all the values through the Variant type, which must be converted to a proper value format before the query is performed. But all of that is just my guess - I don't know AnyDAC at all!
AsType value access:
I'm posting this just because I don't like to see Value access to parameters called professional :-)
It's better to use AsType typecasts, at least because:
it is faster, because you say directly what type you're passing to a certain parameter, so the query parameter engine doesn't need to determine this, and, in comparison with Value access, it doesn't need to convert from the Variant type
it's safer, because you can't pass e.g. a string value to an AsDateTime-accessed parameter, so you have extra protection against parameter mismatches
What I commend in your example is the use of indexed access to parameters, instead of the commonly used named access, which needs to search the parameter list before each access and is therefore slower.
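The indexed-versus-named distinction is not specific to AnyDAC. As a rough sketch using Python's sqlite3 instead of Delphi (the table and column names are invented), both styles bind the same values; named access simply pays for a name lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (start_am TEXT, id_group INTEGER)")
conn.execute("INSERT INTO t VALUES ('2024-01-01 10:00', 7)")

# Positional (indexed) parameters: matched purely by order,
# no name lookup in the parameter list.
by_index = conn.execute(
    "SELECT * FROM t WHERE start_am = ? AND id_group = ?",
    ("2024-01-01 10:00", 7),
).fetchall()

# Named parameters: matched by name, at the cost of a lookup,
# but more readable and order-independent.
by_name = conn.execute(
    "SELECT * FROM t WHERE start_am = :start AND id_group = :grp",
    {"start": "2024-01-01 10:00", "grp": 7},
).fetchall()
```

In both cases the driver receives already-typed host values, which is the same reason the AsDateTime/AsInteger accessors in the Delphi example avoid the Variant conversion that the plain Value property goes through.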