Value of 'Cost per Unit' in the extracted report from the Ad Manager API differs from that in the GAM UI. Why, and how is the conversion done? - google-ads-api

Suppose the value of a line item's Cost per Unit is $45.45 in the report we get from the Google Ad Manager 360 UI. The same report, when extracted through a Java application using the Ad Manager API, gives the value '45454000'. Why is this conversion done? Is it specific to any API version? I'm using v202111 in my application.

Google Ads and Google Ad Manager (not sure about Campaign Manager and DV360) internally use "micros", i.e. a millionth of the base unit of a given currency, for monetary values.
Presumably this is done so they can use integers and don't have to deal with fixed-point math. You'll just have to divide all the values reported by the API by 1,000,000. In your example, 45454000 micros is $45.454, which the UI displays rounded to $45.45.
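In a Java application the conversion back to currency units is a single division; a minimal sketch (the variable names and sample value are just illustrations):

import java.math.BigDecimal;

public class MicrosDemo {
    public static void main(String[] args) {
        long costPerUnitMicros = 45_454_000L;  // raw value as reported by the API
        // shifting the decimal point six places left is the same as dividing by 1,000,000
        BigDecimal costPerUnit = BigDecimal.valueOf(costPerUnitMicros).movePointLeft(6);
        System.out.println(costPerUnit);       // prints 45.454; the UI rounds to $45.45
    }
}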

Related

Google geocoding API Inner Workings

I'm currently working with some large datasets that include some location-based information but lack the direct latitude and longitude measurements I need in order to create visualizations.
In order to resolve this problem, I've been using geocoding APIs that require addresses or address-like information as input and provide latitude and longitude information as output.
I started by using the Nominatim API. Unfortunately, due to the nature of the address-like data that I have, many of my queries failed, so I started using the Google geocoding API. The Google API provided me with a significantly higher success rate, but it is a paid API, which is not ideal.
I realize that, given the incredible resources Google has, it would be virtually impossible to build a system that rivals their geocoding API within a reasonable amount of time, but it makes me wonder what's going on under the hood.
Is a BERT-like translational system at work? What happens to the text after it's sent off?
I'm using n-grams for a similar purpose, building an index and an inverted index. See the ngram package:
import csv
import ngram

ind = {}  # per-country n-gram index
inv = {}  # per-country inverted index: normalized address -> (coord, original address)

filename = 'France.csv'  # one CSV per country; rows are lat;lon;address fields
country = filename.replace('.csv', '')
ind[country] = ngram.NGram()
inv[country] = {}
with open(filename, newline='') as stream:
    s_csv = csv.reader(stream, delimiter=';')
    next(s_csv)  # skip the header row
    for row in s_csv:
        coord = tuple(map(float, row[0:2]))  # (latitude, longitude)
        address = ' '.join(row[2:])          # original address string
        ad = address.lower()                 # normalized lookup key
        ind[country].add(ad)
        inv[country][ad] = (coord, address)
Then you can use the find function to look up the best match for a query address.
Watch out for the memory consumption: roughly 16 GB of RAM for a country like France with OSM data.
To see an implementation of this approach, check the OpenGeoCode HTTP API Service source code.

How to access the "key" in Combine.PerKey in Beam

In How to create custom Combine.PerKey in beam sdk 2.0, I asked and got a correct answer on how to create a custom Combine.PerKey in the new Beam SDK 2.0. However, I now need to create a custom Combine.PerKey such that, within my custom combine logic, I can access the contents of the key. This was easily possible in Dataflow 1.x, but in the new Beam SDK 2.0 I'm unsure how to do so. Any little code snippet/example would be extremely useful.
EDIT #1 (per Ben Chambers's request)
The real use case is hard to explain, but I'm going to try:
We have a 3d space composed of millions of little hills. We try to determine the apex of these millions of hills as follows: we create billions of "rectangular probes" for the whole 3d space, and then we ask each of these billions of probes to "move" in a greedy way to the apex. Once it hits the apex, it stops. The probe then returns the apex and itself. The apex is the KEY for which we'll do a custom combine by key.
Now, the custom combine function is going to finally return a final object (called a feature) which is derived from all the probes that reach the same apex (i.e. the same key). When generating this "feature" object, we need to know information about the final apex/key (i.e. the top of the hill). Hence, I need this key info.
One way to solve this is using a GroupByKey, but that was slow (at least in Dataflow 1.x); we got it to be fast (in Dataflow 1.x) using a custom combine fn. So, we'd like the key. That said, GroupByKey works in Beam SDK 2.0.
Alternatively, we could stick the "apex" information into the "probe" objects themselves, but this means that each of our billions of probe objects now needs to be tripled in size just to hold this apex information (and this apex information repeats itself, since there are only, say, 1 million apexes but 1 billion probes), so this intuitively feels highly inefficient.
Rather than relying on the CombineFn to compute the entire result, could you instead have the CombineFn compute some partial result based only on information about the probes? Then your Combine.perKey(...) returns a PCollection<KV<Apex, InfoAboutProbes>> and you can use a ParDo to combine the information about the apex with the summary information about the probes. This allows you to use the CombineFn to efficiently combine information about many probes, while using a ParDo to access the key.
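A sketch of that pattern in the Beam Java SDK; Apex, Probe, ProbeSummary, Feature and SummarizeProbesFn are illustrative stand-ins for your own types and combine logic:

import org.apache.beam.sdk.transforms.Combine;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

// one element per probe, keyed by the apex it reached (produced upstream)
PCollection<KV<Apex, Probe>> probesByApex = ...;

// the CombineFn summarizes the probes for each key; it never sees the key itself
PCollection<KV<Apex, ProbeSummary>> summarized =
    probesByApex.apply(Combine.perKey(new SummarizeProbesFn()));

// the ParDo runs once per key/summary pair, and here the key IS accessible
PCollection<Feature> features = summarized.apply(
    ParDo.of(new DoFn<KV<Apex, ProbeSummary>, Feature>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        Apex apex = c.element().getKey();              // the apex/key info you need
        ProbeSummary summary = c.element().getValue(); // combined probe information
        c.output(new Feature(apex, summary));          // hypothetical constructor
      }
    }));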

Google IMA VAST Tracking Macros

I am currently in the process of writing a Google IMA-like implementation, because IMA is not supported on various platforms I am working with.
When IMA sends tracking events, it replaces certain macros in URLs. See the [XXXXX] values in the URL below.
https://video-ad-stats.googlesyndication.com/video/client_events?event=3&web_property=ca-pub-2226646527662415&cpn=[CPN]&break_type=[BREAK_TYPE]&slot_pos=[SLOT_POS]&ad_id=[AD_ID]&ad_sys=[AD_SYS]&ad_len=[AD_LEN]&p_w=[P_W]&p_h=[P_H]&mt=[MT]&rwt=[RWT]&wt=[WT]&sdkv=[SDKV]&vol=[VOL]&content_v=[CONTENT_V]&conn=[CONN]&format=[FORMAT_NAMESPACE]_[FORMAT_TYPE]_[FORMAT_SUBTYPE]
Because I cannot use IMA, I have to replace these macros myself. The Google IMA website did not give any clues as to what these values should be. Some I have inferred from looking at the URLs IMA creates, but some are still missing. See the full list below.
CPN:
BREAK_TYPE: The type of ad break (1 for preroll)
SLOT_POS: Sequence number of the advertisement
AD_ID: Advertisement ID, inline-advertisement ID
AD_SYS: Advertisement system, inline-advertisement system
AD_LEN: Ad length in milliseconds
P_W: Video player width (pixels)
P_H: Video player height (pixels)
MT:
RWT:
WT:
SDKV: SDK Version (so our own version?)
VOL: Sound Volume (not 100% sure)
CONTENT_V:
CONN:
FORMAT_NAMESPACE:
FORMAT_TYPE:
FORMAT_SUBTYPE:
What are the other macro values being used by Google IMA?
Additions
RWT seems to be a succession of four timestamps; which timestamps, I'm not sure. (Unix epoch format)
WT is one single timestamp. (Unix epoch format)
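The substitution itself is plain string replacement once you have values for the macros. A minimal sketch using the inferred meanings above (the template URL and all values are examples only, not confirmed IMA semantics):

import java.util.LinkedHashMap;
import java.util.Map;

public class MacroExpander {
    public static void main(String[] args) {
        String template = "https://example.invalid/track?break_type=[BREAK_TYPE]"
                + "&slot_pos=[SLOT_POS]&ad_len=[AD_LEN]&p_w=[P_W]&p_h=[P_H]";
        Map<String, String> macros = new LinkedHashMap<>();
        macros.put("[BREAK_TYPE]", "1");      // preroll, per the list above
        macros.put("[SLOT_POS]", "0");        // first ad in the break
        macros.put("[AD_LEN]", "30000");      // 30-second ad in milliseconds
        macros.put("[P_W]", "1280");          // player width in pixels
        macros.put("[P_H]", "720");           // player height in pixels
        String url = template;
        for (Map.Entry<String, String> e : macros.entrySet()) {
            url = url.replace(e.getKey(), e.getValue());  // literal replacement
        }
        System.out.println(url);
    }
}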

Is MQTT supported in Cumulocity?

Is it possible to receive MQTT messages from the Cumulocity API?
How can I get the values from the following measurements with the Java clients:
Analog Measurement
Motion Measurement
Thanks
Querying measurements is described here: http://cumulocity.com/guides/java/developing/, section "Accessing events and measurements". There are currently no pre-defined Java classes for analog measurements and motion measurements; however, you can still retrieve them as generic properties. Check the example on the web page, and instead of
measurementFilter.byFragmentType(SignalStrength.class);
try
measurementFilter.byFragmentType("c8y_MotionMeasurement");
and instead of
measurement.get(SignalStrength.class);
try
measurement.getProperty("c8y_MotionMeasurement");
You can also create the Java classes representing the measurements on your own by "stealing" and modifying one of the existing classes:
https://bitbucket.org/m2m/cumulocity-clients-java/src/53216dc587e24476e0578b788672416e8566f92b/device-capability-model/src/main/java/c8y/?at=default
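Putting those pieces together, a minimal sketch of querying motion measurements with the Java client (the tenant URL and credentials are placeholders, and only the first page of results is read here):

import com.cumulocity.model.authentication.CumulocityCredentials;
import com.cumulocity.rest.representation.measurement.MeasurementCollectionRepresentation;
import com.cumulocity.rest.representation.measurement.MeasurementRepresentation;
import com.cumulocity.sdk.client.Platform;
import com.cumulocity.sdk.client.PlatformImpl;
import com.cumulocity.sdk.client.measurement.MeasurementApi;
import com.cumulocity.sdk.client.measurement.MeasurementFilter;

public class MotionMeasurementsDemo {
    public static void main(String[] args) {
        // placeholder tenant URL and credentials
        Platform platform = new PlatformImpl("https://<tenant>.cumulocity.com",
                new CumulocityCredentials("<user>", "<password>"));
        MeasurementApi measurementApi = platform.getMeasurementApi();
        // filter by the fragment name instead of a pre-defined class
        MeasurementFilter filter = new MeasurementFilter()
                .byFragmentType("c8y_MotionMeasurement");
        MeasurementCollectionRepresentation firstPage =
                measurementApi.getMeasurementsByFilter(filter).get();
        for (MeasurementRepresentation m : firstPage.getMeasurements()) {
            // retrieve the fragment as a generic property
            System.out.println(m.getProperty("c8y_MotionMeasurement"));
        }
    }
}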

National Weather Service (NOAA) REST API returns nil for parameters of forecast

I am using the NWS REST API as my weather service for an app I am making. I was initially reluctant to use NWS because of its bad documentation, but I couldn't resist as it is offered completely free.
Now that I am trying to use it, I am running into some difficulty. When making a request for multiple days, the minimum temperature appears as nil for several days.
(EDIT: As I have been testing the API more, I have found that it is not always the minimum temperatures that are nil. It can be a max temp or a precipitation value; it seems completely random. If you would like to make test calls using their web interface, you can do so here: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdBrowserByDay.htm
and here: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdXML.htm)
Here is an example of a request where the minimum temperatures are empty: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdBrowserClientByDay.php?listLatLon=40.863235,-73.714780&format=24%20hourly&numDays=7
Surprisingly, on their website, the minimum temperatures are available:
http://forecast.weather.gov/MapClick.php?textField1=40.83&textField2=-73.70
You'll see under the minimum temperatures that the response is filled with about 5 (sometimes fewer; it is inconsistent) blank fields that say <value xsi:nil="true"/>
If anybody can help me it would be greatly appreciated; using the NWS API can be a little overwhelming at times.
Thanks,
The nil values, from what I can understand of the documentation, here and here, simply indicate that the data is unavailable.
Without making assumptions about NOAA's data architecture, it's conceivable that the information available via the API may differ from what their website displays.
Missing values are represented by an empty element and xsi:nil="true" (R2.2.1).
Nil values being returned seems to involve the time period. Notice the difference between the time-layout keys (see section 5.3.2) in these requests:
k-p24h-n7-1
k-p24h-n6-1
The data times are different.
<layout-key> element
The key is derived using the following convention:
“k” stands for key.
“p24h” implies a data period length of 24 hours.
“n7” means that the number of data times is 7.
“1” is a sequential number used to keep the layout keys unique.
Here, startDate is the factor. Leaving it off includes more time and might account for some requested data not yet being available.
Per documentation:
The beginning day for which you want NDFD data. If the string is empty, the start date is assumed to be the earliest available day in the database. This input is only needed if one wants to shorten the time window data is to be retrieved for (less than entire 7 days worth), e.g. if user wants data for days 2-5.
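In practice, then, it's worth passing an explicit startDate so the request stays inside the window of available data. A minimal sketch using Java 11's HttpClient (the coordinates echo the request above; the startDate value is illustrative):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NdfdRequest {
    public static void main(String[] args) throws Exception {
        // same browser-interface endpoint as above, with an explicit startDate
        String url = "http://graphical.weather.gov/xml/sample_products/browser_interface/"
                + "ndfdBrowserClientByDay.php?listLatLon=40.863235,-73.714780"
                + "&format=24%20hourly&numDays=5&startDate=2017-06-01";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // inspect the XML for <value xsi:nil="true"/> entries
        System.out.println(response.body());
    }
}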
I'm not experiencing the randomness you mention. The folks on NOAA's Yahoo! Groups forum might be able to tell you more.
