I entered roughly 800 keywords in Keyword Planner for two target locations:
San Antonio TX, Texas, United States
San Jose, California, United States
The summary data for the Historical Metrics result set by Location is as follows:
I then added 18 more locations to the same keyword list to create this list of target locations.
Ann Arbor, Michigan, United States
Austin TX, Texas, United States (Nielsen® DMA® regions)
Cambridge, Massachusetts, United States
Cincinnati OH, United States (Nielsen® DMA® regions)
Columbia SC, South Carolina, United States (Nielsen® DMA® regions)
Fort Worth, Texas, United States
Greenville-Spartanburg-Asheville-Anderson, United States (Nielsen® DMA® regions)
Indianapolis IN, Indiana, United States (Nielsen® DMA® regions)
Jacksonville FL, United States (Nielsen® DMA® regions)
Miami, Florida, United States
New Haven County, Connecticut, United States (county)
Oakland, California, United States (city)
Orlando, Florida, United States (city)
Richmond-Petersburg VA, Virginia, United States (Nielsen® DMA® regions)
Salt Lake City UT, United States (Nielsen® DMA® regions)
San Antonio TX, Texas, United States (Nielsen® DMA® regions)
San Jose, California, United States
Syracuse NY, New York, United States (Nielsen® DMA® regions)
Trenton, New Jersey, United States (city)
Warsaw, Indiana, United States (city)
The summary data for this Historical Metrics result set by Location is as follows:
How does this make sense?
Why would San Antonio disappear from the list?
Why would the volume for San Jose go down?
If the average number of times that the 800 keywords have been viewed in San Jose is 324, then that should be the total views for San Jose; what happens in other cities should not affect it.
But 20 cities times 800 keywords gives 16000 combinations.
Google is not going to aggregate combinations that are of no interest. Therefore, only the most popular combinations get added to the aggregation. This could explain why the totals for a city go down.
It appears that your total number of Average monthly searches did go up. My guess is that they are only showing you the top 5 segments, and San Antonio was not among the first 5.
My guess is that when you added 18 more locations, many of them were more popular and pushed San Antonio down. My guess is it is there but buried in the summary results. See if you can drill down to see more results.
My guess as to why San Jose went down is that spending limits ran out. Basically, more popular search locations ate up the funds faster than San Jose and San Antonio.
That's how PPC bidding works. Google will spend the money allocated on whichever criteria matches first.
Before I ask the question, I just want to say that I am a backend engineer with no experience in data science. I'm trying to look into a machine learning solution for this problem, so any answer saying that what I'm trying to do is impossible, or that I should be looking into something different, is appreciated.
I'm working on a project and we currently want to normalize location data.
Users enter free-text input, and we want to try to map that to a row in a predefined table of all the locations in the world, if possible, and get the country, state, and city.
So we want to map input => {country, state, city}, for example:
Schwäbisch Gmünd, Baden-Württemberg, Deutschland => Germany, Baden-Württemberg, Schwäbisch Gmünd
United States, Los Angeles => United States, California, Los Angeles
United States => United States, null, null
Tuscany => Italy, Tuscany, null
Wien => Austria, Vienna, Vienna
Is it possible to do this with machine learning? If it is, what should we be looking into?
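For a sense of what the target output looks like in practice, here is a minimal sketch that produces the {country, state, city} triple with a plain gazetteer lookup (geopy's Nominatim wrapper) rather than a trained model; the user_agent string is a placeholder, and the fallback keys reflect assumptions about OpenStreetMap's address schema.

```python
# Sketch: map free-text location input to {country, state, city} using
# OpenStreetMap's Nominatim via geopy (a gazetteer lookup, not machine learning).
from geopy.geocoders import Nominatim

# Placeholder user agent; Nominatim has strict usage limits, so this is demo-only.
geolocator = Nominatim(user_agent="location-normalizer-demo")

def normalize(text):
    loc = geolocator.geocode(text, addressdetails=True, language="en")
    if loc is None:
        return {"country": None, "state": None, "city": None}
    addr = loc.raw.get("address", {})
    return {
        "country": addr.get("country"),
        "state": addr.get("state"),
        # OSM may label the locality as city, town or village.
        "city": addr.get("city") or addr.get("town") or addr.get("village"),
    }

for query in ["Schwäbisch Gmünd, Baden-Württemberg, Deutschland",
              "United States, Los Angeles", "Tuscany", "Wien"]:
    print(query, "=>", normalize(query))
```

Clean input like the examples above resolves well with a lookup alone; where a learned component would earn its keep is in ranking candidates for misspelled or ambiguous free text.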
I am aware that in Microsoft Excel we can use Conditional Formatting to highlight and find duplicate values.
Essentially, what I want to do in Google Sheets is use a second list to find all the values in the first list that also appear in the second, and then keep only those duplicates.
For example, in my Google Sheet I have the original list as:
Shoe Brand      Location
Nike            USA
Adidas          Europe
Lacoste         Europe
Ralph Lauren    USA
Under Armour    USA
Umbro           USA
Puma            Europe
Slazenger       Europe
Timberlands     USA
Crocs           USA
CK              USA
Then the second list in the same Google Sheet has this:
Shoe Brand      Location
Lacoste         Europe
Timberlands     USA
Slazenger       Europe
Nike            USA
Crocs           USA
Effectively, I only want to keep the shoes that appear in the second list.
Thanks
If I understood well, you want to create a new table that compares your first and second tables and keeps only the duplicated shoes (i.e. any shoe that is in the second table but not in the first should be ignored).
You can therefore try something like this (assuming the original list sits in A2:B and the second list's shoe brands are in column F, starting at F2):
= ARRAYFORMULA(SORT(IFNA(UNIQUE(VLOOKUP(F2:F,A2:B, {1,2}, 0)),""),1,0))
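Purely to illustrate the logic of that formula, the same containment check expressed in pandas (column names taken from the sample data above) looks like this:

```python
# Keep only the rows of the original list whose brand also appears in the second list.
import pandas as pd

original = pd.DataFrame({
    "Shoe Brand": ["Nike", "Adidas", "Lacoste", "Ralph Lauren", "Under Armour",
                   "Umbro", "Puma", "Slazenger", "Timberlands", "Crocs", "CK"],
    "Location":   ["USA", "Europe", "Europe", "USA", "USA",
                   "USA", "Europe", "Europe", "USA", "USA", "USA"],
})
second = pd.DataFrame({
    "Shoe Brand": ["Lacoste", "Timberlands", "Slazenger", "Nike", "Crocs"],
    "Location":   ["Europe", "USA", "Europe", "USA", "USA"],
})

kept = original[original["Shoe Brand"].isin(second["Shoe Brand"])]
print(kept)  # Nike, Lacoste, Slazenger, Timberlands, Crocs
```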
I have a .csv file with Twitter profiles including information such as username, name, description, etc. One column is geolocation. In this text the user may have a country (e.g., UK), a city or town (e.g., Cambridge), an actual address (5 Tyrian Place, WR5 TY1), a state (e.g., California, CA) or something silly (e.g., West of Hell).
Is there an API/library/automatic way of taking this information and deriving the country? For example, if the location is Cambridge the output should be UK, if the address is in the UK, the output should be UK, etc.
Google has a reverse geocoding service which you can access through their Maps API:
https://developers.google.com/maps/documentation/geocoding/start
They let you make 2500 free requests per day. One nice feature is it will give you correct latitude, longitude, state, country, etc for things like "Golden Gate Bridge" and "The Big Apple." Twitter users enter all sorts of (sarcastic) phrases for their location -- like "West of Hell," "Mars," etc -- and Google will reverse geocode that as well. Though, that may not be very useful.
As another level of checking, you can compare the user's timezone ("utc_offset"), if it is present, to the place that Google returns. It's a bit involved and requires that you compare the timezone's latitude boundaries to the latitude and longitude in Google's response.
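A rough sketch of that geocoding step (the endpoint and the address_components fields come from the Geocoding API's documented JSON response; the API key is a placeholder):

```python
# Sketch: derive a country from a free-text Twitter "location" string
# using the Google Maps Geocoding API. YOUR_API_KEY is a placeholder.
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def country_for(location_text, api_key):
    resp = requests.get(GEOCODE_URL,
                        params={"address": location_text, "key": api_key},
                        timeout=10)
    data = resp.json()
    if data.get("status") != "OK":
        return None  # e.g. ZERO_RESULTS for unparseable locations
    # Scan the first result's address components for the "country" type.
    for component in data["results"][0]["address_components"]:
        if "country" in component["types"]:
            return component["short_name"]  # ISO code such as "US" or "GB"
    return None

print(country_for("Golden Gate Bridge", "YOUR_API_KEY"))  # expected: "US"
```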
Predictions for addresses in the U.S. seem consistent. However, when I get predictions for a U.K. address, I get inconsistent results. For example, here are some results I receive:
* Pinewood Green, Iver, Buckinghamshire SL0 0QH, United Kingdom
* Berkshire, William Street, Windsor SL4 1AA, United Kingdom
The first one is Address, City, County, Postal code, Country
The second is County, Address, City, Postal code, Country
The county's position changes, and I can find nothing in the response that would help me know which field is which.
Additionally, with a response such as this
* 20 High Street, East Hoathly, East Sussex BN8 6EB, United Kingdom
how do I tell where the county stops and the postal code starts? Terms/Offsets?
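The terms array returned with each autocomplete prediction is the first thing to inspect: per the Places documentation, each term identifies one section of the description (sections are generally comma-terminated), with a character offset into the string. A hypothetical prediction shaped like the example above would look roughly like the sketch below; whether the postcode lands in its own term or stays glued to the county depends on how the service segments that description, and if it stays glued, the typed address_components from a follow-up Place Details request are the unambiguous way to split them.

```python
# Hypothetical Place Autocomplete prediction, shaped after the example address above.
# "terms" splits the description into comma-terminated sections by character offset.
prediction = {
    "description": "20 High Street, East Hoathly, East Sussex BN8 6EB, United Kingdom",
    "terms": [
        {"offset": 0,  "value": "20 High Street"},
        {"offset": 16, "value": "East Hoathly"},
        {"offset": 30, "value": "East Sussex BN8 6EB"},
        {"offset": 51, "value": "United Kingdom"},
    ],
}

for term in prediction["terms"]:
    print(term["offset"], term["value"])
```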
Given a latitude and longitude, how do I get the localities around it? What I mean is: can I get a dataset with the names of major locations in the neighbourhood, or some tourist spots near it?
Say I am in Paris and have the lat and long { lat: 48.8565, lng: 2.3509 } // Paris
Could I get some JSON/XML with entries like {"Eiffel Tower", "Arsenal"}, etc.?
I'm not up on non-USA sources of geo data, but the USGS (United States Geological Survey) publishes an official gazetteer of place names, including the latitude and longitude of the primary point (for a city, typically city hall or similar) of each place.
http://geonames.usgs.gov/domestic/download_data.htm
I successfully used this data in the past by loading it into PostgreSQL and using its geospatial query capability.
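As a sketch of that setup, assuming the GNIS file has been loaded into a PostGIS-enabled table named gnis_places with a feature_name column and a geography column named geom (the table and column names are mine, not part of the USGS download):

```python
# Sketch: nearest named places around a point, queried from a hypothetical
# gnis_places(feature_name, geom geography) table built from the USGS GNIS data.
import psycopg2

conn = psycopg2.connect("dbname=geo user=postgres")  # placeholder connection string
lat, lng = 37.7749, -122.4194  # GNIS covers US names, so use a US point (San Francisco)

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT feature_name,
               ST_Distance(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography) AS meters
        FROM gnis_places
        WHERE ST_DWithin(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, 5000)
        ORDER BY meters
        LIMIT 20
        """,
        (lng, lat, lng, lat),
    )
    for name, meters in cur.fetchall():
        print(f"{name}: {meters:.0f} m")
```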