Does changing landing page URLs to a new domain affect Google Ads SEM performance?

My company is facing an issue where we have to change our site's URLs at scale - close to 200k. These pages will be redirected to a new set of pages with a different domain name (owned by the same organisation).
The pages in question are:
- Transactional
- Ranking for organic search keywords, with SEO activity focused on them
- Used as landing pages in our SEM campaigns
The fundamental question is whether changing around 200k URLs in our Google AdWords account would affect Quality Score at all. If yes, how much of a hit can we expect?

Maybe. Changing the URL will not affect ad copy or keywords (unless you're running dynamic search ads), so that portion of Quality Score will not change. If the new landing pages are faster, deliver a better customer experience, and lead to higher-quality visits (longer time on page, lower bounce rate, better goal completion), then your Quality Score will improve. If not, it will fall.

Related

When web crawling, how do I find patterns of low-quality URLs and crawl those URLs less often?

URLs like exmpl.com/search.php?q=hey come in huge varieties of GET parameters, and I want to classify such links so my crawler avoids these "low priority" URLs.
It depends on what you're crawling and what you want to do with it: a few specific websites or a broad crawl. Sometimes the owners of the websites don't want you to crawl those URLs either, because they generate additional traffic that is useless for both sides, and they may use the robots.txt file to say so. Give it a look (you should respect it anyway).
These low-quality URLs, as you call them, can also show up as:
- e-shops where the crawler continuously adds items to the cart and the back office gets messed up with phantom orders
- blog platforms where clicking on comments, replies, likes and so on produces weird result pages
- crawler traps from calendars or other infinite URL spaces where only the parameters change but the page is the same
- link farms, especially classified-ads websites where each product or region is generated as a subdomain, so you end up with thousands of subdomains for the same website; even with a per-website limit on downloaded URLs, this order of magnitude takes over your crawl
If you have your contact details in the user agent, site owners sometimes contact you to stop crawling specific types of URLs, or to agree on what should be crawled and how (for instance, the number of requests per second).
So, it depends on what you're trying to crawl. Look at the frontier and try to find weird behaviours:
- hundreds of URLs for the same website where the URL is mostly the same and only one or a few parameters change
- hundreds of URLs for blog platforms or e-shops where the parameters look weird or keep repeating (look at those platforms and try to find patterns like (.*\?widgetType=.*) or (.*\&action=buy_now.*))
- URLs that look like they come from calendars
- hundreds of URLs that interact with forms you're not interested in submitting information to (like the search you mentioned)
- a number of URLs for a website that is higher than you'd expect for that website, or for a website of that type
- a number of subdomains for a website that is too high
- websites with a high number of 403, 404, 500 or other non-200 codes, and which URLs are responsible for that
- a frontier that doesn't stop growing, and which websites and URLs are responsible (which URLs are being added so often that the frontier grows abnormally)
All those URLs are good candidates to be excluded from the crawl. Identify the common part and use it as a regular expression in the exclusion rules.
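The exclusion step can be sketched in a few lines. The rules below are hypothetical examples modeled on the patterns mentioned above (search forms, widget parameters, buy-now actions, calendar traps); in practice you would derive them from the weird behaviours found in your own frontier.

```python
import re

# Hypothetical exclusion rules, modeled on the patterns discussed above.
EXCLUSION_RULES = [
    re.compile(r"[?&]q="),               # on-site search result pages
    re.compile(r"[?&]widgetType="),      # blog-platform widget parameters
    re.compile(r"[?&]action=buy_now"),   # e-shop "add to cart" style actions
    re.compile(r"/calendar/\d{4}/"),     # calendar traps (infinite date URLs)
]

def should_crawl(url: str) -> bool:
    """Return False if the URL matches any exclusion rule."""
    return not any(rule.search(url) for rule in EXCLUSION_RULES)

# Filtering a frontier before scheduling downloads:
frontier = [
    "https://exmpl.com/search.php?q=hey",
    "https://exmpl.com/blog/post-42",
    "https://shop.exmpl.com/item/7?action=buy_now&ref=home",
]
to_crawl = [u for u in frontier if should_crawl(u)]  # only the blog post survives
```

The same rule list works for pruning: URLs already in the frontier that match a newly added rule can be dropped in one pass.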

I'm getting "Thin content with little or no added value"... I think I'm the exception?

I'm getting this message in Google Webmaster Tools for my website YouGamePlay.com:
"Thin content with little or no added value
This site appears to contain a significant percentage of low-quality or shallow pages which do not provide users with much added value (such as thin affiliate pages, cookie-cutter sites, doorway pages, automatically generated content, or copied content)."
The site was created to help promote the authors of gameplay videos and channels, using the YouTube API to power it. My site has comments and leaderboards, and helps the user locate similar videos and channels.
The site is NOT a cookie-cutter site: videos obtain a score, and there are video leaderboards, channels, top players/viewers, comments, etc.
Could someone explain why my site is being flagged in Google Webmaster Tools? It's very frustrating. Thank you.
I found this video useful: https://www.youtube.com/watch?v=w3-obcXkyA4. The home page could be treated as a "doorway" page; try adding a description for each item.

From a development perspective, how does the indeed.com URL structure and site work?

On the Webmasters Q&A site, I asked the following:
https://webmasters.stackexchange.com/questions/42730/how-does-indeed-com-make-it-to-the-top-of-every-single-search-for-every-single-c
But, I would like a little more information about this from a development perspective.
If you search Google for anything job related, for example, Gastonia Jobs (City + jobs), then, in addition to their search results dominating the first page of Google, you get a URL structure back that looks like this:
indeed.com/l-Gastonia,-NC-jobs.html
I am assuming that the l stands for location in the URL structure. If you do a search for an industry-related job, or a job with a specific company name, you will get back something like the following (Microsoft jobs):
indeed.com/q-Microsoft-jobs.html
With just over 40,000 cities in the USA, I thought, OK, maybe they looped through them and created a page for every single one; that would not be hard for a computer. But the site is obviously dynamic, as each of those pages has tens of thousands of results, paginated ten per page. The q above obviously stands for query. The locations I can understand, but they cannot possibly have created a web page for every single query combination, could they?
Ok, it gets a tad weirder. I wanted to see if they had a sitemap, so I typed into Google "indeed.com sitemap.xml" I got the response:
indeed.com/q-Sitemap-xml-jobs.html
.. again, I searched for "indeed.com url structure" and, as I mentioned in the other post on webmasters, I got back:
indeed.com/q-change-url-structure-l-Arkansas.html
Is indeed.com somehow creating a webpage on the fly based on my search input into Google? If not, how can they have a static page for millions of possible query combinations, have them paginate dynamically, and then have all of those dominate Google's first page of results (though that last question may be best for the Webmasters Q&A)?
Does the JavaScript in the page somehow interact with the URL?
It's most likely not a bunch of pages. The "actual" page might be http://indeed.com/?referrer=google&searchterm=jobs%20in%20washington. The site then cleverly produces a human-readable URL using URL rewriting, fetches jobs in the database that match the query, and voilà.
I could be dead wrong, of course. Truth be told, the technical aspect can probably be solved in a multitude of ways. For instance, every time a job is added to the site, all the pages needed to match that job might be generated, producing an enormous number of pages for Google to crawl.
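As a sketch of that rewrite idea, the pretty URL can be treated as nothing more than an encoding of query parameters. The patterns below are guesses based on the example URLs in the question, not Indeed's actual implementation:

```python
import re

def parse_pretty_url(path: str) -> dict:
    """Decode an Indeed-style pretty URL back into query parameters.
    URL patterns are guessed from the examples in the question."""
    m = re.match(r"^/q-(.+)-jobs\.html$", path)   # query pages: /q-Microsoft-jobs.html
    if m:
        return {"q": m.group(1).replace("-", " ")}
    m = re.match(r"^/l-(.+)-jobs\.html$", path)   # location pages: /l-Gastonia,-NC-jobs.html
    if m:
        return {"l": m.group(1).replace("-", " ")}
    return {}
```

A rewrite rule in the web server then maps every such path to one dynamic handler, so no static pages need to exist; the handler runs the decoded parameters against the job index and paginates the results.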
This is a great question that remains only partially answered. A basic Google search using
site:indeed.com
returns over 120 million results, and a query such as "product manager new york" ranks #1 in results. These pages are evidently pre-generated, which is confirmed by the fact that the page cached by the search engine (sometimes crawled several days earlier) shows different results from a live query on the site.
Easy: when Google's search bot crawls the pages on Indeed, or any other job search site, those pages are created dynamically. Here is another site that works in a similar way to Indeed: http://jobuzu.co.uk (I run it).
PHP is your friend here, and Indeed don't just use standard databases; look into Sphinx and Solr, as they offer full-text search for better performance than MySQL.
They also make clever use of rel="canonical" and thorough internal linking:
http://www.indeed.com/find-jobs.jsp
Notice that all the pages that actually rank can be found from that direct internal link structure.

What is the business model of URL shortening services like bit.ly and TinyURL?

How do URL shortening services earn money? Do they get funded for a period of time and then come out with algorithms to dish out category-specific trending URLs, or something? Any insight into this?
The value of a URL shortener is the statistics behind it. If you have a niche or significant volume for a general shortening service, you can find out what domains are popular or possibly targets if you analyzed the content of popular sites. You can then use this to target your ads on various networks or perhaps you can find a marketing firm who would be interested in the data.
The other way you make money off URL shortening is branding. This works if you want to establish yourself as an authority in a niche, and it is especially useful on social networks. Even if you don't link to sites in your own network, you can look at your shortener stats and see whether the content you are posting is reaching your fans. You can find out which links on social networks, which Facebook pages, and which times of day on Twitter give you the most return. Also, if you own a short domain, you reinforce your brand when people retweet your message. People might not click on every link on Twitter or Facebook, but the more often they see your brand, the more likely they are to remember it in the future.
In 2011 and beyond, most social networks and posting services have their own de facto shorteners, so you likely could not compete without a lot of money, and you would need to spend a lot of time making marketing connections and finding buyers for your data. I don't think you can make much money starting up, but the internal statistics you gain from running your own shortener can help with branding and intelligence/planning, which is harder to quantify in later dollar gains. The social-networking insights are where I get most of the value from my URL shortener.

PageRank of different pages - new Twitter?

How does the new Twitter manage to give every user's page a PageRank of 9? On the old Twitter, each user's page was perceived as a different page and therefore had its own PageRank.
They've redirected http://twitter.com/ceejayoz to http://twitter.com/#!/ceejayoz, which is actually just http://twitter.com/ to a search engine. There are indexable pages behind the scenes (URLs like http://twitter.com/?_escaped_fragment_=/ceejayoz), but you don't see them.
The intent isn't to game PageRank, that's just a side effect of the AJAX URLs they're using.
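The mapping between the two URL forms follows Google's AJAX crawling scheme (since retired): a crawler rewrites the fragment after #! into an _escaped_fragment_ query parameter and fetches that URL instead. A minimal sketch:

```python
from urllib.parse import quote

def crawler_url(pretty_url):
    """Rewrite a #! URL into the _escaped_fragment_ form a crawler fetches,
    following Google's (now retired) AJAX crawling scheme."""
    if "#!" not in pretty_url:
        return pretty_url
    base, fragment = pretty_url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment)
```

Since browsers never send the fragment to the server, this rewrite is the only way the server-side content behind a #! URL becomes reachable by a crawler.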
