I'm getting "Thin content with little or no added value"... I think I'm the exception? - youtube-api

I'm getting this message in Google Webmaster Tools for my website YouGamePlay.com:
"Thin content with little or no added value
This site appears to contain a significant percentage of low-quality or shallow pages which do not provide users with much added value (such as thin affiliate pages, cookie-cutter sites, doorway pages, automatically generated content, or copied content)."
The site was created to help promote authors of gameplay videos/channels. I'm using the YouTube API to power the site. My site has comments and leaderboards, and it helps users locate similar videos and channels.
The site is NOT a cookie-cutter site: videos receive a score, and there are video leaderboards, channels, top players/viewers, comments, etc.
Could someone explain why my site is being denied promotion via Google Webmaster Tools? It's very frustrating. Thank you.

I found this video useful: https://www.youtube.com/watch?v=w3-obcXkyA4. The home page could be treated as a "doorway" page; try adding a unique description for each item.
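For instance, since the site already uses the YouTube API, each video's own description could be pulled in so every page carries some unique text. A minimal sketch using the Data API v3 videos endpoint (the API key and video ID below are placeholders):

import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"   # placeholder: your own Data API v3 key
VIDEO_ID = "w3-obcXkyA4"   # placeholder video ID

# Ask the Data API v3 for the video's snippet, which includes
# its title and full description.
params = urllib.parse.urlencode({
    "part": "snippet",
    "id": VIDEO_ID,
    "key": API_KEY,
})
url = "https://www.googleapis.com/youtube/v3/videos?" + params

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

snippet = data["items"][0]["snippet"]
print(snippet["title"])
print(snippet["description"])

Each item page then shows at least the uploader's own text rather than an empty API-driven shell.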

Related

Referral Traffic

I run a website that contains links to other partner sites. According to my partners, no traffic comes through the links on my site. To test this, I set up a dummy affiliate website myself, and indeed the clicks going from my site to that dummy site could not be traced back to my site. I already searched the source code and could not find any meta information like "no-referrer". Does anyone have any idea what else it could be?
Thanks in advance for your ideas!
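Note that besides a meta tag, a Referrer-Policy HTTP response header or rel="noreferrer" on individual links can also suppress referrer information. A minimal sketch that checks all three on a page (the URL is a placeholder):

import urllib.request
from html.parser import HTMLParser

url = "http://example.com/partners"  # placeholder: the page with the outbound links

# The Referrer-Policy header applies to the whole response.
with urllib.request.urlopen(url) as resp:
    print("Referrer-Policy header:", resp.headers.get("Referrer-Policy"))
    html = resp.read().decode("utf-8", errors="replace")

class ReferrerAudit(HTMLParser):
    # Report <meta name="referrer"> tags and rel="noreferrer" links.
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "referrer":
            print("meta referrer policy:", attrs.get("content"))
        if tag == "a" and "noreferrer" in (attrs.get("rel") or ""):
            print("noreferrer link:", attrs.get("href"))

ReferrerAudit().feed(html)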

Get rid of old links to a retired website in Google search

I have a website that has been replaced by another website with a different domain name.
In Google search I can still find links to the pages on the old site, and I would like them not to show up in future Google searches.
Here is what I did, but I am not sure whether it is correct or enough.
Access to any page on the old website is immediately redirected to the homepage of the new website; there is no one-to-one page mapping between the two sites. Here is the code for the redirect on the old website:
<meta http-equiv="refresh" content="0;url=http://example.com" >
I went to the Google Webmasters site. For the old website, I went to Fetch as Google and clicked "Fetch and Render" and "Reindex".
Really appreciate any input.
A few things you'll want to do here:
You need to use permanent server redirects (HTTP 301), not meta refresh. I also suggest you provide a one-to-one page mapping: it's a better user experience, and large numbers of redirects to the root are often interpreted as soft 404s. Consult Google's guide to site migrations for more details; see also the sketch after this list.
Rather than Fetch & Render, use Google Search Console's (Webmaster Tools) Change of Address tool. Bing has a similar tool.
A common mistake is blocking crawler access to a retired site. That has the opposite of the intended effect: old URLs need to remain accessible to search engines for the redirects to be "seen".
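To illustrate the first point: a permanent redirect must be issued by the server with a 301 status code, not by a tag inside the page. A minimal sketch of the idea in Python with Flask, assuming the old domain can run a small app; the page mapping here is invented for illustration, and the same logic is usually written as Redirect 301/RewriteRule directives in Apache or return 301 in nginx:

from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical one-to-one mapping from old paths to new URLs.
PAGE_MAP = {
    "about": "http://example.com/about-us",
    "contact": "http://example.com/contact",
}

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def permanent_redirect(path):
    # Fall back to the new homepage only when no mapping exists.
    target = PAGE_MAP.get(path, "http://example.com/")
    return redirect(target, code=301)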

How do search engines obtain unlinked pages?

I noticed that quite a lot of Dropbox pages are indexed by Google, Bing, etc., and was wondering how these search engines discover links like these, for instance:
https://dl.dropboxusercontent.com/s/85cdji4d5pl5qym/37-71.pdf
https://dl.dropboxusercontent.com/u/11421929/larin2014.pdf
Given that there are no links on dl.dropboxusercontent.com to follow and the path structure is not that easy to guess, how is it possible that a search engine obtains such a link?
One explanation might be that a link was posted on a forum and picked up by the search engine, but I looked up quite a lot of the links and checked the backlinks without success. I also noticed that Bing and Yahoo show considerably more results than Google, which would mean that Bing does a better job of picking up these links, which seems unlikely to me.
Even if the document is really unlinked (no link on their site, no link on someone else's site, no sitemap, no Referer log from a site that gets linked in the document, etc.), it's still possible for search engines to find the link.
Two ways are:
Someone could submit the URL to a search engine (whether via a public tool, or via the site’s webmaster account).
The search engine could get all URLs that certain users visit in their browsers. This can happen, for example, when the user has installed a toolbar from that search engine. This is the case with Bing; see my related answer on Webmasters SE:
Microsoft has confirmed that they do discover and index URLs that they find through users surfing the Internet with the Bing Toolbar installed.
And there might be more ways, of course.

Patterns between YouTube m. and normal site URLs

My site is not able to show uploaded YouTube videos when the URL is a mobile (m.) one, but it works for normal YouTube site URLs. It seems to me that the mobile and normal URLs differ by a consistent pattern, as shown below:
http://www.youtube.com/watch?v=5ILbPFSc4_4
http://m.youtube.com/#/watch?v=5ILbPFSc4_4&desktop_uri=%2Fwatch%3Fv%3D5ILbPFSc4_4
Obviously, the m. is added, as are the /# and all the &desktop_uri... parameters.
and again:
http://www.youtube.com/watch?v=8To-6VIJZRE
http://m.youtube.com/#/watch?v=8To-6VIJZRE&desktop_uri=%2Fwatch%3Fv%3D8To-6VIJZRE
What we hope to do is check whether a URL is a mobile-site URL and, if it is, parse it so that it points to the normal site.
Does anyone know if all YouTube URLs work this way, that is, whether this pattern holds for the same videos on the mobile and normal sites?
In general, any time you attempt to parse URLs for sites (as opposed to web APIs) by hand, you're leaving yourself open to breakage. There's no "contract" in place that states that a common format will always be used for watch page URLs on the mobile site, or on the desktop site.
The oEmbed service is what you should use whenever you want to take a YouTube watch page URL as input and programmatically get information about the underlying video resource as output. For many use cases, such as getting the embed code for a video given its watch page URL, it's the right choice. That being said, the oEmbed response doesn't include a canonical link to the desktop YouTube watch page, so it won't give you exactly what you want in this case.
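For reference, a minimal sketch of an oEmbed lookup against YouTube's public endpoint (the watch URL is taken from the question):

import json
import urllib.parse
import urllib.request

watch_url = "http://www.youtube.com/watch?v=5ILbPFSc4_4"

# YouTube's public oEmbed endpoint returns metadata (title, author,
# embed HTML, ...) for a given watch page URL.
oembed_url = ("https://www.youtube.com/oembed?format=json&url="
              + urllib.parse.quote(watch_url, safe=""))

with urllib.request.urlopen(oembed_url) as resp:
    info = json.load(resp)

print(info["title"])
print(info["html"])  # ready-made embed code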
If you do code something by hand, please ensure that your code is deployed somewhere where it would be easy to update if the format of the watch pages ever does change.
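For what it's worth, here is a sketch of such hand-parsing for the two URL formats shown in the question, with the usual caveat that it breaks the moment YouTube changes the format:

from urllib.parse import urlparse, parse_qs

def to_desktop_url(url):
    # Rewrite a mobile YouTube watch URL to its desktop form.
    # Handles the plain desktop format and the mobile format above,
    # where the watch path hides behind the "#" fragment.
    # Returns None if no video ID can be found.
    parsed = urlparse(url)
    query = parsed.query
    if parsed.netloc.startswith("m.") and parsed.fragment:
        # The fragment looks like "/watch?v=...&desktop_uri=...";
        # parse it again to reach its query string.
        query = urlparse(parsed.fragment).query
    video_id = parse_qs(query).get("v", [None])[0]
    if video_id is None:
        return None
    return "http://www.youtube.com/watch?v=" + video_id

print(to_desktop_url(
    "http://m.youtube.com/#/watch?v=5ILbPFSc4_4"
    "&desktop_uri=%2Fwatch%3Fv%3D5ILbPFSc4_4"
))
# -> http://www.youtube.com/watch?v=5ILbPFSc4_4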

Tracking Page Popularity in a Time Frame in Rails

I'm very new to web development and this seems like a basic question, so perhaps I just lack the correct terminology to search it on Google.
On my site I plan to have many dynamically generated pages based on data in a MySQL server, and I would like to know which ones people have visited the most in, say, the last 24 hours, so that I can place the most popular pages on the front page of the site. How would I accomplish this in a Rails application?
What you're looking for is a web analytics solution to analyze your traffic, and possibly your marketing effectiveness. Here are some of the most prominent services you could use with your website:
Google Analytics
Chartbeat
Reinvigorate
HaveAMint
GetClicky
Piwik
Woopra
Personally, I use Google Analytics, as its setup is darn simple: configure your account, add a JavaScript snippet to each of the pages you want to track, and you're done.
You could also look at web analytics software that you host yourself. All in all, take a look at this Wikipedia page for more information.
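If you would rather roll your own, as the question suggests, the underlying logic is small: record one row per page view with a timestamp, then count rows per page over a recent window. A sketch of that logic in Python with SQLite (the table and function names are invented; in a Rails app this maps to a PageView model plus a where/group/count ActiveRecord query):

import sqlite3
import time

conn = sqlite3.connect("pageviews.db")
conn.execute("""CREATE TABLE IF NOT EXISTS page_views (
    path      TEXT    NOT NULL,
    viewed_at INTEGER NOT NULL  -- Unix timestamp of the view
)""")

def record_view(path):
    # Call this from the controller/handler for each page request.
    conn.execute("INSERT INTO page_views VALUES (?, ?)",
                 (path, int(time.time())))
    conn.commit()

def popular_pages(hours=24, limit=10):
    # Most-viewed paths within the last `hours` hours.
    cutoff = int(time.time()) - hours * 3600
    return conn.execute(
        """SELECT path, COUNT(*) AS views
           FROM page_views
           WHERE viewed_at >= ?
           GROUP BY path
           ORDER BY views DESC
           LIMIT ?""",
        (cutoff, limit),
    ).fetchall()

An index on viewed_at (or on path, viewed_at) keeps the query cheap as the table grows.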
