I made a site that can be browsed both as a non-deep-linked version for non-JavaScript users (somesite.com?state) and as a deep-linked version for JavaScript-enabled users (somesite.com#state). Note: both versions serve the same content, except that in one the content is populated by PHP and in the other by JavaScript.
It works perfectly; however, when a JavaScript-enabled user browses the site and wants to share a link on Facebook such as somesite.com#someotherstate, Facebook can't parse the proper content of the page, since it can't deal with hash fragments used for deep linking.
So, other than putting a separate "Share" button on the page that explicitly gives the non-deep-linked version of the URL (somesite.com?someotherstate) for the user to copy and share on Facebook, how does the industry deal with this issue?
UPDATE
I noticed that Facebook seems to have implemented Google's AJAX crawling methodology, though I can't find an official statement.
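As I understand that scheme, a hashbang URL such as somesite.com#!someotherstate gets requested by the crawler as somesite.com?_escaped_fragment_=someotherstate, which the server can answer with the PHP-rendered page. A rough sketch of how I imagine the PHP side could handle it (render_state() is just a placeholder for whatever builds the content):

<?php
// index.php — serve PHP-rendered content to non-JavaScript users and to
// crawlers that rewrite "#!state" as "?_escaped_fragment_=state".

function render_state($state) {
    // Placeholder: build the real page for the given state here.
    return '<h1>Content for state: ' . htmlspecialchars($state) . '</h1>';
}

if (isset($_GET['_escaped_fragment_'])) {
    // A crawler saw somesite.com#!someotherstate and requested
    // somesite.com?_escaped_fragment_=someotherstate instead.
    $state = $_GET['_escaped_fragment_'];
} elseif (isset($_GET['state'])) {
    // Non-JavaScript users browse the plain query-string version.
    $state = $_GET['state'];
} else {
    $state = 'home';
}

echo render_state($state);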
I have a website that has been replaced by another website with a different domain name.
In Google search, I am able to find links to pages on the old site, and I hope they will not show up in future Google searches.
Here is what I did, but I am not sure whether it is correct or enough.
Access to any page on the old website will be immediately redirected to the homepage of the new website. There is no one-to-one page mapping between the two sites. Here is the code for the redirect on the old website:
<meta http-equiv="refresh" content="0;url=http://example.com" >
I went to the Google Webmasters site. For the old website, I went to Fetch as Google and clicked "Fetch and Render" and "Reindex".
Really appreciate any input.
A few things you'll want to do here:
You need to use permanent (301) server redirects, not a meta refresh; a minimal example is sketched below these points. I also suggest you provide one-to-one page mapping. It's a better user experience, and large numbers of redirects to the root are often interpreted as soft 404s. Consult Google's guide to site migrations for more details.
Rather than Fetch & Render, use Google Search Console's (Webmaster Tools) Change of Address tool. Bing has a similar tool.
A common mistake is blocking crawler access to a retired site. That has the opposite of the intended effect: old URLs need to remain accessible to search engines for the redirects to be "seen".
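For the first point, if the old site runs PHP, each old page can send a permanent redirect instead of the meta refresh. A rough sketch, where reusing the request path stands in for a real one-to-one mapping (the same rule can also be set up in the web server configuration, e.g. with Apache's Redirect 301):

<?php
// Send a permanent (301) redirect instead of a meta refresh.
// Ideally map each old path to its equivalent page on the new domain;
// reusing the request path here is just a placeholder mapping.
$target = 'http://example.com' . $_SERVER['REQUEST_URI'];
header('HTTP/1.1 301 Moved Permanently');
header('Location: ' . $target);
exit;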
I recently decided to start taking advantage of rich snippets to improve my personal website's content for the search engines and, IMHO most importantly, the site's readers – hi, Mam! ;-). One of these is Google Authorship. Personally, I think the idea behind Google Authorship is a sound one: it helps to bring a sense of identity, personality and – arguably, most importantly – credibility to what is still a largely anonymous web.
Normally, I would link my article to Google Authorship using the following line of HTML:
<A REL="author" HREF="https://plus.google.com/112431363835029530079?rel=author">Jordan Clark</A>
However, for a website that publishes articles written by multiple authors, manually entering each author's Google+ UID string quickly becomes a tiresome process.
Is it valid to do the following:
(a) Link to the author like so, using the script "author.php" (or another type of server-side script).
<A REL="author" HREF="/author.php?by=Alice&rel=author/[UID]?rel=author">Alice</A>
(b) The "author.php" script simply does a quick check for Alice's (or whoever's) user ID string provided by Google, and then uses a simple HTTP redirect header to pass this data to Google.
What I would like to know is:
Is it okay to use a local script to redirect to your Google+ user profile? (i.e. will it affect the PageRank of already-indexed pages or have any other unforeseen negative effects on new and indexed pages?)
Why do I not see more people linking with Google’s “prettified” version:
http://profiles.google.com/clarky.y2k?rel=author
Are there any drawbacks to using the “prettified” version of this method?
Ideally, I would like to use the intermediate PHP script, as I have already described above (see part 1). However, any tips, suggestions or other ways you may have implemented on your websites are very welcome!
For item (1), you can maintain your own app's profiles (author.php in your case) for your authors. On your own app's profile page (author.php), you would add a link from that page to Google and specify the rel="me" attribute on that link. So Alice's profile page might say something like "Find Alice on Google+".
This indirect authorship linking is supported. You will also need the link from Alice's Google+ profile that lists her as a contributor to your site. Once the linking is set up in both directions, authorship can start to show up. Authorship won't always display in all cases, and it can take some time to start appearing, as Google needs to reindex your pages.
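A rough sketch of that indirect linking, reusing the profile ID from the question and a hypothetical author.php (each article would then link to it with <A REL="author" HREF="/author.php?by=alice">Alice</A>):

<?php
// author.php — the site's own profile page for an author.
// Articles link here with rel="author"; this page links on to the author's
// Google+ profile with rel="me", completing the two-way chain.
$authors = array(
    // Placeholder: map each author to their real Google+ profile URL.
    'alice' => 'https://plus.google.com/112431363835029530079',
);
$by = isset($_GET['by']) ? strtolower($_GET['by']) : '';
if (!isset($authors[$by])) {
    header('HTTP/1.1 404 Not Found');
    exit('Unknown author');
}
?>
<h1>About <?php echo htmlspecialchars(ucfirst($by)); ?></h1>
<p>Find <?php echo htmlspecialchars(ucfirst($by)); ?> on
   <a rel="me" href="<?php echo htmlspecialchars($authors[$by]); ?>">Google+</a>.</p>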
For item (2), I don't think the profiles URL will enable authorship. Some people use that URL as a vanity URL, but as far as I know it isn't supported for use with things like authorship, badges, etc.
You should test if your redirects are followed using the Rich Snippets Testing Tool: http://www.google.com/webmasters/tools/richsnippets
rel="author" is no longer supported.
I'd like to use iOS to post to my users' Facebook walls/tickers/news feeds. I learned that Open Graph can be very specific about the actions users take inside my app, and I'd like to integrate it into my project.
I think I realize now that I am going to need my own server running for Open Graph actions to work, right? Or is this not a must? From what I understand, the server supplies the basic data to Facebook for the post, like the image, main text, secondary text, etc.
Is my server needed just to supply the Facebook posts' data? Is my server called every time a Facebook page is loaded with my app's content? Or is it done only once, with Facebook copying the posts' content onto Facebook's servers?
What happens if my server is not responsive, etc.?
The short answer: yes, you probably need a server.
The longer answer:
The Facebook documentation on Open Graph is much better than what I can fit here. If you have not already, check out this page and its links: https://developers.facebook.com/docs/opengraph/.
A published action on Facebook is a tuple { user, action, object }. The types of actions and objects are defined in the Facebook developer application (developers.facebook.com/apps).
The content of the post is generated by your iOS client. The post has data that references the action by name and the object by its URL.
The individual objects that your app defines are typically represented by pages on your web server. These pages are scraped by Facebook to extract metadata that defines the object, including images and text. I do not know of safe assumptions you can make about when the object's page will be scraped.
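As an illustration, an object page usually carries Open Graph meta tags that the scraper reads. A minimal sketch of such a page (the app ID, the "myapp" namespace, the "recipe" object type, and the URLs are all placeholders):

<?php
// object.php — a minimal page representing one Open Graph object.
// Facebook's scraper reads the <meta> tags below to build the story.
$title = 'Sample Object';
?>
<html>
<head>
  <meta property="fb:app_id" content="YOUR_APP_ID" />
  <meta property="og:type" content="myapp:recipe" />
  <meta property="og:url" content="http://example.com/object.php?id=123" />
  <meta property="og:title" content="<?php echo htmlspecialchars($title); ?>" />
  <meta property="og:image" content="http://example.com/images/sample.jpg" />
  <meta property="og:description" content="A sample object used in Open Graph stories." />
</head>
<body>
  <h1><?php echo htmlspecialchars($title); ?></h1>
</body>
</html>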
It is possible to create sample objects when you are editing your object types (developers.facebook.com/apps, create or edit one of your apps, "Edit Open Graph", "Add Sample Data"). However, because these are intended for experimentation, they are fairly limited in what you can do with them.
I am creating an iOS app that needs to download an HTML page and extract some information from it. To get to the page I also need to log in. I have looked everywhere for some code on how to log in to a site using the Cocoa framework, but every answer I see only seems to answer half the question. Here is the login site: romres.ist-asp.com. I need some code for writing something in the first field (the other two are left blank), then submitting the form, and then I need to be able to see the next page. I believe apps like Facebook use some of the same technology: you log in to Facebook and can then see the contents of your profile.
Basically what you want to do is called scraping.
Scraping is really easy for sites that don't require authentication, but in your case what you should do is inspect the POST request made when logging in to the site you're interested in (try to understand how the service responds), and then the requests made, once already logged in, to retrieve each page.
The purpose of all of this is to later be able to simulate, in code, the regular HTTP requests that would normally come from a browser.
If you have any doubts, ask in the comments.
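As a rough illustration of that flow (sketched in PHP with cURL for brevity; on iOS the same two requests would be issued with NSURLConnection or NSURLSession), a hypothetical form login could be replayed like this, where the login path and field name are placeholders you would get from inspecting the real form:

<?php
// Replay a form login, keep the session cookie, then fetch a protected page.
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies');

// 1. POST the login form; the server's session cookie is stored in $cookieJar.
$ch = curl_init('https://romres.ist-asp.com/login');   // placeholder path
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(array('username' => 'myuser')), // placeholder field
    CURLOPT_COOKIEJAR      => $cookieJar,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_RETURNTRANSFER => true,
));
curl_exec($ch);
curl_close($ch);

// 2. Request the page behind the login, sending the stored cookie back.
$ch = curl_init('https://romres.ist-asp.com/protected/page');   // placeholder path
curl_setopt_array($ch, array(
    CURLOPT_COOKIEFILE     => $cookieJar,
    CURLOPT_RETURNTRANSFER => true,
));
$html = curl_exec($ch);
curl_close($ch);

// $html now contains the HTML to parse for the information you need.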
Recently, search engines have become able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages created by a site like Facebook that are updated semi-frequently? Does Google attempt to store every possible user name?
As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but is shorthand for a query like "select username from users", with the result displayed on the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.
EDIT: I guess I didn't really ask what I wanted to know. Do I need to be as big as Twitter or Facebook for Google to build special ways to crawl my site? Will Google automatically find my users' profiles if I allow anyone to view them? If not, what do I have to do to make that work?
In the case of tweets in particular, Google isn't 'crawling' for them in the traditional sense; they've integrated with Twitter to provide the search results in real-time.
In the more general case of your question, dynamic content is not new to Facebook or Twitter, though it may seem to be. Google crawls a URL; the URL provides HTML data; Google indexes it. Whether it's a dynamic query that's rendering the page, or whether it's a cache of static HTML, makes little difference to the indexing process in theory. In practice, there's a lot more to it (see Michael B's comment below.)
And see Vartec's succinct post on how Google might find all those public Facebook profiles without actually logging in and poking around FB.
OK, that was vastly oversimplified, but let's see what else people have to say.
As far as I know Google isn't able to read and store the actual contents of profiles, because the Google bot doesn't have a Facebook account, and it would be a huge privacy breach.
The bot works by hitting facebook.com and then following every link it can find. Whatever content it sees on the page it hits, it stores. So even if it follows a dynamic url like www.facebook.com/username, it will just remember whatever it saw when it went there. Hopefully in that particular case, it isn't all the private data of said user.
Additionally, Facebook can and does provide special instructions that search bots can follow, so that Google results don't include a bunch of login pages.
Profiles can be linked to from outside;
the site may provide a sitemap.
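To make the sitemap point concrete: if profile pages are public, one straightforward way to help crawlers discover them is to publish an XML sitemap listing every profile URL. A rough sketch, where the database credentials, table, and URL scheme are all placeholders:

<?php
// sitemap.php — emit an XML sitemap listing every public profile URL, so
// crawlers can find profiles even if nothing else links to them.
header('Content-Type: application/xml; charset=utf-8');

$db = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password'); // placeholder credentials
$stmt = $db->query('SELECT username FROM users WHERE profile_is_public = 1');

echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($stmt as $row) {
    $loc = 'http://example.com/' . rawurlencode($row['username']);
    echo '  <url><loc>' . htmlspecialchars($loc) . '</loc></url>' . "\n";
}
echo '</urlset>';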