How can I help search engine robots index geolocation results?

One of my sites has a few public pages that serve results based on the location the person is searching from. It is similar to going to a weather website and being able to search for your local weather, but here the results are events in that vicinity, plus some other result types.
Looking through a few index reports, it seems these results are either ignored completely or indexed only for the location reported by the bot.
Is there something I can/should do on the site or in the robots.txt file to help fix this?
I have thought about detecting bots and having the site return all results to them, but I am concerned about the response time. Each result has its own page, but we use GUIDs for IDs, so I cannot think of a way to have the bot index those direct pages either. Would a separate page linking to all results in a list be helpful?
Thanks, tried Googling and asking friends and got nowhere.
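For what it's worth, the "separate page linking to all results" idea is essentially a sitemap. A minimal sketch is below, assuming the result GUIDs can be enumerated from the database; the helper function and URL pattern are hypothetical stand-ins, not the site's actual code:

```python
from xml.sax.saxutils import escape

def get_all_result_ids():
    # Stand-in for a database query returning every result GUID.
    return ["0b51f2a4-0000-0000-0000-000000000001",
            "7c3e9d10-0000-0000-0000-000000000002"]

def build_sitemap(base_url="https://example.com/results/"):
    # Emit one <url> entry per result page so crawlers can reach
    # every GUID URL directly, without guessing at identifiers.
    entries = "\n".join(
        f"  <url><loc>{escape(base_url + rid)}</loc></url>"
        for rid in get_all_result_ids()
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n"
            "</urlset>")

print(build_sitemap())
```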

Related

Google Analytics filters. Subsetting results to specific URLs with no parameters or question marks

I have had no luck searching for an understandable answer in this forum, so I decided to ask my own question. I apologize for any existing post that I may have missed.
Briefly, I want statistics for certain pages, which I can target by setting a filter on the URL. The problem is that the results also include visits made while administering the site (Joomla), which show up with query strings.
I would like to get results from pages under, let's say, /index.php/certain_group/
(e.g.
/index.php/certain_group/this-page,
/index.php/certain/group/another-page)
but not those like
/index.php/certain/group/another-page?view=form&layout=edit&a_id=89&return=aHR0cCUzQSUyR...bla bla
I have tried lots of combinations in http://www.analyticsmarket.com/freetools/regex-tester
I am only able to find those that I do not want; I mean, if I use "/index.php/group/.\?.$"
I get
/index.php/certain/group/another-page?view=form&layout=edit&a_id=89&return=aHR0cCUzQSUyR...bla bla
Any clue?
Thanks in advance
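For reference, a pattern along these lines should keep only the query-string-free pages. This is a sketch that assumes the filter is matched against the request path; the Python snippet below is just to exercise the pattern against the example URLs from the question:

```python
import re

# Candidate "include" pattern: pages under /index.php/certain_group/
# with no query string. The [^?]+ part rejects any URL that still
# carries a ?view=...&layout=... style query.
pattern = re.compile(r"^/index\.php/certain_group/[^?]+$")

urls = [
    "/index.php/certain_group/this-page",                       # keep
    "/index.php/certain_group/another-page",                    # keep
    "/index.php/certain_group/another-page?view=form&a_id=89",  # drop
]
for url in urls:
    print(url, bool(pattern.match(url)))
```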

Facebook Search in Graph API

I'm developing an iOS application that lets the user search for a person through the Graph API.
What I want is the SAME behavior that is present on the Facebook website. You know when you begin to search for a person in the top text input? The first results are mostly your friends AND some people you MAY know or have already looked for.
The problem? Try using the same search term to search for a person in the Graph API Explorer.
The Graph API returns DIFFERENT results than the search input on the Facebook website.
Does anyone know why? Is there a way to achieve the same results?
Facebook uses many algorithms to rank search results, covering things like relevance indicators, the complexities of user-centric search, and the product itself.
One of the algorithms used to rank results on their page is described below.
Personal Context:
Unlike most search engines, every Facebook search involves two key elements - a query and a querier.
Just as we need to understand the query, it’s as essential to understand the person behind the query.
People are more likely to be looking for things located in their own city/country or for people who share the same college/workplace.
We consider this information and much more when ranking results. The more we know about you, the better your search results will be.
The Graph API does not use this algorithm; it just returns the raw query results. Hence you cannot achieve the same results using the Graph API search.
To approximate it, you can use the following approach (sketched in code after the list):
Get the user's friend list using me/friends?limit=1&offset=1
Get the user list using the search API
Merge both result sets
Show the merged result(s) to the user
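A minimal sketch of that merge, assuming a valid user access token and the classic /search?type=user endpoint; the ranking heuristic in step 3 is an assumption for illustration, not Facebook's actual algorithm:

```python
import requests

GRAPH = "https://graph.facebook.com"
ACCESS_TOKEN = "..."  # hypothetical user access token

def search_people(query):
    # 1. The user's friends. From v2.0 on, this only returns
    #    friends who also use the same app.
    friends = requests.get(
        f"{GRAPH}/me/friends",
        params={"access_token": ACCESS_TOKEN},
    ).json().get("data", [])

    # 2. The public user search for the same term.
    public = requests.get(
        f"{GRAPH}/search",
        params={"q": query, "type": "user", "access_token": ACCESS_TOKEN},
    ).json().get("data", [])

    # 3. Merge: friends whose names match the query go first,
    #    then everyone else, de-duplicated by id.
    top = [f for f in friends if query.lower() in f.get("name", "").lower()]
    seen = {f["id"] for f in top}
    return top + [p for p in public if p["id"] not in seen]
```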
For more information on the approach and algorithm, you can check Intro to Facebook Search.
Is there a way to achieve the same results? - NO
Does anyone knows why? - NOT REALLY
(Edit: it seems from another answer that someone actually does, but it doesn't change the answer to "Can you achieve it?")
But it's safe to presume that Facebook does not expose all functionality through the API; why would they, after all? They need to keep people coming to their own platform. So I can't give you a straightforward response on WHY, but on IF? Not possible: there is zero documentation about more specialized search for type user. Also, starting with v2.0, when you request a user's friends you will only get the friends who are using the same app.
I'm afraid that you will have to drop the functionality you want to achieve.
It is not just Graph Search. When you refresh your timeline, the order of posts changes every time, because Facebook takes a pull-on-demand approach: whenever you log in, the data from your friends is fetched. This is also why Facebook has a limit on the maximum number of friends.
As for Graph Search and the Graph API: they are not the same, and Graph Search cannot be accessed through the Graph API. So you would have to change your approach.
As for why Graph Search gives different results for the same search term: I would guess that it follows the same pull-on-demand model (although it is not open, so we cannot know for sure). Following that model would make sense, though.
Thanks

From a development perspective, how does the indeed.com URL structure and site work?

On the Webmasters Q&A site, I asked the following:
https://webmasters.stackexchange.com/questions/42730/how-does-indeed-com-make-it-to-the-top-of-every-single-search-for-every-single-c
But, I would like a little more information about this from a development perspective.
If you search Google for anything job related, for example, Gastonia Jobs (City + jobs), then, in addition to their search results dominating the first page of Google, you get a URL structure back that looks like this:
indeed.com/l-Gastonia,-NC-jobs.html
I am assuming that the L stands for location in the URL structure. If you do a search for an industry-related job, or a job with a specific company name, you will get back something like the following (Microsoft jobs):
indeed.com/q-Microsoft-jobs.html
With just over 40,000 cities in the USA, I thought, OK, maybe it's possible they looped through them and created a page for every single one. That would not be hard for a computer. But the site is obviously dynamic, as each of those pages has tens of thousands of results, paginated 10 per page. The q above obviously stands for query. The locations I can understand, but they cannot possibly have created a web page for every single query combination, could they?
OK, it gets a tad weirder. I wanted to see if they had a sitemap, so I typed "indeed.com sitemap.xml" into Google and got the response:
indeed.com/q-Sitemap-xml-jobs.html
Again, I searched for "indeed.com url structure" and, as I mentioned in the other post on Webmasters, I got back:
indeed.com/q-change-url-structure-l-Arkansas.html
Is indeed.com somehow using programming to create a webpage on the fly based on my search input into Google? If they are not, how are they able to have a static page for millions upon millions of possible query combinations, have them dynamically paginate, and then have all of those dominate Google's first page of results (albeit that very last question may be best for the Webmasters Q&A)?
Does the JavaScript in the page somehow interact with the URL?
It's most likely not a bunch of pages. The "actual" page might be http://indeed.com/?referrer=google&searchterm=jobs%20in%20washington. The site then cleverly produces a human-readable URL using URL rewriting, fetches jobs in the database that match the query, and voilà...
I could be dead wrong, of course. Truth be told, the technical side of it can probably be solved in a multitude of ways. For example, every time a job is added to the site, all the pages needed to match that job might be generated, producing an enormous number of pages for Google to crawl.
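To make the rewrite idea concrete, here is a hypothetical sketch of how such a "pretty" URL could be parsed back into search parameters by a single dynamic handler. The pattern is a guess at Indeed's scheme based on the examples above, not their actual code:

```python
import re

# Guess at the pretty-URL scheme: optional q-<query>- and l-<location>-
# segments, terminated by "jobs.html". One handler serves every page.
PRETTY_URL = re.compile(
    r"^/(?:q-(?P<query>[^/]+?)-)?(?:l-(?P<location>[^/]+?)-)?jobs\.html$"
)

def parse(path):
    m = PRETTY_URL.match(path)
    if not m:
        return None
    # Dashes stand in for spaces in the rewritten URL.
    return {k: v.replace("-", " ") for k, v in m.groupdict().items() if v}

print(parse("/l-Gastonia,-NC-jobs.html"))  # {'location': 'Gastonia, NC'}
print(parse("/q-Microsoft-jobs.html"))     # {'query': 'Microsoft'}
```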
This is a great question, but it remains unanswered, on the grounds that a basic Google search using
site:indeed.com
returns over 120MM results, and, secondly, a query such as "product manager new york" ranks #1 in results. These pages are obviously pre-generated, which is confirmed by the fact that the page cached by the search engine (sometimes several days earlier) has different results from a live query on the site.
Easy: when Google's search bot crawls the pages on Indeed, or any other job search site, those pages are dynamically created. Here is another site I run that works similarly to Indeed: http://jobuzu.co.uk
PHP is your friend in this, and Indeed don't just use standard databases; look into Sphinx and Solr, as they offer full-text search for better performance than MySQL etc.
They also make clever use of rel="canonical" and thorough internal linking:
http://www.indeed.com/find-jobs.jsp
Notice that all the pages that actually rank can be found from that direct internal link structure.

How would I find all the short urls that link to a particular long url?

Basically I want to know how many people have tweeted a link to a URL, but since there are dozens of link shorteners out there, I don't see any way to do this without having access to all of their URL maps. I found a previous question here, but it was over a year old and didn't have any new answers.
So #1, does anyone know of a service/API for doing this?
And #2, can anyone think of a way to accomplish this task other than submitting the long url in question to all the popular link shortening sites?
ps- I'm also open to comments about why this is impossible or impractical.
You could perform a Google search (or the equivalent via API) for any pages that link to your page. This is done with the link: keyword. So if you're trying to figure out how many people link to www.example.com (regardless of whether it's through a link shortener URL), you would just do a Google search for link:www.example.com.
e.g.: http://www.google.com/search?q=link:www.example.com
Note that this will only find pages that have been indexed, so pages that haven't been crawled, or pages that get crawled infrequently, will not show up in the results until a later date (if at all).
Since all sites have different algorithms for shortening the URLs, and these are different sites that most likely do not share their data with each other, how can you hope to find all of them in a single or small number of queries?
All you can do is brute-force it, and even then this might not be any good if a site is content to create a new value for the same long-form URL (especially if you send a different long-form URL that maps to the same place, like http://www.stackoverflow.com/ rather than http://stackoverflow.com/).
In order to really get this to work, there would have to be a site that ALREADY automatically collects all of this information from every shortener, and which the URL-shortening sites voluntarily call. And even if you wrote such a site, that doesn't account for the URL-shortening sites already out there that already have data!
In short, I do not see how this is remotely possible, unless I'm wrong about there being such a database somewhere out there.
So, months after asking this question, I came across a solution to a similar problem: how to tell how many times a link has been shared on Facebook. The solution is a simple new API call:
http://graph.facebook.com/http://stackoverflow.com
returns the following json data:
{
"id": "http://stackoverflow.com",
"shares": 1627
}
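In case it helps anyone, the same call from code, using Python's requests library (the endpoint needed no access token at the time):

```python
import requests

# Ask the Graph API for share data on a public URL; the JSON shape
# matches the example above ({"id": ..., "shares": ...}).
resp = requests.get("http://graph.facebook.com/http://stackoverflow.com")
print(resp.json().get("shares"))
```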

How to get private pages crawled by Google

How can I get private pages of my web site crawled and indexed by Google?
Maybe it's not very "conventional", but I want my private page links displayed in Google's index, while still requiring registration to display the page itself.
EDIT: Based on the addition of "maybe it's not very 'conventional', but I want my private page links displayed in Google's index, while still requiring registration to display the page" to the question:
You can check the User-Agent in your PHP code to basically let Google see pages as if it were a registered user (Google's user agent is "Googlebot/1.0", and you can search to find the user agents of other common engines).
However, this behavior is specifically against Google's rules, and they can and will remove your site from the index if they catch you doing it. Their policy is that you should not treat Googlebot any differently than you treat any random person who visits your site.
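To make the (rule-breaking) idea concrete anyway: the answer mentions PHP, but the check is language-agnostic, so here is a minimal sketch in Python. The User-Agent test is naive, and the page helpers return stand-in strings:

```python
def is_googlebot(user_agent: str) -> bool:
    # Naive User-Agent check; the header is trivially spoofed, so a
    # robust check would also verify the client IP via reverse DNS.
    return "Googlebot" in (user_agent or "")

def serve_page(headers: dict, logged_in: bool) -> str:
    # Cloaking sketch: full content for crawlers and registered users,
    # a registration wall for everyone else. As noted above, this is
    # exactly the behavior Google's guidelines forbid.
    if logged_in or is_googlebot(headers.get("User-Agent", "")):
        return "<article>full private content</article>"
    return "<p>Please register to read this page.</p>"

print(serve_page({"User-Agent": "Googlebot/2.1"}, logged_in=False))
```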
(Original Answer) One way is to use a sitemap to show google how to find all of your pages.
In general, and even in the case of sitemaps, if the content you want indexed is not linked to from a page that can be found through the "root" (/), i.e. there is no way for the public to find it, then it probably won't get indexed. The only way to get it indexed is to link to it from somewhere.
The question, though, is: why do you want your private pages in Google anyway?
They'll get crawled if and only if they're publicly accessible and your robots.txt file allows it. That's pretty much all you need to do.
Are you asking how to get Google to index your pages?
There are a couple of ways. You need to ensure that you have properly SEO'd (Search Engine Optimisation) the pages, with title text and description keywords in your meta data.
You can also submit your site to Google; it's a free service, and your site will be placed in a queue of things that Google will index. It may take some time, though.
By far the best way to get your pages indexed is using the meta data in the pages themselves.
Google will only index what is:
- linked from somewhere already in Google's index, and
- accessible to its crawler via normal (unauthenticated) HTTP.
It will also make the contents available in search results to anyone. This may conflict with your idea of a "private" page.
I'm going to assume that all the previous answerers are misunderstanding you. As I read it, you aren't asking how to get Google to index your pages, but rather how to get a list of all the pages Google has already indexed on your site. If that is true, you should have a look at Google Webmaster Tools.
