I built a website with a text field where the user enters text to search on Google. When the user selects "Search", I want to open new tabs with the first 5 results of the Google search.
Is there any sort of URL parameter that Google provides to do this? For example, the second result of a specific search-phrase?
I haven't seen any way to do this.
You can do this only for the first result, by adding btnI=1 to your URL. For example, http://google.com/search?btnI=1&q=rtf will take you to the first result (which is like clicking "I'm Feeling Lucky").
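For instance, a minimal client-side sketch of using that parameter (an illustration only; browsers typically allow window.open only in response to a user action such as a click):

// Open Google's "I'm Feeling Lucky" result for a query in a new tab.
function openLuckyResult(query) {
    var url = 'https://www.google.com/search?btnI=1&q=' + encodeURIComponent(query);
    window.open(url, '_blank'); // may be blocked unless triggered by a user gesture
}

openLuckyResult('rtf');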
But if you think about it, opening up 5 unknown sites is a bad idea. What if one of those sites is a phishing site, or hosts malware that will run on the user's machine?
When I look through Google's results, I only click on "reputable" sites. I think your idea is a bit risky. I would probably never use a service like this.
This is the website: https://www.thefork.com/search/?cityId=326512&promotionOnly=true
I want a list of all the restaurant names, including the % off, the website, and the address.
I tried all sorts of things, but nothing worked. So I thought to make it even broader, like so:
=importxml("https://www.thefork.com/search/?cityId=326512&promotionOnly=true","//h2/*") but nothing (If I'm correct, this says to take everything between<h 2>< /h2> right?
Then I tried to only take the websites with:
=importxml("https://www.thefork.com/search/?cityId=326512&promotionOnly=true","//a#href/*") if I'm correct? nothing.....
So now I'm reading up on this formula, but I haven't been able to narrow down my search to what I want to achieve. I first thought that maybe it's not possible because there is an option to log in to the website, but this page is public, and there is no need to log in.
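For reference, //a#href/* is not valid XPath: selecting an attribute uses @, and the text of a heading is just //h2. So the conventional forms of those two attempts would be:

=importxml("https://www.thefork.com/search/?cityId=326512&promotionOnly=true","//h2")
=importxml("https://www.thefork.com/search/?cityId=326512&promotionOnly=true","//a/@href")

Even with correct XPath, these may still return nothing: if the page builds its results with JavaScript, IMPORTXML only sees the initial HTML shell, not the rendered content.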
I have a range of Google docs that are publicly viewable, but I would like to get some information about how often they are being viewed. I understand that there used to be a way of doing this with Google Analytics, but now that has been removed.
It seems to me that I have two main options, one of which is to make all my doc links point to a page which redirects according to a query string parameter, e.g.:
http://myurl.net?page=1 # Sends you to one page and logs the visit
http://myurl.net?page=2 # Sends you to another page and logs the visit
Alternatively, I could try to embed some code in each doc that makes a call back to the server with its page number, but I don't know if this is possible.
The first option looks like it should be fairly easy, but I don't see how to redirect the client.
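Something like the following rough sketch is what I have in mind (Node.js purely for illustration; the page map and the console logging are hypothetical placeholders, not a real logging setup):

// Minimal redirect-and-log sketch (Node.js, no external dependencies).
var http = require('http');

var pages = { // hypothetical page-number-to-document mapping
    '1': 'https://docs.google.com/document/d/DOC_ID_1/view',
    '2': 'https://docs.google.com/document/d/DOC_ID_2/view'
};

http.createServer(function (req, res) {
    var page = new URL(req.url, 'http://myurl.net').searchParams.get('page');
    var target = pages[page];
    if (!target) { res.writeHead(404); res.end('Unknown page'); return; }
    console.log(new Date().toISOString() + ' visit to page ' + page); // log the visit
    res.writeHead(302, { Location: target }); // redirect the client
    res.end();
}).listen(8080);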
Could anyone give me some ideas about how to do this? It seems it would be useful for quite a lot of people.
Many thanks.
Justin.
On the Webmasters Q&A site, I asked the following:
https://webmasters.stackexchange.com/questions/42730/how-does-indeed-com-make-it-to-the-top-of-every-single-search-for-every-single-c
But, I would like a little more information about this from a development perspective.
If you search Google for anything job-related, for example "Gastonia jobs" (city + jobs), then, in addition to their search results dominating the first page of Google, you get back a URL structure that looks like this:
indeed.com/l-Gastonia,-NC-jobs.html
I am assuming that the l stands for location in the URL structure. If you do a search for an industry-related job, or a job with a specific company name, you will get back something like the following (Microsoft jobs):
indeed.com/q-Microsoft-jobs.html
With just over 40,000 cities in the USA, I thought, OK, maybe they looped through them and created a page for every single one; that would not be hard for a computer. But the site is obviously dynamic, as each of those pages has tens of thousands of results, paginated by 10. The q above obviously stands for query. The locations I can understand, but they cannot possibly have created a web page for every single query combination, could they?
OK, it gets a tad weirder. I wanted to see if they had a sitemap, so I typed "indeed.com sitemap.xml" into Google, and I got the response:
indeed.com/q-Sitemap-xml-jobs.html
Then I searched for "indeed.com url structure" and, as I mentioned in the other post on Webmasters, I got back:
indeed.com/q-change-url-structure-l-Arkansas.html
Is indeed.com somehow using programming to create a web page on the fly based on my search input into Google? If not, how are they able to have a static page for the millions and millions of possible query combinations, have those pages paginate dynamically, and then have all of them dominate Google's first page of results (albeit that last question may be best for the Webmasters Q&A)?
Does the JavaScript in the page somehow interact with the URL?
It's most likely not a bunch of pages. The "actual" page might be http://indeed.com/?referrer=google&searchterm=jobs%20in%20washington. The site then cleverly produces a human-readable URL using URL rewriting, fetches the jobs in the database that match the query, and voilà...
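To illustrate the rewrite idea (a guess at the approach, not Indeed's actual code), a handler could decode the pretty URL back into ordinary search parameters before hitting the database:

// Sketch: turn Indeed-style pretty URLs back into search parameters.
function parseJobUrl(path) {
    var m = path.match(/^\/(q|l)-(.+)-jobs\.html$/);
    if (!m) return null;
    var value = m[2].replace(/-/g, ' '); // assumption: '-' encodes a space in these URLs
    return m[1] === 'q' ? { query: value } : { location: value };
}

parseJobUrl('/q-Microsoft-jobs.html');    // { query: 'Microsoft' }
parseJobUrl('/l-Gastonia,-NC-jobs.html'); // { location: 'Gastonia, NC' }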
I could be dead wrong, of course. Truth be told, the technical aspect of it can probably be solved in a multitude of ways. Every time a job is added to the site, all the pages needed to match that job might be generated, producing an enormous number of pages for Google to crawl.
This is a great question, but it remains unanswered, on the grounds that, first, a basic Google search using
site:indeed.com
returns over 120 million results, and, second, a query such as "product manager new york" ranks #1 in the results. These pages are obviously pre-generated, which is confirmed by the fact that the copy cached by the search engine (sometimes several days old) has different results from a live query on the site.
Easy: when Google's search bot crawls the pages on Indeed, or on any other job search site, those pages are dynamically created. Here is another site, http://jobuzu.co.uk, which I run and which works similarly to Indeed.
PHP is your friend here, and Indeed don't just use standard databases: look into Sphinx and Solr, as they offer full-text search with better performance than MySQL etc.
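For instance, a quick sketch of querying Solr's standard select handler (the core name "jobs" and the field "title" are hypothetical):

// Sketch: full-text query against a hypothetical Solr core named "jobs".
fetch('http://localhost:8983/solr/jobs/select?wt=json&q=' + encodeURIComponent('title:engineer'))
    .then(function (res) { return res.json(); })
    .then(function (data) { console.log(data.response.numFound + ' matching jobs'); });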
They also make clever use of rel="canonical" and thorough internal linking:
http://www.indeed.com/find-jobs.jsp
Notice that all the pages that actually rank can be found from that direct internal link structure.
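For example, a generic illustration of such a canonical tag (not taken from Indeed's actual markup):

<link rel="canonical" href="http://www.indeed.com/q-Microsoft-jobs.html">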
Basically, I want to know how many people have tweeted a link to a URL, but since there are dozens of link shorteners out there, I don't see any way to do this without having access to all of their URL maps. I found a previous question here, but it was over a year old and didn't have any new answers.
So #1, does anyone know of a service/API for doing this?
And #2, can anyone think of a way to accomplish this task other than submitting the long url in question to all the popular link shortening sites?
P.S. I'm also open to comments about why this is impossible or impractical.
You could perform a Google search (or the equivalent via API) for any pages that link to your page. This is done with the link: keyword. So if you're trying to figure out how many people link to www.example.com (regardless of whether it's through a link-shortener URL), you would just do a Google search for link:www.example.com.
e.g.: http://www.google.com/search?q=link:www.example.com
Note that this will only find pages that have been indexed, so pages that haven't been crawled, or pages that get crawled infrequently, will not show up in the results until a later date (if at all).
Since all sites have different algorithms for shortening URLs, and these are separate sites that most likely do not share their data with each other, how can you hope to find all of them in a single query, or even a small number of queries?
All you can do is brute-force it, and even then this might not do any good if a site simply creates a new short value each time for the same long-form URL (especially if you send a different long-form URL that maps to the same place, like http://www.stackoverflow.com/ rather than http://stackoverflow.com/).
For this to really work, there would have to be a site that ALREADY collects all of this information automatically, and to which every URL-shortening site voluntarily reports. And even if you wrote such a site, that doesn't account for the URL-shortening sites already out there that already have data!
In short, I do not see how this is remotely possible, unless I'm wrong about there being such a database somewhere out there.
Months after asking this question, I came across a solution to a similar question: how to tell how many times a link has been shared on Facebook. The solution is a simple new API call:
http://graph.facebook.com/http://stackoverflow.com
returns the following json data:
{
"id": "http://stackoverflow.com",
"shares": 1627
}
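A minimal sketch that reads that count programmatically (assuming the endpoint is public and still returns the shape shown above):

// Sketch: fetch the share count from the Graph API endpoint above.
fetch('http://graph.facebook.com/http://stackoverflow.com')
    .then(function (res) { return res.json(); })
    .then(function (data) { console.log(data.id + ' shared ' + data.shares + ' times'); });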
Is there a way to automatically append an identifier to a page URL when it is bookmarked in the browser, perhaps something in the document head that gives the browser a directive, or an onBookmark-type JavaScript event? I'm looking for ways to further segment my direct traffic in Google Analytics (if you have other ideas for doing that which are not related to bookmarks, please share them as well).
Example:
http://www.example.com/article
When bookmarked becomes:
http://www.example.com/article#bookmarked
I don't think you can reliably do what you're describing. The best you could do is have a button on your pages that uses window.external.AddFavorite, and then specify your hashed address. It will work for the people who use the button, which may be better than nothing.
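A rough sketch of such a button handler (window.external.AddFavorite is an old IE-only API, hence the feature check and the fallback):

// Sketch: "bookmark this page" button handler with a tagged URL.
function bookmarkWithTag() {
    var url = window.location.href.split('#')[0] + '#bookmarked';
    if (window.external && 'AddFavorite' in window.external) {
        window.external.AddFavorite(url, document.title); // IE only
    } else {
        alert('Press Ctrl+D to bookmark: ' + url); // fallback for other browsers
    }
}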
Keep in mind that bookmark traffic is not simply part of your direct traffic.
There is a nice demonstration on Justin Cutroni's blog of how Google Analytics tracks bookmark visits, and it turns out that, most of the time, bookmark visits are shown as organic (assuming people first do a Google search and then bookmark your site).