Search engine for a blog website (searching inside linked pages)

I created a very basic search option for my blog, and it generates results based on topics and keywords. However, in certain articles I add links to external websites, for example when I refer to someone else's blog for more information. Is it possible for my search to also go through those linked external sites and find results there? I don't want to use Google Custom Search (GCSE).
Thanks in advance; it will be of great help.

Yes, it is possible to write a bot that crawls external websites from links. I've made one that crawled 100K+ URLs, so yes, you can build one that crawls the links from your blog.
To create a search engine, you'll need to know some internals regarding how they work...
Search Bots work like this:
Crawler fetches pages. This step is pretty easy, as it uses curl.
Parser splits the HTML into pieces so that data can be extracted from the page. This has two sub-components:
a. Extracts any data from the page that you want to capture & then saves that data into a database.
b. Extracts links & places them back into the crawling queue. This creates an infinite loop, so your bot never stops crawling... (unless someone else's malformed URL crashes it, which happens a lot, so be ready to fix it frequently).
Indexer creates lookup indexes, which map keywords to each web page's contents. This has two sub-components:
a. Creates a Forward Index, which maps each document to keywords that are inside of that document.
doc1 | bird, aviary, robin, dove, blue jay, cardinal
doc2 | birds, bird watching, binoculars
doc3 | cats, eat, birds
doc4 | cats, generally, don't, like, water, nor, neighborhood, dogs
doc5 | dog, shows, look, fun
b. Creates an Inverted Index from the Forward Index by reversing the mapping. This lets users search by keyword; the search script then looks up and suggests the documents that users may want to view. Like so...
bird | doc1, doc2
cat | doc3, doc4
dog | doc4, doc5
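The two indexes above can be sketched in a few lines of Python. The documents are toy stand-ins; a real indexer would also strip punctuation and stem words (e.g. "birds" vs. "bird"):

```python
from collections import defaultdict

# Toy documents (hypothetical stand-ins for crawled page text).
docs = {
    "doc1": "bird aviary robin dove blue jay cardinal",
    "doc2": "birds bird watching binoculars",
    "doc5": "dog shows look fun",
}

def tokenize(text):
    # Naive tokenizer: lowercase and split on whitespace.
    return text.lower().split()

# Forward Index: document -> keywords inside that document.
forward_index = {doc_id: set(tokenize(text)) for doc_id, text in docs.items()}

# Inverted Index: keyword -> documents containing it.
inverted_index = defaultdict(set)
for doc_id, words in forward_index.items():
    for word in words:
        inverted_index[word].add(doc_id)

print(sorted(inverted_index["bird"]))  # ['doc1', 'doc2']
print(sorted(inverted_index["dog"]))   # ['doc5']
```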
Search Forms work like this:
Search Form shows the HTML input box to the user.
Search Script will search the Inverted Index to find which document links to display in the Search Engine Results Page.
Search Engine Results Page (yes, SERP is an actual industry acronym for Search Engine Results Page). This displays the list of search result links. You can style it any way you'd like; it doesn't have to look like Google's, Bing's, or Yahoo's engines.
Examples:
Searching for:
"bird" returns links to "doc1, doc2"
"cat" returns links to "doc3, doc4"
"dog" returns links to "doc4, doc5"
Good luck building your search engine for your blog!

Related

How do I trace multiple XML elements with the same name & without any id?

I am trying to scrape a website for financials of Indian companies as a side project & put it in Google Sheets using XPATH
Link: https://ticker.finology.in/company/AFFLE
I am able to extract data from elements that have a specific id, like cash, net debt, etc. However, I am stuck extracting data for labels like Sales Growth.
What I tried:
Copying the full XPath from the console: //*[@id="mainContent_updAddRatios"]/div[13]/p/span. This works; however, I am relying on the index of the div (13), which may change for different companies, so I am unable to automate it.
Please assist with a scalable solution
PS: I am a Product Manager with basic coding expertise as I was a developer few years ago.
At some point you need to "hardcode" something unless you have some other means of mapping the content of the page to your spreadsheet. In your example you appear to be targeting "Sales Growth" percentage. If you are not comfortable hardcoding the index of the div (13), you could identify it by the id of the "Sales Growth" label which is mainContent_lblSalesGrowthorCasa.
For example, change your
//*[@id="mainContent_updAddRatios"]/div[13]/p/span
to:
//*[@id = "mainContent_updAddRatios"]/div[.//span/@id = "mainContent_lblSalesGrowthorCasa"]/p/span
which selects the div based on it containing a span with id="mainContent_lblSalesGrowthorCasa". Ultimately, whether you "hardcode" the exact index of the div or "hardcode" the ids of the nodes, you are still embedding assumptions about the structure of the page.
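To show how that XPath behaves, here it is applied with Python's lxml to a simplified stand-in fragment. The ids come from the question; the surrounding markup is an assumption, not the real page's structure:

```python
from lxml import html

# Simplified stand-in for the relevant fragment of the page
# (ids taken from the question; surrounding markup is assumed).
page = """
<div id="mainContent_updAddRatios">
  <div>
    <span id="mainContent_lblSalesGrowthorCasa">Sales Growth</span>
    <p><span>12.3%</span></p>
  </div>
  <div>
    <span id="mainContent_lblOther">Other Ratio</span>
    <p><span>4.5%</span></p>
  </div>
</div>
"""
tree = html.fromstring(page)

# Select the div by the label's id instead of a positional index.
xpath = ('//*[@id = "mainContent_updAddRatios"]'
         '/div[.//span/@id = "mainContent_lblSalesGrowthorCasa"]/p/span')
print([s.text_content() for s in tree.xpath(xpath)])  # ['12.3%']
```

In practice you would fetch the live page (e.g. with requests) and feed its content to html.fromstring instead of the hard-coded string.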
Thanks @david, that helped.
Two questions
What if the structure of the page would change? Example: If the website decided to remove the p tag then would my sheet fail? How do we avoid failure in such cases?
Also, since every id is unique, the probability of an id changing is lower than that of the index changing. Correct me if I am wrong.
What do we do when the elements don't have an id, like Profit Growth, RoE, RoCE, etc.?

(Google sheets) Query and return multiple tables from url inject

I use the JSON data from a Google spreadsheet for two mobile applications (iOS and Android). The same information can be output as HTML or XML; in this case I am using HTML so the information from the spreadsheet can be understood by everyone. The only practical way to do this without authentication (OAuth) is through public URL injects. Information about what I'm talking about can be found here. To understand what I'm asking, you have to actually click the links and see for yourself. I don't know what to "call" some of the things I'm asking about, as Google's documentation is poor, through no fault of my own.
In my app I have a search feature that queries the spreadsheet (USING A URL REQUEST) along the lines of this,
https://docs.google.com/spreadsheets/d/1yyHaR2wihF8gLf40k1jrPfzTZ9uKWJKRmFSB519X8Bc/gviz/tq?tqx=out:html&tq=select+A,B,C,D,E+where+(B+contains"Cat")&gid=0
I select my columns (A, B, C, D, and E) and ask Google to return only the rows where column B contains the word "Cat". Again, I'm stressing the point that this is done via a URL address ("inject" being the proper term). I CANNOT use most functions/formulas that would normally work within a spreadsheet, like ArrayFormula or ImportRange. In fact, I only have access to 10 language clauses (read the link from before). I have fairly good knowledge of spreadsheets and databases, and while the URL method of getting information from them is similar, they are in NO way the same thing.
Now, I would like to point out this part within the URL
tq?tqx=out:html&tq=select+A,B,C,D,E+where+(B+contains"Cat")&gid=0
Type of output, HTML in this case
tqx=out:html
The start of query
&tq=
Select columns A-E
select+A,B,C,D,E
For returning specific information about Cat
where+(B+contains"Cat")
This is probably the most important part of my question. This is used for specifying what table (Tab) is being queried.
&gid=0
If the gid is changed from gid=0 to gid=181437435 the data returned is from the spreadsheets second table. Instead of having to make 2 requests to search both tables is there a way to do both in one request? (like combining the 2) <— THIS IS WHAT I’M ASKING.
There is an AND clause that I have tried all over the URL:
select+A,B,C,D,E+where+(B+contains%20"Cat")&gid=181437435+AND+select+A,B,C,D,E+where+(B+contains%20"Cat")&gid=0
I have even flipped the gid around and put it in other places, but it seems to only honor the last gid in the URL, and no matter what is done, only one table is returned. Grouping is allowed, by the way. If that doesn't clear my question up, let me know where you're lost. Also, I would have posted more URLs for easy access, but I am kind of on this 2-URL maximum program.
If I understand your requirement, indeed it is possible, with syntax like this, for example:
=ArrayFormula(QUERY({Sheet1!A1:C4;Sheet2!B1:D4},"select * order by Col1 desc"))
The ; stacks one array above the other (, for side by side).
My confusion is with "URL Query Language": what is here called Google Query Language (there is even a tag, though IMO almost all those questions belong on Web Applications, including this one, by my understanding!) is not confined to use with URLs.
In the example above the sheet references might be replaced with data import functions.
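For reference, the URL anatomy broken down in the question can be assembled programmatically. A small Python sketch; the spreadsheet id and query are the ones quoted above (note that urlencode percent-escapes characters like the colon in out:html, which the endpoint accepts):

```python
from urllib.parse import urlencode

# Build a gviz query URL (anatomy as described in the question).
# SHEET_ID and the gid are the ones quoted in the question.
SHEET_ID = "1yyHaR2wihF8gLf40k1jrPfzTZ9uKWJKRmFSB519X8Bc"
params = {
    "tqx": "out:html",                                  # type of output
    "tq": 'select A,B,C,D,E where (B contains "Cat")',  # the query
    "gid": "0",                                         # which tab (table)
}
url = (f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/gviz/tq?"
       + urlencode(params))
print(url)
```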

How can I smartly extract information from an HTML page?

I am building something that can more or less extract key information from an arbitrary web site. For example, if I crawled a McDonald's page and wanted to figure out programmatically the opening and closing times of McDonald's, what is an intelligent way to do it?
In a general case, maybe I also want to find out whether McDonalds sells chicken wings, or the address of McDonalds.
What I am thinking is that I will have a specific case for time, wings, and address and have code that is unique for each of those 3 cases.
But I am not sure how I can approach this. I have the sites crawled and the HTML and related information parsed into JSON already. My current approach is something like finding the title tag and checking whether it contains keywords like address or location, etc. If the title contains those keywords, then I look through the current page and identify chunks of content that resemble an address, such as content naming cities or countries, or content containing the word St or Street.
I am wondering if there is a better approach to look for key data, and looking for a nicer starting point or bounce some ideas and whatnot. Or even if there are good articles to read about this would be great as well.
Let me know if this is unclear.
Thanks for the help.
In order to parse such HTML pages, you have to have knowledge about their structure. There's no general solution for this problem; each web page needs its own solution. However, a good approach is to ensure the HTML code is also valid XML and then use XPath to access elements at known positions. There may even be an XPath-like solution for standard HTML (which is not always valid XML). This way you can define a set of XPaths for each page that give you the specific elements if they exist.
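As one concrete example of the per-page heuristics discussed above, a hypothetical regex for opening hours might look like this. The pattern and the sample text are illustrative assumptions, not a general solution:

```python
import re

# Hypothetical heuristic: look for time ranges like "10:30 AM - 11:00 PM"
# in page text. Real pages vary wildly, so each site may need its own pattern.
HOURS_RE = re.compile(
    r"\b(\d{1,2}(?::\d{2})?\s*(?:AM|PM))\s*-\s*"
    r"(\d{1,2}(?::\d{2})?\s*(?:AM|PM))\b",
    re.IGNORECASE,
)

text = "Our restaurant is open 10:30 AM - 11:00 PM every day."
match = HOURS_RE.search(text)
if match:
    print(match.group(1), "to", match.group(2))  # 10:30 AM to 11:00 PM
```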

Two pages with the same title and/or meta-description within one domain

Is there any penalty on Google rankings for using two pages with the same title and/or meta-description? If so, what is the penalty?
Both pages are on the same domain. One page URL is example.com/abcd and the other page URL is example.com/uvwxyz. The H1 header for both pages is the same, and both have the same meta-description.
I don't think Google would punish this.
Think of YouTube (which is owned by Google): The content of the title element follows this schema: [user-contributed video title] - YouTube. The meta-description consists of the user-contributed video description.
Now, there are probably thousands of videos with the very same title ("My cute cat") and some of them could even have the same description ("See my cute cat").
However, if a website consists of many (or even only) pages with the same title and meta-description, it gambles away the possibility of a better ranking. But when all these pages really have different content, it won't be punished.
Title and meta description are among the signals search engines use to identify the topic of a page and rank it in search results. The title carries high weight in rankings, and both title and description are displayed in search results along with the URL.
Since, as you mention, the content of the two pages is different, by having a duplicate title/description you are losing some opportunity to target different keywords for search rankings.
Having the same title/description also makes it difficult for both users and search engines to identify and differentiate between the pages.
So even though there is no negative influence, you are losing an important signal (the title) that could help improve search ranking.
Some ref reading on title: http://www.searchenabler.com/blog/title-tag-optimization-tips-for-seo/
& duplicate content: http://www.searchenabler.com/blog/learn-seo-duplicate-content/
There is not a punishment per se; it just isn't best practice. Why do you have duplicate meta information? Is the information the same on each page? Does it need to be?

Represent the search result by adding relevant description

I'm developing a simple search engine. When I search for something, it produces a list of URLs relating to that search query.
I want to present the search results with a small, relevant description under each resulting URL (e.g., when you search for something on Google, a small description is shown with each resulting link).
Any ideas?
Thanks in advance!
You need to store the position of each word in a web page while indexing.
Your index should contain: the word ID, the document ID of the document containing the word, the number of occurrences of the word in that document, and all positions where the word occurred.
For more info you can read the research paper by Google founders-
The Anatomy of a Large-Scale Hypertextual Web Search Engine
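A positional index along those lines can be sketched like this (toy documents; the occurrence count is simply the length of each position list):

```python
from collections import defaultdict

# Positional index: word -> {doc id -> list of positions in that doc}.
def build_positional_index(docs):
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            index[word].setdefault(doc_id, []).append(pos)
    return index

docs = {"doc1": "my cute cat", "doc2": "the cat sat on the cat mat"}
index = build_positional_index(docs)
print(index["cat"])  # {'doc1': [2], 'doc2': [1, 5]}
```

With the positions stored, the search script can later pull the words around a match to build the result snippet.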
You can fetch the meta description of the page and display it as a short description. Google also does this.
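A minimal sketch of extracting the meta description with Python's standard-library HTML parser; the page string here is a hard-coded stand-in for fetched page source:

```python
from html.parser import HTMLParser

# Pull the <meta name="description" content="..."> value out of HTML.
class MetaDescriptionParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content")

page = ('<html><head>'
        '<meta name="description" content="A blog about cats.">'
        '</head></html>')
parser = MetaDescriptionParser()
parser.feed(page)
print(parser.description)  # A blog about cats.
```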
