I noticed that in OpenAPI Path Items and some other constructs you have both summary and description fields. What is the difference between them, and what is the purpose of each? To me, they seem to accomplish the same thing, and I did not find anything about this in the documentation. It might seem like a pointless question at first, but since OpenAPI can be used to generate API code, produce documentation, and so on, I think it makes sense to clear up the purpose of these fields.
summary is short, description is more detailed.
Think of the summary as a short one or two sentence explanation of what the intended purpose of the element is. You won't be able to describe all the subtle details, but at a high level, it should be able to explain the purpose of the element. Many documentation tools will only display the summary when there's a list of different components or endpoints, so this is a good place to put just enough information to let an unfamiliar reader know if this is the thing that will let them do what they want to do.
On the other hand, description is where the full details should go. For example, if you have special enum values, you can include a table of the behavior of each value. If you have an endpoint with special behavior not easily defined in OpenAPI, this is where you'd explain to the reader those details.
Many elements may be straightforward and not need a lot of details, so you may find your summary is sufficient. Different documentation tools may automatically use summary if description is not present (or vice versa), though you will want to verify that your particular tool does this. My personal preference is to default to description, and only use summary when the description is too verbose.
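For instance, here is a minimal sketch of a Path Item that uses both fields, written as a Python dict (the same structure you would express in YAML or JSON); the endpoint and wording are made up:

```python
# Hypothetical /bookings operation showing how summary and description differ.
booking_path_item = {
    "get": {
        "summary": "List a customer's bookings.",
        "description": (
            "Returns every booking belonging to the authenticated customer, "
            "newest first. Cancelled bookings are included but flagged as "
            "cancelled; bookings older than 24 months are archived and must "
            "be fetched from a separate archive endpoint instead."
        ),
    }
}

# A docs tool would typically show only the summary in the endpoint list
# and the full description on the endpoint's detail page.
print(booking_path_item["get"]["summary"])
```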
I am new to UML modelling. I was creating a use case diagram and generalised the use cases into 4 main categories: Add info, View info, Edit info and Delete info. Each generalisation contains 6 use cases, so in total there are about 28 use cases, and it's difficult to fit them all in one system boundary while keeping the diagram easy to understand. Can someone please advise how I can overcome this?
Below is what I have so far. I have yet to add EditInformation and DeleteInformation, but I've run out of space.
Many thanks
Use cases describe the highest-level behavior of the system; you shouldn't delve into lower-level details. Normally, a system would have no more than several use cases. According to your description, your actual use cases are what you classified as categories: Add info, View info, Edit info and Delete info. Finer details such as ViewCar or AddBooking belong in the corresponding sequence/activity diagrams.
You can set yourself a limit on how many artefacts you put on a single diagram. For example, you might require that every diagram fit on a certain standard page format when printed. The goal, however, is not to abstract the use cases to a level where they all fit on one diagram but no longer add any value; you'd end up with a use case chart for a space rocket that says nothing but "launch spacecraft". Instead, identify the use cases at a highly abstract level and then break them down into more detailed ones on subordinate diagrams. Most modeling tools let you create links between diagrams. If you'd like to read more, I'd recommend Alistair Cockburn's standard book on use cases, ISBN 978-0201702255.
How can I link to the documentation of functions, structures and classes on MSDN in a persistent manner?
Microsoft appears to break the links every few months for no good reason, and I haven't come across a way to refer to topics that, for example, always points to the latest version of the documentation. Is there such a way?
Edit: what I am looking for is a way to "permalink" a certain piece of documentation without having to worry about it disappearing. Preferably those names should be meaningful as well, instead of the usual combination of letters and digits followed by .asp or .aspx, such as: http://msdn.microsoft.com/en-us/library/ms923723.aspx.
NB: I realize that, even though this relates to programming, it may not be suitable for SO, so feel free to suggest moving it to a more appropriate SE site.
Please give an example. The links don't actually break that often.
For instance, http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.aspx will never break, but http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest(v=vs.110).aspx is specific to the version.
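Building on that observation, here is a small sketch that strips the version segment so a link points at the versionless (latest) page; the regex is illustrative, not an official scheme:

```python
import re

def versionless_msdn_url(url):
    """Drop the '(v=...)' segment, e.g. '(v=vs.110)', from an MSDN library URL."""
    return re.sub(r"\(v=[^)]*\)", "", url)

print(versionless_msdn_url(
    "http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest(v=vs.110).aspx"
))
# -> http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.aspx
```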
I'm not talking about HTML tags, but about the tags used to describe blog posts, YouTube videos, or questions on this site.
If I were crawling just a single website, I'd just use an XPath to extract the tags, or even a regex if it's simple. But I'd like to be able to throw any web page at my extract_tags() function and get the tags listed.
I can imagine using some simple heuristics, like finding all HTML elements with id or class of 'tag', etc. However, this is pretty brittle and will probably fail for a huge number of web pages. What approach do you guys recommend for this problem?
Also, I'm aware of Zemanta and Open Calais, which both have ways to guess the tags for a piece of text, but that's not really the same as extracting tags real humans have already chosen. But I would still love to hear about any other services/APIs to guess the tags in a document.
EDIT: Just to be clear, a solution that already works for this would be great. But I'm guessing there's no open-source software that already does this, so I really just want to hear from people about possible approaches that could work for most cases. It need not be perfect.
EDIT2: For people suggesting that a general solution that usually works is impossible and that I must write custom scrapers for each website/engine, consider the arc90 readability tool. It can extract the article text from any given article on the web with surprising accuracy, using some sort of heuristic algorithm, I believe. I have yet to dig into their approach, but it fits into a bookmarklet and does not seem too involved. I understand that extracting an article is probably simpler than extracting tags, but it should serve as an example of what's possible.
Systems like the arc90 example you give work by looking at things like tag/text ratios and other heuristics; there is sufficient difference between the text content of a page and the surrounding ads/menus. Other examples include tools that scrape emails or addresses, where there are patterns that can be detected and locations that can be recognized. In the case of tags, though, you don't have much to help you uniquely distinguish a tag from normal text; it's just a word or phrase like any other piece of text. A list of tags in a sidebar is very hard to distinguish from a navigation menu.
Some blogs, like Tumblr, do have tags whose URLs contain the word "tagged", which you could use. WordPress similarly has ".../tag/..." style URLs for tags. Solutions like this would work for a large number of blogs independent of their individual page layout, but they won't work everywhere.
If the sources expose their data as a feed (RSS/Atom) then you may be able to get the tags (or labels/categories/topics etc.) from this structured data.
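If a feed is available, something like this sketch could pull the categories out (assumes the third-party feedparser library; the feed URL is hypothetical):

```python
import feedparser  # pip install feedparser

# Hypothetical feed URL; RSS/Atom categories usually surface as entry tags.
feed = feedparser.parse("https://example.com/blog/feed")
for entry in feed.entries:
    tags = [t["term"] for t in entry.get("tags", [])]
    print(entry.get("title", "(untitled)"), "->", tags)
```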
Another option is to parse each web page and look for tags formatted according to the rel=tag microformat.
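Here is a rough sketch combining the rel=tag microformat with the "/tag/"-style URL heuristic from above (assumes requests and BeautifulSoup; it will misfire on some layouts):

```python
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def extract_tags(url):
    """Heuristic: collect anchors marked rel='tag' or pointing at tag-style URLs."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tags = set()
    for a in soup.find_all("a", rel="tag"):  # rel=tag microformat
        tags.add(a.get_text(strip=True))
    for a in soup.find_all("a", href=True):  # WordPress/Tumblr URL pattern
        if "/tag/" in a["href"].lower() or "/tagged/" in a["href"].lower():
            tags.add(a.get_text(strip=True))
    return tags
```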
Damn, I was just going to suggest Open Calais. There's going to be no "great" way to do this. If you have some target platforms in mind, you could sniff for WordPress and look at its link structure, then do the same for Flickr...
I think your only option is to write custom scripts for each site. To make things easier, though, you could look at AlchemyAPI. They have similar entity-extraction capabilities to OpenCalais, but they also have a "Structured Content Scraping" product which makes this a lot easier than writing XPaths, by using simple visual constraints to identify pieces of a web page.
This is impossible because there isn't a well-known, widely followed specification. Even different versions of the same engine can produce different output - and with WordPress, a user can create their own markup.
If you're really interested in doing something like this, you should know it's going to be a really time-consuming and ongoing project: you're going to create a library that detects which "engine" is being used on a page and parses it accordingly. If you can't detect a page for some reason, you add new parsing rules and move on.
I know this isn't the answer you're looking for, but I really can't see another option. I'm into Python, so I would use Scrapy for this since it's a complete framework for scraping: well documented and really extensible.
Try making a Yahoo Pipe and running the source pages through the Term Extractor module. It may or may not give great results, but it's worth a try. Note - enable the V2 engine.
Looking at arc90, it seems they are also asking publishers to use semantically meaningful mark-up [see https://www.readability.com/publishers/guidelines/#view-exampleGuidelines] so they can parse it rather easily. Presumably they have either developed generic rules, such as the tag/text ratios #dunelmtech suggested, which can work for article detection, or they might be using a combination of text-segmentation algorithms from the Natural Language Processing field, such as TextTiler and C99, which could be quite useful for article detection - see http://morphadorner.northwestern.edu/morphadorner/textsegmenter/ and google for more info on both [published in academic literature - google scholar].
However, detecting "tags" as you require seems to be a difficult problem (for the reasons already mentioned in the comments above). One approach I would try is to use one of the text-segmentation algorithms (C99 or TextTiler) to detect the article start/end, and then look for DIVs/SPANs/ULs with CLASS and ID attributes containing "tag". Since, in terms of page layout, tags tend to sit just underneath the article and just above the comment feed, this might work surprisingly well.
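A sketch of that second step, assuming BeautifulSoup (the article-boundary detection is left out; the element and attribute choices follow the heuristic just described):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def candidate_tag_blocks(html):
    """Find div/span/ul elements whose class or id contains 'tag' and
    return the link texts inside them as candidate tag lists."""
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for el in soup.find_all(["div", "span", "ul"]):
        attrs = " ".join(el.get("class") or []) + " " + (el.get("id") or "")
        if "tag" in attrs.lower():
            links = [a.get_text(strip=True) for a in el.find_all("a")]
            if links:
                blocks.append(links)
    return blocks
```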
Anyway, it would be interesting to see whether you get somewhere with the tag detection.
Martin
EDIT: I just found something that might really be helpful. The algorithm is called VIPS [see: http://www.zjucadcg.cn/dengcai/VIPS/VIPS.html], which stands for Vision Based Page Segmentation. It is based on the idea that page content can be visually split into sections. Compared with DOM-based methods, the segments obtained by VIPS are much more semantically aggregated. Noisy information, such as navigation, advertisements, and decoration, can be easily removed because it is often placed in certain positions of a page. This could help you detect the tag block quite accurately!
There is a term extractor module for Drupal (http://drupal.org/project/extractor), but it's only for Drupal 6.
What would be an intelligent way to store text so that it can be intelligently parsed and translated later on?
For example: "The employee is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself."
The above could be the generic text shown to the user prior to evaluation. If the user is male (say Shaun) or female (say Mary), the above text should be translated as follows.
Mary is outstanding as she can identify her own strengths and weaknesses and is comfortable with herself.
Shaun is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself.
How do we store the evaluation criteria in the first place with appropriate placeholders or tokens? (In the above case, "employee" should be replaced with the employee's name, and based on the employee's gender the words he/she and himself/herself need to be translated.)
Is there a mechanism to automatically translate the text given the above information?
The basic idea of doing something like this is called Mail Merge.
This page seems to discuss how to implement something like this in Ruby.
[Edit]
A Google search gave me this - http://freemarker.org/
I don't know much about this library, but it looks like what you need.
This is a very broad question in the field of Natural Language Processing. There are numerous ways to go about it; the questions you asked seem too broad.
If I understand part of your question correctly, it could be done this way:
#variable{name} is outstanding as #gender{he/she} can identify #gender{his/hers} own strengths and weaknesses and is comfortable with #gender{himself/herself}.
Or:
#name is outstanding as #he can identify #his own strengths and weaknesses and is comfortable with #himself.
... if gender is the major problem.
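A minimal sketch of that second scheme in Python (the token syntax and pronoun table are just assumptions for illustration):

```python
PRONOUNS = {
    "male":   {"#he": "he",  "#his": "his", "#himself": "himself"},
    "female": {"#he": "she", "#his": "her", "#himself": "herself"},
}

TEMPLATE = ("#name is outstanding as #he can identify #his own strengths "
            "and weaknesses and is comfortable with #himself.")

def render(template, name, gender):
    text = template.replace("#name", name)
    # Replace longer tokens first so a token that is a prefix of another
    # can never clobber it.
    for token, word in sorted(PRONOUNS[gender].items(), key=lambda kv: -len(kv[0])):
        text = text.replace(token, word)
    return text

print(render(TEMPLATE, "Mary", "female"))
print(render(TEMPLATE, "Shaun", "male"))
```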
I have had some experience working with a tool called Grammatica when building a custom engine for parsing and evaluating user-input, Excel-like formulas. It may not be at the level of sophistication you're looking for, but it's a start. It basically uses many of the same concepts that popular code compiler parsers employ. It's definitely worth checking out.
I agree with Kornel; this question is too broad. What you seem to be talking about is semantics, for which RDF and OWL can be a good starting point. Read about modeling semantics using markup and you can work your way up from there.
Does a tool exist that can parse text and output that text, hyper-linked to Wikipedia entries for words of interest?
For example, I'd like a tool that could turn something like:
The most popular search algorithm on a sorted list is the binary search.
Into the same sentence, but with each phrase that has its own Wikipedia entry (such as "search algorithm", "sorted list", and "binary search") hyperlinked to that entry.
It would be wonderful if Wikipedia had an API which would do this, since they would be best equipped to determine what "words of interest" are.
In my example I simply linked every phrase that links directly to an entry, except for "The" and "most".
There is a tool that does exactly what you're asking for.
http://wikify.appointment.at/
It's not perfect, but it works.
You have two separate problems to solve here:
(1) Deciding which words should be linked
(2) Determining whether there's a suitable entry to link these words to
Now, (2) is the simpler one, though it's also somewhat problematic. Wikipedia seems to have an API that allows you to gather data efficiently, and they also allow "screen scraping". But there's a problem with disambiguation: sometimes you might not hit the entry you wanted. For example, "python" links to a disambiguation page, since it can be a programming language, a snake, and a couple of other things.
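For (2), the public MediaWiki API can tell you whether a title exists and, via its page properties, whether it is a disambiguation page; a sketch assuming the requests library:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def lookup(title):
    """Return (exists, is_disambiguation) for a Wikipedia title."""
    params = {
        "action": "query",
        "titles": title,
        "prop": "pageprops",
        "redirects": 1,
        "format": "json",
    }
    pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
    page = next(iter(pages.values()))
    exists = "missing" not in page
    is_disambig = "disambiguation" in page.get("pageprops", {})
    return exists, is_disambig

print(lookup("Binary search"))  # likely (True, False)
print(lookup("Python"))         # likely (True, True) - a disambiguation page
```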
(1) is much harder, though. You can take the "simple approach" and attempt to find links for all non-trivial nouns (or even noun/adjective pairs). Non-trivial here means omitting words like "friend", "word", "computer", and so on.
But this would result in a plethora of links, which isn't convenient to read. It's really up to you to decide what's interesting in the text, and this depends a lot on the text itself. In an article for professional programmers, do you really want to link "search algorithm" every time? For beginners, perhaps you do.
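If you do try the "link every non-trivial noun" route, part-of-speech tagging gets you the candidate anchors; a sketch using NLTK, with a purely illustrative stoplist:

```python
import nltk  # pip install nltk; then download the 'punkt' and POS tagger models

STOPLIST = {"word", "thing", "way", "time"}  # "trivial" nouns - illustrative only

def link_candidates(text):
    """Collect non-trivial nouns as candidate phrases to look up on Wikipedia."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return [word for word, tag in tagged
            if tag.startswith("NN") and word.lower() not in STOPLIST]

print(link_candidates(
    "The most popular search algorithm on a sorted list is the binary search."
))
```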
To conclude, I strongly doubt there's a single general-purpose tool that will do the trick for you. But you surely have all the options at hand, and something tailored to your needs can be coded without too much effort.
Silviu Cucerzan of Microsoft Research tackled this problem - well, not the problem of inserting the links, but the general issue of determining which entities are mentioned in some piece of text. Fortunately for you, he used Wikipedia articles as his set of entities. His paper, "Large-Scale Named Entity Disambiguation Based on Wikipedia Data", is available on his website. Direct link: pdf.