Acronym for a good JIRA ticket?

I once saw an acronym on Wikipedia for what makes a "good" JIRA ticket, but I can't find it anymore.
It was something along the lines of quantifiable, actionable, defined, etc. The acronym did not actually spell out "JIRA."
Does anyone have a reference to the acronym I'm referring to? I would appreciate it.

Most likely, you mean the acronym S.M.A.R.T.
The letters stand for:
Specific
Measurable
Achievable
Relevant
Time-bound
For more information, here is a link to Wikipedia:
https://en.m.wikipedia.org/wiki/SMART_criteria


How to Convert NLP Question to Knowledge Graph triple?

I have what I think is a simple question. I am trying to put together a question-answering system, and I am having trouble converting a natural-language question into a knowledge graph triple. Here is an example of what I mean:
Assume I have a prebuilt knowledge graph with the relationship:
((Todd) -[:picked_up_by]-> (Jane))
How can I make this conversion:
"Who picked up Todd today?" -> ((Todd) -[:picked_up_by]-> (?))
I am aware that there is a field dedicated to "relationship extraction", but I don't think this problem quite fits it. If I had to name it, "question triple extraction" would describe what I am trying to do.
Generally speaking, this looks like a relation extraction problem with your own custom relations. Since the question is quite broad, this is not a full answer, just some pointers.
Check out reading comprehension: projects on GitHub and the lecture by Christopher Manning.
Also, look up Semantic Role Labeling.
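As a rough illustration of what a template-based starting point might look like (the pattern and the relation name `picked_up_by` are assumptions taken from your example, not an established method):

```python
import re

# Map known question templates to partially-filled triples.
# "?" marks the slot the knowledge graph should fill in.
PATTERNS = [
    # "Who picked up Todd?" -> (Todd, picked_up_by, ?)
    (re.compile(r"who picked up (\w+)", re.IGNORECASE), "picked_up_by"),
]

def question_to_triple(question):
    """Return (subject, relation, object) with '?' for the unknown slot."""
    for pattern, relation in PATTERNS:
        match = pattern.search(question)
        if match:
            return (match.group(1), relation, "?")
    return None

print(question_to_triple("Who picked up Todd today?"))
# -> ('Todd', 'picked_up_by', '?')
```

A template list like this obviously does not generalize; the linked reading-comprehension and SRL work is about learning such mappings instead of hand-coding them.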

What does ?page=1 mean?

Suppose I have a URL like https://example.com?page=1 or https://example.com?text=1. What does ?page=1 or ?text=1 mean here? On some websites, like YouTube, I see URLs like https://youtube.com?watch=zcDchec. What does that mean?
Could someone please explain? I need to know this.
That’s the query component (indicated by the first ?).
It’s a common convention to use key=value pairs, separated by &, in the query component. But it’s up to the author what to use. And it’s up to the author to decide what it should mean.
In practice, the query component ?page=1 will likely point to page 1 of something, while ?page=2 will point to page 2, etc. But there is nothing special about this. The author could as well have used the path component, e.g. /page/1 and /page/2.
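To make this concrete, here is a short Python sketch (the URL is made up for illustration) showing how the query component splits into key=value pairs:

```python
from urllib.parse import urlparse, parse_qs

# Everything after the first "?" is the query component.
url = "https://example.com/results?page=2&text=hello"
parsed = urlparse(url)

print(parsed.query)            # page=2&text=hello
print(parse_qs(parsed.query))  # {'page': ['2'], 'text': ['hello']}
```

The server (or client-side script) is free to interpret those pairs however it likes; the URL syntax itself assigns them no meaning.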

Find location from text

I am currently thinking about how to find a location in a text, such as a blog post, without the user having to input any additional information. For example, a post could look like this:
"Aberdeen, With a Foot on the Seafloor
Since the early 1970s, Aberdeen, Scotland, has evolved from a gritty fishing town into the world’s center of innovation in technology for the offshore energy industry."
By reading it I realize that the post is about Aberdeen, Scotland, but how can I geotag it automatically? I have been using the geocoder gem (https://github.com/alexreisner/geocoder) by Alex Reisner, but it seems wasteful to check every word against Google/Nominatim (OSM). My initial idea was simply to brute-force it by checking every word with the geocoder and looking for matches between the words, but it seems like there could be a better way.
Has anyone done anything similar to this? Any algorithm that could be suggested (or gem :) ) would be immensely appreciated!
I'm sure there have been projects dedicated to this; consider, for example, Google's uncanny ability to geotag and pick data out of your personal emails effortlessly.
The most obvious answer I can see here would be to create a few regular expressions for locations. The simplest one would be for City, Country:
/([a-z]+),\s+([a-z]+)/i
This would recognize Aberdeen, Scotland, but also course, I or even thanks, bye. It would be a start, though: you would query only those recognized spots instead of every word in the document.
There are also widely known regular expressions for addresses, cities, etc. You could use those as well if you find your algorithm missing matches.
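If it helps, here is the same candidate-first idea as a Python sketch (the regex is my own illustrative variant, restricted to capitalized words to cut down false positives; only the recognized spans would then be sent to the geocoder):

```python
import re

# Candidate "City, Country" spans: two capitalized words separated by
# a comma. This deliberately over-matches; each candidate would then be
# verified against a geocoder instead of querying every word.
CANDIDATE = re.compile(r"\b([A-Z][a-z]+),\s+([A-Z][a-z]+)\b")

def location_candidates(text):
    return [f"{city}, {region}" for city, region in CANDIDATE.findall(text)]

text = ("Since the early 1970s, Aberdeen, Scotland, has evolved from a "
        "gritty fishing town into a center of offshore energy technology.")
print(location_candidates(text))  # ['Aberdeen, Scotland']
```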
Cheers!

How do you think the "Quick Add" feature in Google Calendar works?

I'm thinking about a project that might use functionality similar to how "Quick Add" parses natural language into something that can be understood with some level of semantics. I'm interested in understanding this better and wondered what your thoughts were on how this might be implemented.
If you're unfamiliar with what "Quick Add" is, check out Google's KB about it.
6/4/10 Update
Additional research on natural language processing (NLP) yields results that are MUCH broader than what I believe is actually implemented in something like "Quick Add". Given that this feature expects specific types of input rather than truly free-form text, I'm thinking it is a much narrower application of NLP. If anyone could suggest a narrower topic I could research, rather than the entire breadth of NLP, it would be greatly appreciated.
That said, I've found a nice collection of resources about NLP including this great FAQ.
I would start by deciding on a standard way to represent all the information I'm interested in: event name, start/end time (and date), guest list, location. For example, I might use an XML notation like this:
<event>
<name>meet Sam</name>
<starttime>16:30 07/06/2010</starttime>
<endtime>17:30 07/06/2010</endtime>
</event>
I'd then aim to build up a corpus of diary entries about dates, annotated with their XML forms. How would I collect the data? Well, if I was Google, I'd probably have all sorts of ways. Since I'm me, I'd probably start by writing down all the ways I could think of to express this sort of stuff, then annotating it by hand. If I could add to this by going through friends' e-mails and whatnot, so much the better.
Now that I've got a corpus, it can serve as a set of unit tests. I need to code a parser to fit the tests. The parser should translate a string of natural language into the logical form of my annotation. First, it should split the string into its constituent words. This is called tokenising, and there is off-the-shelf software available to do it. (For example, see NLTK.) To interpret the words, I would look for patterns in the data: for example, text following 'at' or 'in' should be tagged as a location; 'for X minutes' means I need to add that number of minutes to the start time to get the end time. Statistical methods would probably be overkill here; it's best to create a series of hand-coded rules that express your own knowledge of how to interpret the words, phrases and constructions in this domain.
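As a minimal sketch of such hand-coded rules (the two patterns handled, 'at HH:MM' and 'for N minutes', are just my own illustrative choices, not how Google does it), in Python:

```python
import re

def quick_add(text, default_minutes=60):
    """Tiny rule-based sketch of a Quick Add-style parser.

    Only handles 'at HH:MM' and 'for N minutes'; whatever remains
    becomes the event name. A real parser would need many more rules.
    """
    duration = default_minutes
    match = re.search(r"for (\d+) minutes", text)
    if match:
        duration = int(match.group(1))
        text = text.replace(match.group(0), "").strip()

    start = None
    match = re.search(r"at (\d{1,2}):(\d{2})", text)
    if match:
        start = (int(match.group(1)), int(match.group(2)))
        text = text.replace(match.group(0), "").strip()

    return {"name": text, "start": start, "duration_minutes": duration}

print(quick_add("meet Sam at 16:30 for 90 minutes"))
# -> {'name': 'meet Sam', 'start': (16, 30), 'duration_minutes': 90}
```

Each rule consumes the part of the string it recognizes, which is what keeps the leftover text usable as the event name.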
It would seem that there's really no narrow approach to this problem. I wanted to avoid having to pull along the entirety of NLP to figure out a solution, but I haven't found any alternative. I'll update this if I find a really great solution later.

Blinding a latex paper

Many journals require submission of a blinded version of your paper. The blinded version usually removes:
the list of authors
any citations to the authors' work
How can I create a blinded version of my manuscript without doing this manually?
You could use the extract package to generate a separate LaTeX source file with the author list and acknowledgements automatically removed. I believe the versions package could be used for this as well.
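As a minimal sketch of the versions-package approach (the environment name "unblinded" here is my own arbitrary choice):

```latex
% Preamble: anything inside an "unblinded" environment is dropped
% while \excludeversion is in effect.
\usepackage{versions}
\excludeversion{unblinded}   % for the blinded submission
% \includeversion{unblinded} % for the camera-ready copy

% In the body:
\begin{unblinded}
\section*{Acknowledgements}
We thank our colleagues at Example University.
\end{unblinded}
```

Switching a single line in the preamble then toggles between the blinded and full versions of the same source file.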
As for blinding the citations, I think the point is to refer to your own work in a neutral and unrevealing way. Here are some examples of correct and incorrect ways to cite your own work in a blinded manuscript:
Incorrect: "As I have argued elsewhere..."
Corrected: "As Jones (2001) has argued..."
Incorrect: "As I argue in (Jones 2001)."
Corrected: "As Jones (2001) argues."
Incorrect: "This argument is fleshed out in my (2001)."
Corrected: "Jones (2001) makes this argument in more detail."
One thing that is often overlooked in these cases is that you do not really need to remove citations of your own work, as long as you refer to them in the third person. Instead of writing "As shown in our previous paper (Removed for review)...", you should write "As shown by Smith et al. (Smith, Jones, and Adams, 2008)...".
This reads much better and saves you some of the trouble, while typically satisfying the submission rules of journals and conferences. This way, all you have to do is remove the list of authors, which is not hard to do by hand.
Hadley, you say it's necessary to remove all references to your own work. Is that really the case? At a certain career stage, the absence of a set of citations could very well give away the identity of the author!
I will betray my old-fart status by advising you to ignore those who advise using the active voice. Changing "Hadley (2009) showed that the sky is blue" to "The sky is blue (Hadley 2009)" is a simple solution to the problem of "I" (and "we" and "some of us" and "some of us, plus others") in technical writing. And it has an additional bonus: it focusses attention on the matter at hand, not on the people involved. You want the reader's internal eye to see that broad sweep of azure, not your avatar ;-)
To remove the list of authors, you could use \renewcommand to redefine \maketitle so that it prints the title but not the authors. You could handle other commands that print author information the same way.
For example (note that \@title requires \makeatletter):
\makeatletter
\renewcommand{\maketitle}{{\centering\LARGE\@title\par}}
\makeatother
