How to evaluate a device in relation to theories such as "senses (visual, auditory, haptic) and cognition (short-term and long-term memory)"? - hci

How can I evaluate a computerized device or a software application in the HCI field in relation to theories such as "senses (visual, auditory, haptic) and cognition (short-term and long-term memory)", and based on the context in which the device is used? Any help or advice is appreciated.

My guess would be that the senses part would be covered by:
how pleasing the device/software is.
how realistic the application feels.
and, following from that:
immersion.
Virtual reality is a big thing in the HCI world. Firefighters, pilots, the army, etc. use virtual worlds to do more and more of their training; it is important for them to actually feel like they are there so that they react more naturally.
What I can think of for short-term and long-term memory:
menu sizes.
categorization.
how many clicks it takes to do X.
These all help a user remember how to achieve task X and where it is located in the software. (I guess that's all long-term...)
I hope this inspires you a bit. Go to http://scholar.google.com/ and find some papers on these subjects; even if you can't find a paper that discusses the evaluation techniques themselves, the papers will at least explain how they evaluate what they are testing.
Hint: if you are studying at a university, the university has usually already paid for full access to the papers. Access Google Scholar from a computer at the university or use a VPN to connect through your university. Direct links to the papers are located on the right of the search results. As a bonus, you can configure Scholar to add a link with the BibTeX information!
The first result I got was a chapter of a book on user interfaces, which is about testing the user interface. Happy hunting!

There is a set of heuristics that, at least to my knowledge, has collectively become an industry-standard method for evaluating interfaces. They were developed by Jakob Nielsen and can be found here:
http://www.useit.com/papers/heuristic/heuristic_list.html
Usually when someone says they are performing a "Heuristic Evaluation" or an "Expert Evaluation", they mean they are basing it on these 10 criteria. It's possible your professor is looking for something along these lines. I had a very similar experience in two courses I took recently, where I had to write papers evaluating several interfaces against Nielsen's heuristics.
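If it helps, here is a purely hypothetical sketch (my own, not part of Nielsen's method) of one way to record findings from such an evaluation, written in Python for illustration: each issue is tagged with the heuristic it violates and a severity rating on Nielsen's 0-4 scale.

# Hypothetical sketch: recording heuristic-evaluation findings as plain data.
# The heuristic names follow Nielsen's list; the field names are my own.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str      # which of the 10 heuristics is violated
    location: str       # screen / dialog where the problem occurs
    description: str    # what the evaluator observed
    severity: int       # 0 (not a problem) .. 4 (usability catastrophe)

findings = [
    Finding("Visibility of system status", "File upload dialog",
            "No progress indicator while a large file uploads", 3),
    Finding("Error prevention", "Delete account page",
            "Destructive action has no confirmation step", 4),
]

# Summarize by severity so the worst problems surface first.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"[{f.severity}] {f.heuristic}: {f.description} ({f.location})")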
A couple other useful links:
http://www.westendweb.com/usability/02heuristics.htm
http://www.morebusiness.com/getting_started/website/d913059671.brc
http://www.stcsig.org/usability/topics/articles/he-checklist.html
Hope this helps, good luck!

Related

References for Z3 - how does it work? [internal theory]

I am interested in reading about the internal theory behind Z3. Specifically, I want to read how the Z3 SMT solver works and how it is able to find counterexamples for an incorrect model. I wish to be able to manually work out a trace for some very simple example.
However, all the Z3 references seem to be about how to code in it, or a very high-level description of the algorithm. I am unable to find a description of the algorithms used. Is this information not made public by Microsoft?
Could anyone point me to any references (papers/books) that give a comprehensive insight into Z3's theory and workings?
My personal opinion is that the best reference to start with is Kroening and Strichman's Decision Procedures book. (Make sure to get the 2nd edition, as it has good updates!) It covers almost all topics of interest in reasonable depth and has many references at the back for you to follow up on. The book also has a companion website, http://www.decision-procedures.org, with extra readings, slides, and project ideas.
Another book of interest in this field is Bradley and Manna's The Calculus of Computation. While this book isn't specific to SAT/SMT, it covers many of the similar topics and how these ideas play out in the realm of program verification. Also see http://theory.stanford.edu/~arbrad/pivc/index.html for the associated software/tools.
Of course, neither of these books is specific to Z3, so you won't find anything detailed about how Z3 itself is constructed in them. For programming Z3 and some of the theory behind it, the "tutorial" paper by Bjørner, de Moura, Nachmanson, and Wintersteiger is a great read.
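If you want to see the counterexample behaviour you asked about before diving into the theory, here is a minimal sketch using the Z3 Python bindings (my own toy example, not taken from the tutorial paper): assert the negation of an incorrect claim, and the model Z3 returns is the counterexample.

# Minimal sketch with the z3 Python API: find a counterexample to the
# (incorrect) claim "for all integers x, if x > 0 then x > 1".
from z3 import Int, Solver, Not, Implies, sat

x = Int('x')
claim = Implies(x > 0, x > 1)

s = Solver()
s.add(Not(claim))          # ask for a witness that violates the claim

if s.check() == sat:
    print("Counterexample:", s.model())   # e.g. x = 1
else:
    print("Claim holds for all integers x")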
Once you go through these, I suggest reading individual papers by the developers, depending on where your interests are:
Bjørner: https://www.microsoft.com/en-us/research/people/nbjorner/publications/
de Moura: https://www.microsoft.com/en-us/research/people/leonardo/publications/
Wintersteiger: https://www.microsoft.com/en-us/research/people/cwinter/publications/
Nachmanson: https://www.microsoft.com/en-us/research/people/levnach/publications/
And there is of course a plethora of resources on the internet: many papers, presentations, slide decks, etc. Feel free to ask specific questions directly in this forum, or, for questions that are truly specific to Z3's internals, you can use their discussions forum.
Note: Regarding the differences between the editions of Kroening and Strichman's book, here's what the authors have to say:
The first edition of this book was adopted as a textbook in courses worldwide. It was published in 2008 and the field now called SMT was then in its infancy, without the standard terminology and canonic algorithms it has now; this second edition reflects these changes. It brings forward the DPLL(T) framework. It also expands the SAT chapter with modern SAT heuristics, and includes a new section about incremental satisfiability, and the related Constraints Satisfaction Problem (CSP). The chapter about quantifiers was expanded with a new section about general quantification using E-matching and a section about Effectively Propositional Reasoning (EPR). The book also includes a new chapter on the application of SMT in industrial software engineering and in computational biology, coauthored by Nikolaj Bjørner and Leonardo de Moura, and Hillel Kugler, respectively.

Spell checker that uses a language model

I'm looking for a spell checker that can use a language model.
I know there are a lot of good spell checkers, such as Hunspell; however, as far as I can see, it doesn't take context into account, so it is only a token-based spell checker.
For example:
I lick eating banana
Here, at the token level there are no misspellings at all; all the words are correct, but the sentence has no meaning. A "smart" spell checker, however, would recognize that although "lick" is a correctly written word, the author probably meant "like", which would give the sentence meaning.
I have a bunch of correctly written sentences in a specific domain, and I want to train a "smart" spell checker on them to recognize misspellings and learn a language model, so that it would recognize that even though "lick" is written correctly, the author meant "like".
I don't see that Hunspell has such a feature. Can you suggest any other spell checker that can do this?
See "The Design of a Proofreading Software Service" by Raphael Mudge. He describes both the data sources (Wikipedia, blogs etc) and the algorithm (basically comparing probabilities) of his approach. The source of this system, After the Deadline, is available, but it's not actively maintained anymore.
One way to do this is via a character-based language model (rather than a word-based n-gram model). See my answer to Figuring out where to add punctuation in bad user generated content?. The problem you're describing is different, but you can apply a similar solution. And, as I noted there, the LingPipe tutorial is a pretty straightforward way of developing a proof-of-concept implementation.
One important difference: to capture more context, you may want to train a larger n-gram model than the one I recommended for punctuation restoration. Maybe 15-30 characters? You'll have to experiment a little there.
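To make the general idea concrete, here is a minimal sketch that uses a plain word-bigram model rather than the character-based model recommended above; the toy corpus, the candidate list, and the add-one smoothing are all placeholders you would replace with your own domain data and a proper candidate generator (e.g. edit distance against a dictionary).

# Minimal sketch: rank candidate corrections with a word-bigram language model.
# Corpus, candidates and add-one smoothing are placeholders for illustration.
from collections import Counter

corpus = [
    "i like eating banana",
    "i like eating apples",
    "she likes eating banana",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def bigram_prob(w1, w2):
    # Add-one smoothing so unseen bigrams get a small, non-zero probability.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(unigrams))

def sentence_score(words):
    score = 1.0
    for w1, w2 in zip(words, words[1:]):
        score *= bigram_prob(w1, w2)
    return score

sentence = "i lick eating banana".split()
candidates = ["lick", "like"]        # e.g. from an edit-distance dictionary

best = max(candidates,
           key=lambda c: sentence_score([c if w == "lick" else w for w in sentence]))
print(best)   # "like" wins because "i like" and "like eating" occur in the corpus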

How can I select a FAQ entry from a user's natural-language inquiry?

I am working on an app where the user submits a series of questions. These questions are freeform text, but are based on a specific product, so I have a general understanding of the context. I have a FAQ listing, and I need to try to match the user's question to a question in the FAQ.
My language is Delphi. My general approach would be to throw out small "garbage words" (a, an, the, is, of, by, etc.), run a stemming program over the remaining words to get the root words, and then try to match as many of the remaining words as possible.
Is there a better approach? I have thought about some type of natural language processing, but I am afraid that I would be looking at years of development, rather than a week or two.
You don't need to invent a new way of doing this. It's all been done before. What you need is called a FAQ finder, introduced by Hammond et al. in 1995 (FAQ finder: a case-based approach to knowledge navigation, 11th Conference on Artificial Intelligence for Applications).
AI Magazine included a paper by some of the same authors evaluating their implementation: Burke et al., Question Answering from Frequently Asked Question Files: Experiences with the FAQ FINDER System, 1997. It describes the two stages of how it works:
First, they use Smart, an information-retrieval system, to generate an initial set of candidate questions based on the user's input. It looks like it works similarly to what you described, stemming all the words and omitting anything on the stop list of short words.
Next, the candidates are scored against the user's query according to statistical similarity, semantic similarity, and coverage. (Read the paper for details.) Scoring semantic similarity relies on WordNet, which groups English words into sets of distinct concepts. The FAQ finder reviewed here was designed to cover all Usenet FAQs; since your covered domain is smaller, it might be feasible for you to apply more domain knowledge than the basics that WordNet provides.
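Since your implementation would be in Delphi, the following is only a Python sketch to make the first stage concrete; the stop list and the naive suffix-stripping "stemmer" are placeholders of my own, not what Smart actually does.

# Minimal sketch of the first stage: candidate retrieval by token overlap.
import re

STOP_WORDS = {"a", "an", "the", "is", "of", "by", "to", "how", "do", "i"}

def stem(word):
    # Placeholder suffix-stripping; a real system would use a proper stemmer.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def tokens(text):
    words = re.findall(r"[a-z]+", text.lower())
    return {stem(w) for w in words if w not in STOP_WORDS}

faq = [
    "How do I reset my password?",
    "How do I export my data to CSV?",
    "Why is the application running slowly?",
]

def best_match(user_question):
    query = tokens(user_question)
    # Score each FAQ question by the fraction of query terms it covers.
    scored = [(len(query & tokens(q)) / max(len(query), 1), q) for q in faq]
    return max(scored)

print(best_match("I forgot my password, how can I reset it?"))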
Not sure if this solution is precisely what you're looking for, but if you're looking to parse natural language, you could use the Link-Grammar Parser.
Thankfully, I've translated this for use with Delphi (complete with a demo), which you can download (free and 100% open source) from this page on my blog.
In addition to your stemming approach, I suggest that you are going to need to look into one or more of the following:
Recognize important pairs or phrases (two or more words). If your domain is a technical field, certain pairs should automatically be treated as a single unit rather than as individual words, because the pair means something special (in programming, "linked list", "serial port", etc. carry more meaning as a pair than the individual words do).
A large list of synonyms ("turn == rotate", "open == access", etc.); a small sketch of both ideas follows below.
I would also be tempted to tear apart open-source "search engine" software, in whatever language it was written, and see what general techniques it uses.
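Here is a tiny Python sketch of the phrase and synonym ideas above; the phrase list and synonym table are placeholders you would build for your own domain.

# Minimal sketch: collapse known phrases into single tokens and map
# synonyms onto one canonical word before matching.
PHRASES = {("serial", "port"): "serial_port", ("linked", "list"): "linked_list"}
SYNONYMS = {"rotate": "turn", "access": "open"}

def normalize(words):
    out, i = [], 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if pair in PHRASES:            # treat known pairs as a single token
            out.append(PHRASES[pair])
            i += 2
        else:                          # map synonyms onto a canonical word
            out.append(SYNONYMS.get(words[i], words[i]))
            i += 1
    return out

print(normalize("how do i access the serial port".split()))
# ['how', 'do', 'i', 'open', 'the', 'serial_port']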

Heuristic Approaches to Finding Main Content

Wondering if anybody could point me in the direction of academic papers or related implementations of heuristic approaches to finding the real "meat" content of a particular web page.
Obviously this is not a trivial task, since the problem description is so vague, but I think we all have a general understanding of what is meant by the primary content of a page.
For example, it may include the story text for a news article, but might not include any navigational elements, legal disclaimers, related story teasers, comments, etc. Article titles, dates, author names, and other metadata fall in the grey category.
I imagine that the application value of such an approach is large, and I would expect Google to be using it in some way in its search algorithm, so it would appear to me that this subject has been treated by academics in the past.
Any references?
One way to look at this would be as an information extraction problem.
As such, one high-level algorithm would be to collect multiple examples of the same page type and deduce parsing (or extraction) rules for the parts of the page that differ (these are likely to be the main content). The intuition is that common boilerplate (header, footer, etc.) and ads will appear across multiple examples of those web pages, so by training on a few of them you can quickly start to reliably identify this boilerplate and subsequently ignore it. It's not foolproof, but this is also the basis of web-scraping technologies, both commercial and academic, like RoadRunner (a rough sketch of this frequency intuition is given at the end of this answer):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.21.8672&rep=rep1&type=pdf
The citation is:
Valter Crescenzi, Giansalvatore Mecca, Paolo Merialdo: RoadRunner: Towards Automatic Data Extraction from Large Web Sites. VLDB 2001: 109-118
There's also a well-cited survey of extraction technologies:
Alberto H. F. Laender, Berthier A. Ribeiro-Neto, Altigran S. da Silva, Juliana S. Teixeira: A Brief Survey of Web Data Extraction Tools. ACM SIGMOD Record, v.31 n.2, June 2002 [doi>10.1145/565117.565137]
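To make the cross-page frequency intuition concrete, here is a minimal Python sketch (my own, and emphatically not RoadRunner's algorithm); it assumes the pages have already been split into text blocks, which is itself a non-trivial step.

# Minimal sketch of the cross-page frequency idea: text blocks that appear
# on most pages of a site are treated as boilerplate and dropped.
from collections import Counter

# Placeholder input: each page is already split into text blocks
# (e.g. one block per paragraph or per DOM text node).
pages = [
    ["ACME News - Home", "Story A: local team wins", "Copyright ACME 2024"],
    ["ACME News - Home", "Story B: new bridge opens", "Copyright ACME 2024"],
    ["ACME News - Home", "Story C: election results", "Copyright ACME 2024"],
]

block_freq = Counter(block for page in pages for block in set(page))

def main_content(page, max_pages=1):
    # Keep blocks that appear on at most `max_pages` pages of the sample.
    return [b for b in page if block_freq[b] <= max_pages]

print(main_content(pages[0]))   # ['Story A: local team wins']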

Intelligent text parsing and translation

What would be an intelligent way to store text so that it can be intelligently parsed and translated later on?
For example: The employee is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself.
The above could be the generic text shown to the user prior to evaluation. If the user is male (say, Shaun) or female (say, Mary), the text should be rendered as follows.
Mary is outstanding as she can identify her own strengths and weaknesses and is comfortable with herself.
Shaun is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself.
How do we store the evaluation criteria in the first place, with appropriate placeholders or tokens? (In the above case, "employee" should be replaced by the employee's name, and based on the person's gender the words he/she and himself/herself need to be chosen.)
Is there a mechanism to automatically translate the text given the above information?
The basic idea of doing something like this is called Mail Merge.
This page seems to discuss how to implement something like this in Ruby.
[Edit]
A Google search gave me this: http://freemarker.org/
I don't know much about this library, but it looks like what you need.
This is a very broad question in the field of natural language processing. There are numerous ways to go about it; the questions you asked seem too broad.
If I understand part of your question correctly, it could be done this way:
#variable{name} is outstanding as #gender{he/she} can identify #gender{his/hers} own strengths and weaknesses and is comfortable with #gender{himself/herself}.
Or:
#name is outstanding as #he can identify #his own strengths and weaknesses and is comfortable with #himself.
... if gender is the major problem.
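Here is a minimal Python sketch of filling a token-based template like the second form above; the token names and pronoun choices come straight from this example, and a real system would also need to handle number agreement, capitalization at sentence start, and so on.

# Minimal sketch: fill a token-based template with a name and gendered pronouns.
PRONOUNS = {
    "male":   {"#he": "he",  "#his": "his", "#himself": "himself"},
    "female": {"#he": "she", "#his": "her", "#himself": "herself"},
}

TEMPLATE = ("#name is outstanding as #he can identify #his own strengths "
            "and weaknesses and is comfortable with #himself.")

def render(template, name, gender):
    text = template.replace("#name", name)
    for token, word in PRONOUNS[gender].items():
        text = text.replace(token, word)
    return text

print(render(TEMPLATE, "Mary", "female"))
print(render(TEMPLATE, "Shaun", "male"))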
I have had some experience working with a tool called Grammatica, when building a custom Excel-like formula parsing and evaluation engine for user input. It may not offer the level of sophistication you're looking for, but it's a start. It basically uses many of the same concepts that popular code-compiler parsers employ. It's definitely worth checking out.
I agree with Kornel; this question is too broad. What you seem to be talking about is semantics, for which RDF and OWL can be a good starting point. Read about modeling semantics using markup and you can work your way up from there.