I've been given an assignment to create a program that might involve a quiz generator. I decided to come to you guys since you seem to be the most helpful.
Is an automatic quiz generator possible?
Is it truly automatic, or do you have to enter your own questions and correct answers?
Can it work for things other than boolean answers (true and false)?
Can it observe text syntax so that it can create questions based on a paragraph of information?
Can it observe text syntax so that it can accept answers that are close to the right answer but off by a few words?
Any help would be greatly appreciated, as this question has me stumped right now.
You guys always come through though, so I await your answer :D!
P.S. - I've seen other questions like this, but they covered only stuff like randomization. I believe that would be possible, but I'm wondering if "true" generators are possible.
Is an automatic quiz generator possible?
It depends on what you call automatic, and what you consider a successful level of functionality. Something is definitely possible.
Is it truly automatic, or do you have to enter your own questions and correct answers? Can it observe text syntax so that it can create questions based on a paragraph of information?
Yes, that's possible, but again there's a spectrum from only working for the simplest text and being easily confused (which is relatively easy to program - even a regular expression parser could do that), through to handling arbitrary real-world textual sources and getting, say, 80%+ of the facts out of the text and posing sensible questions for which it has correctly identified the answer (which might take a team of 100 language and programming experts decades). Language analysis is difficult. If you want proof - try converting a paragraph of English text to another language using Babelfish or a similar online translator, then convert it back... :-).
Can it work for things other than boolean answers (true and false)?
Of course, but again the more complex you make it, the less likely you'll get anything that works...
Can it observe text syntax so that it can accept answers that are close to the right answer but off by a few words?
It could, but the range of ways someone might phrase an answer is so varied that having to follow a simple template with a few words' tolerance wouldn't work well in general use.
General thoughts
Why don't you search for existing educational quiz programs to get an idea of what other people have achieved...?
I would make an automatic math quiz generator, as a simple example.
Questions could be generated easily: just come up with two random numbers that fit certain characteristics, randomly choose to add, subtract, or multiply them, and then compute the correct answer mathematically.
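A minimal sketch of that idea in Python (the operand range and operator set are arbitrary choices):

    # Pick two random operands and a random operator, then compute the
    # correct answer so the quiz can check the user's reply against it.
    import operator
    import random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def make_question():
        a, b = random.randint(1, 20), random.randint(1, 20)
        symbol = random.choice(list(OPS))
        return f"What is {a} {symbol} {b}?", OPS[symbol](a, b)

    question, answer = make_question()
    if input(question + " ").strip() == str(answer):
        print("Correct!")
    else:
        print(f"No, the answer was {answer}.")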
But for non-math subjects, a quiz generator would be more difficult; it would need some kind of database of sample questions to draw from.
Is an automatic quiz generator possible?
Yes, an automatic quiz generator is possible.
Is it truly automatic, or do you have to enter your own questions and correct answers?
You could make it automated, but that would require access to a large database and very complex data mining algorithms. If it's an assignment, you would probably be better off having it take in questions and their corresponding answers. A mathematics quiz generator would be much easier to implement, as it would only require random operators and operands placed in the correct sequence.
Can it work for things other than boolean answers (true and false)?
This depends entirely on your implementation, but theoretically yes.
Can it observe text syntax so that it can create questions based on a paragraph of information?
If you have an awesome data mining script and resources to form grammatically-correct sentences with raw information, then yes.
Can it observe text syntax so that it can accept answers that are close to the right answer but off by a few words?
Producing an algorithm that reliably treats different sentences with the same meaning as equivalent would be very difficult. You would need to account for spelling and grammatical errors as well as synonyms and many other factors. Furthermore, it would be very language (not programming language) dependent.
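As a narrow illustration of what is feasible, here is a minimal sketch using Python's standard-library difflib; it tolerates small typos but not synonyms or rephrasing, and the 0.8 threshold is an arbitrary choice:

    from difflib import SequenceMatcher

    def is_close_enough(given, expected, threshold=0.8):
        # Normalize case and whitespace, then compare by similarity ratio.
        given, expected = given.lower().strip(), expected.lower().strip()
        return SequenceMatcher(None, given, expected).ratio() >= threshold

    print(is_close_enough("photosynthesys", "photosynthesis"))  # True: a typo
    print(is_close_enough("respiration", "photosynthesis"))     # False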
I hope this answered some of your questions.
I am looking for an engine that does AI text summarization based on the concept or meaning of the sentence. I looked at open-source projects (ginger, paraphrase, ace), but they don't do the job.
The way they work is that they try to find synonyms for each word and replace the current words with them; this way they generate a lot of alternatives to a sentence, but the meaning is wrong most of the time.
I have worked with Stanford's engine to produce something like highlights for an article and, based on that, extract the most important sentences, but that is still not abstraction, it's extraction.
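To be concrete about what I mean by extraction, here's a toy Python sketch: it scores sentences by naive word frequency (standing in for a real relevance model) and keeps the top ones verbatim, which is exactly why it can never produce an abstractive summary:

    import re
    from collections import Counter

    def extractive_summary(text, n=2):
        # Split into sentences and score each by the frequency of its words.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))

        def score(sentence):
            return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

        top = set(sorted(sentences, key=score, reverse=True)[:n])
        # Return the chosen sentences in their original order, unchanged.
        return " ".join(s for s in sentences if s in top)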
It would also make sense that the engine I'm looking for learns over time and results are improved after each summary.
Please help out here, your help is greatly appreciated!
I don't know of any open-source project that fits your requirements for abstraction and meaning as I understand them.
But I have an idea of how to build such an engine and how to train it.
In short, I think we all keep a Bayesian-network-like structure in our minds, which helps us not only to classify data but also to form an abstract meaning of a text or message.
Since it is impossible to extract that whole structure of abstract categories from our minds, I think it's better to build a mechanism that allows us to reconstruct it step by step.
Abstract
The key idea of the proposed solution is to extract the meaning of a conversation into representations that an automated computer system can operate on easily. This would create a good illusion of a real conversation with another person.
The proposed model supports two levels of abstraction:
The first, less complex level consists of recognizing a single word or a group of words as relating to a category, an instance, or an instance attribute.
An instance is an instantiation of a general category as a real or abstract subject, object, action, attribute, or other kind of thing. For example, a concrete relation between two or more subjects: a concrete employer-employee relationship, a concrete city and the country it's situated in, and so on.
This basic meaning-recognition approach allows us to create a bot that can sustain a conversation. The ability is based on recognizing the basic elements of meaning: categories, instances, and instance attributes.
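A hypothetical sketch of this first level in Python (the classification table and its fields are invented for illustration):

    # Manually entered classification: categories, instances, attributes.
    CLASSIFICATION = {
        "city":    {"kind": "category"},
        "berlin":  {"kind": "instance", "category": "city",
                    "attributes": {"country": "Germany"}},
        "germany": {"kind": "instance", "category": "country"},
    }

    def recognize(sentence):
        # Map each known word to its meaning element; ignore the rest.
        return {w: CLASSIFICATION[w] for w in sentence.lower().split()
                if w in CLASSIFICATION}

    print(recognize("Berlin is a city"))
    # -> {'berlin': {'kind': 'instance', ...}, 'city': {'kind': 'category'}}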
The second, more complicated level is based on recognizing scenarios, storing them in the conversation context together with instances/categories, and using them to complete some of the recognized scenarios.
Related scenarios are used to complete the next message of the conversation; some scenarios can also generate the next message, or help recognize a meaning element, by applying conditions and meaning elements drawn from the context.
Something like this:
The basic classification should be entered manually, with later corrections and additions from teachers.
Words and scenarios from a sentence in the conversation can be filled in from context.
Conversation scenarios/categories can be filled with previously recognized instances, or with instances described later in the conversation (self-learning).
(Figures omitted: Pic 1 - word detection/categorization basic flow; Pic 2 - general system big-picture view; Pic 3 - meaning element classification; Pic 4 - an example of a basic category structure.)
I'm working on a project to generate questions from sentences. Right now, I'm at a point where I can generate questions like:
"Angela Merkel is the chancelor of Germany." -> "Angela Merkel is who?"
Now, of course, I want the questions to look like "Who is...?" instead. Is there any easy way to do this that I haven't thought of yet?
My current idea would be to train an English (not-quite-question) -> English (question) translator, maybe using existing machine translation engines like Moses. Is this overkill? How much data would I need? Are there corpora that address this or a similar problem? Is using a general translation engine even appropriate for this task?
Check out Michael Heilman's dissertation Automatic Factual Question Generation from Text for background on question generation and to see what his approach to this problem looks like. You can find more by searching for research on "question generation". He mentions a corpus from Microsoft: the Microsoft Research Question-Answering Corpus.
I don't think that an approach based solely on (current) statistical machine translation approaches is going to work that well, since you're usually going to need a deeper syntactic analysis of the source sentence to do a good job of generating an appropriate question. For simple questions like your example, it's pretty easy to design syntactic tree transformations to generate the question, but it gets much trickier as soon as the sentences get a little more complicated.
Off the top of my head, if you restrict yourself to relatively simple questions, you could do a parse and then flip the elements around to get the question. How do you decide the question word, though? Who, what, where, why... for this you'll need a classifier that looks at the elements of a sentence. Angela Merkel should be easy to classify as a person/name, so she gets a 'Who'; Berlin should be in a dictionary of geos, so it gets a 'Where'.
I'm not sure about specific software, but I'd probably do it with NLTK, using a dependency parse and then whatever classification scheme you feel like.
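As a rough sketch of that pipeline in Python (assuming NLTK with its punkt, averaged_perceptron_tagger, maxent_ne_chunker, and words data packages downloaded; the entity-label-to-question-word table is my own guess):

    import nltk

    QUESTION_WORD = {"PERSON": "Who", "GPE": "Where", "ORGANIZATION": "What"}

    def sentence_to_question(sentence):
        tokens = nltk.word_tokenize(sentence)
        tree = nltk.ne_chunk(nltk.pos_tag(tokens))
        first = tree[0]
        # Only handles the simplest pattern: <named entity> is <rest>.
        if isinstance(first, nltk.Tree) and first.label() in QUESTION_WORD:
            n = len(first.leaves())
            if tokens[n].lower() == "is":
                rest = " ".join(tokens[n + 1:]).rstrip(" .")
                return f"{QUESTION_WORD[first.label()]} is {rest}?"
        return None  # sentence is too complicated for this rule

    print(sentence_to_question("Angela Merkel is the chancellor of Germany."))
    # -> "Who is the chancellor of Germany?"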
Ultimately your success depends on how big your input and output space is. I'd go for the absolute simplest possible problem first.
What would be an intelligent way to store text, so that it can be intelligently parsed and translated later on?
For example: "The employee is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself."
The above could be the generic text shown to the user prior to evaluation. If the user is male (say Shaun) or female (say Mary), the text should be translated as follows.
Mary is outstanding as she can identify her own strengths and weaknesses and is comfortable with herself.
Shaun is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself.
How do we store the evaluation criteria in the first place, with appropriate placeholders or tokens? (In the above case, "employee" should be translated to the employee's name, and based on gender the words he/she and himself/herself need to be translated.)
Is there a mechanism to automatically translate the text with the above information?
The basic idea of doing something like this is called Mail Merge.
This page seems to discuss how to implement something like this in Ruby.
[Edit]
A google search gave me this - http://freemarker.org/
I don't know much about this library, but it looks like what you need.
This is a very broad question in the field of Natural Language Processing. There are numerous ways to go around it, the questions you asked seem too broad.
If I understand part of your question correctly, it could be done this way:
#variable{name} is outstanding as #gender{he/she} can identify #gender{his/her} own strengths and weaknesses and is comfortable with #gender{himself/herself}.
Or:
#name is outstanding as #he can identify #his own strengths and weaknesses and is comfortable with #himself.
... if gender is the major problem.
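A minimal Python sketch of that second variant, using the "#token" syntax above with a hand-written pronoun table (the table contents are my own illustration):

    # Per-gender pronoun tables; no token here is a substring of another,
    # so naive sequential replacement is safe.
    PRONOUNS = {
        "male":   {"#he": "he",  "#his": "his", "#himself": "himself"},
        "female": {"#he": "she", "#his": "her", "#himself": "herself"},
    }

    def render(template, name, gender):
        text = template.replace("#name", name)
        for token, word in PRONOUNS[gender].items():
            text = text.replace(token, word)
        return text

    template = ("#name is outstanding as #he can identify #his own strengths "
                "and weaknesses and is comfortable with #himself.")
    print(render(template, "Mary", "female"))
    print(render(template, "Shaun", "male"))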
I have had some experience working with a tool called Grammatica when building a custom Excel-like formula parsing and evaluation engine for user input. It may not be at the level of sophistication you're looking for, but it's a start. It basically uses many of the same concepts that popular code compiler parsers employ. It's definitely worth checking out.
I agree with Kornel, this question is too broad. What you seem to be talking about is semantics, for which RDF and OWL can be a good starting point. Read about modeling semantics using markup and you can work your way up from there.
G'day,
Having a think about this question here about overdesigning for possible future changes got me thinking.
What reasons against can you provide to people who insist on blowing out designs because "they might want to use it somewhere else at some stage in the future"?
Similarly, what do you do when people take the requirements and then come back with a bloated design with lots of extra "bells and whistles" that you didn't ask for?
I can understand extending a design when you know it makes sense for requirements or possible uses that exist either right now or in the near future. And I'm not advocating just blithely accepting a list of requirements and implementing that explicitly without providing any feedback on what you think may be missing.
I am talking about what to do when people insist on adding, or having, extraneous functionality because "we might use it somewhere else at some stage in the future".
Plenty of good reasons on Wikipedia.
http://en.wikipedia.org/wiki/You_Ain%27t_Gonna_Need_It
- The time spent is taken from adding, testing or improving necessary functionality.
- The new features must be debugged, documented, and supported.
- Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
- Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
- It leads to code bloat; the software becomes larger and more complicated.
- Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
- Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
See also: http://en.wikipedia.org/wiki/KISS_principle
Especially on an embedded device, size is money (larger Flash part, say, which costs more and has a longer programming time at manufacture; or more peripheral components).
Even on a Windows application, the larger your application and the more features it has, the more it costs to develop; wait until you know what's needed and what's not and you'll waste far less money developing stuff that turns out not to be what was needed at all.
Also, any additional functionality brings with it the potential for bugs.
It's good to think properly about the requirements before developing something, but over-designing is often just borrowing trouble.
Them: "We might use it somewhere else at some stage in the future."
Me: "Yes, we might. And we might not. And we can't know, now, in what ways we might want to. And if we do want it at some stage in the future - that's the time when we'll know the ways in which we want it. That's when we can write it with confidence. On the other hand, if we write it today, but never need it, we've wasted resources to develop something we didn't need. And we've added to our code bloat, so it's harder to find the pieces of our code base that are in use, because we've got all this (presently) unnecessary code crowding out the useful stuff."
In our team we just say "YAGNI". It reminds people why. There is MORE than enough stuff on the web about YAGNI if you think you need to collate it to provide a report.
Actually, having someone say "YAGNI" to you on our team cuts a little because it's like saying "C'mon; you know better than that", which always hurts a little. :)
It's a balance; as a rule I over-design only where it's cheap to do so. For instance, I wouldn't write a function that takes in 2 parameters and adds them; instead, I'd write a function that takes n parameters and adds them. However, I don't write a function that takes n parameters and adds them using assembly.
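For example, the cheap generalization (a Python sketch):

    def add(*numbers):
        # n-ary instead of binary: barely more effort, more general.
        return sum(numbers)

    print(add(2, 3))        # 5
    print(add(1, 2, 3, 4))  # 10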
You say
I can understand extending a design when you know it makes sense for requirements or possible uses that exist either right now or in the near future.
and I guess that sometimes people see this line as blurry: something that "makes sense" to you may be over-design to someone else.
Overdesigning (solving a problem in a way that is more generic than needed) a specific piece of architecture is acceptable only if you can afford it.
If you accept extraneous functionality (which is, generally speaking, different from overdesign) you need, again, to accept the costs that come with it (time ==> money); if you can't afford those extra costs, then you have your answer :)
There's a big difference in providing for future functionality and adding future functionality. A good design should have the "hooks" or whatever to provide for new features or modifications.
There are two ways you can handle this situation. The first applies if they are contractors providing the software to you: you can simply refuse to pay for all of the extra functionality and impose very strict deadlines for your required functionality. If they miss the deadline, you impose financial penalties for every day they are late.
The other way is if they actually work for you. If this is the case then you can get rid of them or downgrade them in their performance reviews.
Are there any methods/systems that you have in place to incentivize your development team members to write "good" code and add comments to their code? I recognize that "good" is a subjective term and it relates to an earlier question about measuring the maintainability of code as one measurement of good code.
This is tough as incentive pay is considered harmful. My best suggestion would be to pick several goals that all have to be met simultaneously, rather than one that can be exploited.
While most people respond that code reviews are a good way to ensure high quality code, and rightfully so, they don't seem to me to be a direct incentive to getting there. However, coming up with a positive incentive for good code is difficult because the concept of good code has large areas that fall in the realm of opinion and almost any system can be gamed.
At all the jobs I have had, the good developers were intrinsically motivated to write good code. Chicken and egg, feedback, catch 22, call it what you will, the best way to get good code is to hire motivated developers. Creating an environment where good developers want to work is probably the best incentive I can think of. I'm not sure which is harder, creating the environment or finding the developers. Neither is easy, but both are worth it in the long term.
I have found that one part of creating an environment where good developers want to work includes ensuring situations where developers talk about code. I don't know a skilled programmer that doesn't appreciate a good critique of his code. This helps the people that like to be the best get better. As a smaller sub-part of this endeavor, and thus an indirect incentive to create good code, I think code reviews work wonderfully. And yes, your code quality should gain some direct benefit as well.
Another technique co-workers and I have used to communicate good coding habits is a group code review. It was less formal and allowed people to show off new techniques, tools, and features. Critiques were made, kudos were given publicly, and most developers didn't seem to mind speaking in front of a small developer group where they knew everyone. If management cannot see the benefit in this, spring for sammiches and call it a brown bag. Devs will like free food too.
We also made an effort to get people to go to code events. Granted, depending on how familiar you all are with the topic, you might not learn too much, but it keeps people thinking about code for a while and gets people talking in an even more relaxed environment. Most devs will also show up if you offer to pick up a round or two of drinks afterwards.
Wait a second, I noticed another theme. Free food! Seriously though, the point is to create an environment where people that already write good code and those that are eager to learn want to work.
Code reviews, done well, can make a huge difference. No one wants to be the guy presenting code that causes everyone's eyes to bleed.
Unfortunately, reviews don't always scale well either up (too many cooks and so on) or down (we're way too busy coding to review code). Thankfully, there are some tips on Stack Overflow.
I think formal code reviews fill this purpose. I'm a little more careful not to commit crappy looking code knowing that at least two other developers on my team are going to review it.
Make criteria public and do not connect incentives with any sort of automation. Publicize examples of what you are looking for. Be nice and encourage people to publicize their own bad examples (and how they corrected them).
Part of the culture of the team is what "good code" is; it's subjective to many people, but a functioning team should have a clear answer that everyone on the team agrees upon. Anyone who doesn't agree will bring the team down.
I don't think money is a good idea. The reason is that it is an extrinsic motivator. People will begin to follow the rules because there is a financial incentive to do so, and this doesn't always work. Studies have shown that as people age, financial incentives become less of a motivator. That being said, the quality of work in this situation will only be equal to the level you set to receive the reward. It's a short-term win, nothing more.
The real way to incentivize people to do the right thing is to convince them their work will become more rewarding: they'll be better at what they do and more efficient. The only real way to incentivize people is to get them to want to do it.
This is advice aimed at you, not your boss.
Always remind yourself of the fact that if you go the extra mile and write the best code you can now, it'll pay off later when you don't have to refactor your stuff for a week.
I think the best incentive for writing good code is by writing good code together. The more people write code in the same areas of the project, the more likely it will be that code conventions, exception handling, commenting, indenting and general thought process will be closer to each other.
Not all code is going to be uniform, but upkeep usually gets easier when people have coded a lot of work together since you can pick up on styles and come up with best practice as a team.
You get rid of the ones that don't write good code.
I'm completely serious.
I agree with Bill The Lizard. But I wanted to add onto what Bill had to say...
Something that can be done (assuming resources are available) is to get some of the other developers together (maybe one who knows something about your work, one who knows your work intimately, and maybe one who knows very little about it) and walk them through your code. You can use a projector, sit them down in a room, and drive through all of your changes. This way you have a mixed crowd that can provide input, ask questions, and above all make you a better developer.
There is no need for the feedback to be only negative; however, negative feedback will happen at times. It is important to take it as constructive, and to try to couch your own feedback in a constructive way when giving it.
The idea here is that, if you have comment blocks for your functions, or a comment block that explains some tricky math operations, or a simple commented line that explains why you are required to change the date format depending on the language selected...then you will not be required to instruct the group line by line what your code is doing. This is a way to annotate changes you have made and it allows for the other developers to keep thinking about the fuzzy logic you had in your previous function because they can read your comments and see what you did else-where.
This is all coming from a real life experience and we continue to use this approach at my job.
Hope this helps, good question!
Hm.
Maybe the development team should do code reviews of each other's code. That could motivate them to write better, commented code.
Code quality may be like pornography: as the famous quote from Justice Potter Stewart goes, "I know it when I see it."
So one way is to ask others about the code quality. Some ways of doing that are:
- Code reviews by their peers (and reviews of others' code by them), with ease of comprehension as one of the criteria in the review checklist (personally, I don't think that necessarily means comments; sometimes code can be perfectly clear without them).
- Requesting that issues caused by code quality be raised at retrospectives (you do hold retrospectives, right?).
- Tracking how often changes to their code work the first time, or whether they take several attempts.
- Asking for peer reviews at the annual (or whatever) review time, and including a question about how easy it is to work with the reviewee's code.
Be very careful with incentivizing: "What gets measured gets done". If you reward lines of code, you get bloated code. If you reward commenting, you get unnecessary comments. If you reward lack of bugs found in the code, your developers will do their own QA work which should be done by lower-paid QA specialists. Instead of incentivizing parts of the process, give bonuses for the whole team's success, or the whole company's.
IMO, a good code review process is the best way to ensure high code quality. Pair programming can work too, depending on the team, as a way of spreading good practices.
The last person who broke the build or shipped code that caused a technical support call has to make the tea until somebody else does it next. The trouble is this person probably won't give the tea the attention it requires to make a real good cuppa.
I usually don't offer my team monetary awards, since they don't do much and we really can't afford them, but I usually sit down with each team member and go over the code with them individually, pointing out what works ("good" code) and what does not ("bad" code). This seems to work very well, since I don't get nearly as much junk code as I did before we started this process.