semantic search that finds sentences that could be expressed visually

Let's say I want to build a search engine that goes through a text and finds sentences or paragraphs that could be turned into an image, video or 3D animation, i.e. sentences that contain information that could be expressed visually.
Ideally, this search engine would get better over time.
Is there already a search engine that can do that?
If not, which type of things would I need to look at/consider? My point here being that I don't really know much about machine learning and search engines. I am trying to get a feeling of which areas of machine learning, information retrieval and so forth I would need to look at.
I don't expect long answers here, just things like "well, take a look at this type of machine learning" or "this part of information retrieval theory may be relevant".
Just to get a broad overview of what I would need to look at.

Natural Language Understanding
I don't know about any existing search engine doing that. But this can be done with the help of Natural Language Understanding and Semantic Parsing.
Have a look at Stanford's Natural Language Understanding course (discussion of the text-to-scene problem can be found here) for further details.

How semantic search works is this: it analyzes data and maps it into a high-dimensional vector space. Once that is done, with the help of big data and a knowledge graph, the algorithm tries to find data points that connect to the article, the authority of the author, the relevance of the website, and a couple of other factors. Once these factors are taken into account, it correlates the data to create a layer of interconnected information. This information is then used to decide how relevant the data is.
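As a rough illustration of the vector-space idea described above, here is a minimal sketch that ranks candidate sentences by their embedding similarity to a few hand-written examples of "visually expressible" sentences, using the sentence-transformers library. The model name, seed sentences and candidates are all assumptions for illustration:

```python
# Sketch: score sentences by how close they are, in embedding space, to a few
# hand-written examples of "visually expressible" sentences. The model name
# and all example sentences are assumptions, not from the original post.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

seeds = [
    "The red car sped down the mountain road at sunset.",
    "A dancer leaps across the stage under a single spotlight.",
]
candidates = [
    "The castle stood on a cliff above the stormy sea.",
    "The committee postponed the decision until next quarter.",
]

seed_emb = model.encode(seeds, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity of each candidate to its nearest seed sentence.
scores = util.cos_sim(cand_emb, seed_emb).max(dim=1).values
for sentence, score in zip(candidates, scores):
    print(f"{score:.2f}  {sentence}")
```

Labeling which results are actually useful and feeding them back as new seeds would be one simple way to make such a system "get better over time".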

Related

Using NLP or other algorithms to match two strings

The goal of my project is to correctly match medications. I have a large catalog at my disposal for this purpose. However, the medications do not appear there in exactly the same spelling: sometimes additional information was added, and sometimes parts of the prescription were abbreviated.
I was already able to implement a possible algorithm using the Levenshtein distance (token_set_ratio).
Because of the sometimes long additional information, this algorithm assigns the wrong medications, so I wanted to ask whether there are better algorithms for comparing strings. For example, does it make sense to use machine learning algorithms or NLP techniques? This is a relatively new area for me. I would appreciate any ideas or inspiration.
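For reference, the token_set_ratio the questioner mentions comes from the fuzzywuzzy/thefuzz family of libraries. A minimal sketch of that baseline, with a made-up catalog and query:

```python
# Minimal baseline with thefuzz; catalog entries and query are made up.
from thefuzz import fuzz, process

catalog = [
    "Ibuprofen 400 mg film-coated tablets",
    "Paracetamol 500 mg tablets",
    "Amoxicillin 250 mg capsules",
]

query = "Ibuprofen 400mg tabl."

# token_set_ratio ignores word order and duplicated tokens, which helps a bit
# when prescriptions carry extra or abbreviated information.
best_match, score = process.extractOne(query, catalog, scorer=fuzz.token_set_ratio)
print(best_match, score)
```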
This sounds like a classic deduplication / record-linkage task. For example, have a look at dedupe. This tool lets you annotate training examples and learns when two items refer to the same thing. It can be used with as few as 10 training samples and has an active-learning approach implemented.
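A rough sketch of the dedupe workflow, following the pattern in the library's documentation; note that the field-definition syntax has changed between dedupe versions, and the records here are made up:

```python
# Rough sketch of the dedupe workflow; method names follow the dedupe 2.x
# docs, the field-definition syntax varies by version, and records are made up.
import dedupe

data = {
    1: {"name": "ibuprofen 400 mg film-coated tablets"},
    2: {"name": "Ibuprofen 400mg tabl."},
    3: {"name": "paracetamol 500 mg"},
}

deduper = dedupe.Dedupe([{"field": "name", "type": "String"}])
deduper.prepare_training(data)
dedupe.console_label(deduper)  # interactive labelling; active learning picks the pairs
deduper.train()

# Group records that (probably) refer to the same medication.
for record_ids, scores in deduper.partition(data, threshold=0.5):
    print(record_ids, scores)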

How to start an AI that extracts relevant phrases from datasheets?

In my previous question I asked something similar, but I don't know how to begin this task.
I have to extract phrases from datasheets, and since I have a ton of training data (PDFs and the corresponding extracted information), I want to tackle this via deep learning, as every datasheet can be different but the structure is always similar.
But how do I start? What should I google? What is the term for my plans?
I hope someone can suggest a concrete approach.

How to generate a Concept tree from set of keywords

I am looking for an approach in NLP where I can generate a concept tree from a set of keywords.
Here is the scenario: I have extracted a set of keywords from a research paper. Now I want to arrange these keywords in the form of a tree where the most general keyword comes on top. The next level of the tree will have keywords that are important for understanding the upper-level concept and are more specific than the upper-level keywords, and the tree grows the same way.
Something like this:
I know there are many resources that can help me solve this problem, like the Wikipedia dataset and WordNet, but I do not know how to proceed with them.
My preferred programming language is Python. Do you know any Python library or package that can generate this?
I am also very interested to see a machine learning approach to solving this problem.
I would really appreciate any kind of help.
One way of looking at the problem is, given a set of documents, identify topics from them and also the dependencies between the topics.
So, for example, if you have some research papers as input (large set of documents), the output would be what topics the papers are on and how those topics are related in a hierarchy/tree. One research area that tries to tackle this is Hierarchical topic modeling and you can read more about this here and here.
But if you are just looking at creating a tree out of a bunch of keywords (that are somehow obtained) and no other information is available, then it needs knowledge of real-world relationships and can perhaps be a rule-based system where we define Math --> Algebra and so on.
There is no way for a system to understand that algebra comes under math other than by looking at a large number of documents and inferring that relationship (see the first suggestion), or by us manually mapping that relationship (perhaps in a rule-based system). That is how even humans learn those relationships.
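As a concrete starting point for the WordNet route the question mentions, here is a minimal sketch that arranges keywords into a tree along their WordNet hypernym paths via NLTK; the keyword list is made up, and picking the first synset is a simplification:

```python
# Minimal sketch: arrange keywords into a tree along their WordNet hypernym
# paths. The keyword list is made up, and taking synsets[0] (the most common
# sense) skips proper word-sense disambiguation.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

keywords = ["algebra", "geometry", "calculus"]

tree = {}
for kw in keywords:
    synsets = wn.synsets(kw, pos=wn.NOUN)
    if not synsets:
        continue
    # hypernym_paths() walks from the root ("entity") down to the keyword,
    # so inserting each path into a nested dict yields the desired tree.
    node = tree
    for synset in synsets[0].hypernym_paths()[0]:
        node = node.setdefault(synset.lemmas()[0].name(), {})

def show(node, depth=0):
    for name, children in sorted(node.items()):
        print("  " * depth + name)
        show(children, depth + 1)

show(tree)
```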

Ordering movie tickets with ChatBot

My question is related to the project I've just started working on, and it's a ChatBot.
The bot I want to build has a pretty simple task: it has to automate the process of purchasing movie tickets. This is a pretty closed domain, and the bot has all the required access to the cinema database. Of course, it is okay for the bot to answer "I don't know" if the user's message is not related to the process of ordering movie tickets.
I already created a simple demo just to show to a few people and see whether they are interested in such a product. The demo uses a simple DFA approach and some easy text matching with stemming. I hacked it together in a day, and it turned out that users were impressed that they were able to successfully order the tickets they wanted. (The demo uses a connection to the cinema database to provide users all the information needed to order the tickets they desire.)
My current goal is to create the next version, a more advanced one, especially in terms of Natural Language Understanding. For example, the demo version asks users to provide only one piece of information per message and doesn't recognize when they provide more relevant information at once (movie title and time, for example). I read that a useful technique here is called "frame and slot semantics", and it seems promising, but I haven't found any details about how to use this approach.
Moreover, I don’t know which approach is the best for improving Natural Language Understanding. For the most part, I consider:
Using “standard” NLP techniques to understand user messages better, for example synonym databases, spelling correction, part-of-speech tags, training some statistics-based classifiers to capture similarities and other relations between words (or between whole sentences, if that's possible), etc.
Use AIML to model the conversation flow. I’m not sure if it’s a good idea to use AIML in such a closed domain. I’ve never used it, so that’s the reason I’m asking.
Use a more “modern” approach and use neural networks to train a classifier for user-message classification. It might, however, require a lot of labeled data.
Any other method I don’t know about?
Which approach is the most suitable for my goal?
Do you know where I can find more resources about how “frame and slot semantics” works in detail? I'm referring to this PDF from Stanford when talking about the frame-and-slot approach.
The question is pretty broad, but here are some thoughts and practical advice, based on experience with NLP and text-based machine learning in similar problem domains.
I'm assuming that although this is a "more advanced" version of your chatbot, the scope of work which can feasibly go into it is quite limited. In my opinion this is a very important factor as different methods widely differ in the amount and type of manual effort needed to make them work, and state-of-the-art techniques might be largely out of reach here.
Generally, the two main approaches to consider would be rule-based and statistical. The first is traditionally more focused on pattern matching and, in the setting you describe (limited effort can be invested), would involve manually crafting rules and/or patterns. An example of this approach would be using a closed (but large) set of templates to match against user input (e.g. using regular expressions). This approach often has a "glass ceiling" in terms of performance, but can lead to pretty good results relatively quickly.
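For illustration, a minimal sketch of the template idea, using regular expressions with named groups as the "slots" of a frame; the patterns and example message are made up:

```python
# Template matching with regular expressions; named groups act as the
# "slots" of a frame. Patterns and the example message are made up.
import re

TEMPLATES = [
    re.compile(
        r"(?:buy|order|book)\s+(?P<count>\d+)?\s*tickets?"
        r"\s+for\s+(?P<title>.+?)"
        r"(?:\s+at\s+(?P<time>\d{1,2}(?::\d{2})?\s*(?:am|pm)?))?$",
        re.IGNORECASE,
    ),
]

def parse(message):
    for pattern in TEMPLATES:
        match = pattern.search(message)
        if match:
            return match.groupdict()
    return None  # no template fired: fall back to "I don't know"

print(parse("I want to order 2 tickets for Blade Runner at 8 pm"))
# {'count': '2', 'title': 'Blade Runner', 'time': '8 pm'}
```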
The statistical approach is more about giving some ML algorithm a bunch of data and letting it extract regularities from it, focusing the manual effort on collecting and labeling a good training set. In my opinion, in order to get "good enough" results the amount of data you'll need might be prohibitively large, unless you can come up with a way to easily collect large amounts of at least partially labeled data.
Practically, I would suggest considering a hybrid approach here: use some ML-based statistical general-purpose tools to extract information from user input, then apply manually built rules/templates. For instance, you could use Google's Parsey McParseface to do syntactic parsing, then apply some rule engine on the results, e.g. match the verb against a list of possible actions like "buy", use the extracted grammatical relationships to find candidates for movie names, etc. This should get you to pretty good results quickly, as the strength of the syntactic parser would allow "understanding" even elaborate and potentially confusing sentences.
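As a sketch of that hybrid idea, here is the same pattern using spaCy's dependency parser as a stand-in for Parsey McParseface; the verb list and slot handling are assumptions:

```python
# Hybrid sketch: statistical parsing first, hand-written rules on top. spaCy
# stands in for Parsey McParseface here; verb list and slot logic are made up.
import spacy

nlp = spacy.load("en_core_web_sm")
BUY_VERBS = {"buy", "order", "book", "purchase"}

def extract_action(message):
    doc = nlp(message)
    for token in doc:
        # Rule: the lemma of a verb must match a known action ...
        if token.pos_ == "VERB" and token.lemma_ in BUY_VERBS:
            # ... and its direct-object subtree becomes the slot candidate.
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            return token.lemma_, [" ".join(t.text for t in o.subtree) for o in objects]
    return None, []

print(extract_action("I would like to book two tickets for the late show"))
# ('book', ['two tickets for the late show'])
```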
I would also suggest postponing some of the elements you think about doing, like spell-correction, and even stemming and synonyms DB - since the problem is relatively closed, you'll probably have better ROI from investing in a rule/template-framework and manual rule creation. This advice also applies to explicit modeling of conversation flow.

Can Machine Learning be used for Natural Language Understanding

This is based on my earlier question. Can I use machine learning algorithms to help me with understanding sentences?
(I will use a closely related example as I used in my previous question). For example, I want my algorithm/code to start a program based on what the user says. For instance, if he says "turn on the program," then the algorithm should do that. However, if the user says "turn on the car," the computer shouldn't turn on the program, obviously. (BUT how would a computer know?) I am sure there are hundreds of different ways to say "start" or "turn on the program." What I am asking is how can a computer differentiate between "program" and "car"? How can the algorithm know that in the first sentence, it has to start the program, but not in the second one? Is there a way for the algorithm to know what the sentence is talking about?
Could I use an unsupervised learning algorithm for this, that is, one that can learn what the sentence is about?
Thanks
Natural language understanding is a very hard problem, and many researchers are working on it. To begin with, basic natural language understanding systems start off rule-based: you manually write down rules that will be matched against an input, and if a match is found, you fire a corresponding action. So you restrict the format of your input and come up with rules while keeping them as general as possible. For example, instead of matching the exact statement "turn on the program", you can have a rule such as: unless the word "program" occurs in the command, don't start the program; or, ignore every sentence unless it contains "program". You can then combine your rules to develop more complex "understanding". How to write/represent rules is another tough problem; you can start off with regular expressions.
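A tiny sketch of that rule as a regular expression; the verb list is an assumption, and print() stands in for a hypothetical start_program() action:

```python
# The rule above as a regular expression: ignore every command unless it
# contains "program". The verb list is an assumption; print() stands in for
# a hypothetical start_program() action.
import re

START_RULE = re.compile(r"\b(?:turn on|start|begin|launch)\b.*\bprogram\b", re.IGNORECASE)

def handle(command):
    if START_RULE.search(command):
        print("starting the program")
    else:
        print("ignoring:", command)

handle("Please turn on the program")  # starting the program
handle("turn on the car")             # ignoring: turn on the car
```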
Regarding the various ways of expressing the action of "start"ing something, you are going to look at synonyms for "start", e.g. "begin". These can be obtained from a thesaurus, and a commonly used resource for such tasks is WordNet.
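For example, a minimal sketch of looking up synonyms of the verb "start" in WordNet via NLTK:

```python
# Collect synonyms of the verb "start" from WordNet via NLTK.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

synonyms = {lemma.name()
            for synset in wn.synsets("start", pos=wn.VERB)
            for lemma in synset.lemmas()}
print(synonyms)  # includes e.g. 'begin' and 'commence'
```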
You need to figure out exactly what information you want to extract from the sentence. Most natural language techniques are task-specific; there isn't a general one-size-fits-all natural language understanding tool.
No machine learning algorithm can learn without enough input information. If there is enough information about a car versus a program, then a learning algorithm may be able to differentiate them. Machine learning groups things that have similar properties together and separates things with different properties into different groups.
