Detecting addresses using NER and spaCy - named-entity-recognition

How do I detect addresses using NER and spaCy? NER detects names, organizations, dates, book titles, etc. While there are a lot of pre-trained models for names, there are none for addresses.
Is it possible to detect addresses in the same way: street names, cities, zip codes, etc.? Also, after detecting an address, is it possible to say whether it is real or fake?
I also find it difficult to get the datasets. Kindly provide me with your suggestions or the best approach.
I am also trying to get a similar visualization for the addresses.
Thank you
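One way to prototype this while no pre-trained address model exists is spaCy's rule-based EntityRuler. A minimal sketch (spaCy v3); the patterns below are illustrative assumptions, not a shipped address model:

```python
import spacy

# a minimal sketch: rule-based matching as a stand-in for a trained
# ADDRESS model; the patterns are illustrative, not exhaustive
nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    # e.g. "221 Baker Street": a number, capitalized words, a street suffix
    {"label": "ADDRESS",
     "pattern": [{"IS_DIGIT": True},
                 {"IS_TITLE": True, "OP": "+"},
                 {"LOWER": {"IN": ["street", "st", "avenue", "ave", "road", "rd"]}}]},
    # bare 5-digit ZIP codes
    {"label": "ZIP", "pattern": [{"SHAPE": "ddddd"}]},
])

doc = nlp("Ship it to 221 Baker Street, Springfield 62704.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

The resulting doc can then be passed to displacy.render(doc, style="ent") to get the familiar spaCy entity highlighting for these custom labels, which covers the visualization part of the question.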

Related

NER for predefined entities

I'm developing an application to categorize requirements in a requirement specification into categories like database, front end, back end, etc. The requirement specification is a single document in which I want to see the underlying categories. Can I use NER to get the categories? Sentences are divided into categories if they contain certain words that match a particular category.
Example
data should be stored in a secured database.
If we consider the above sentence to be a requirement, it should be categorized into the database category based on the words it contains (database, data).
To the best of my knowledge, pre-built NER tools would not help, but I recommend using spaCy. It is an NER tool with state-of-the-art accuracy that supports retraining and customizing your own model. Hope this helps!
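As a rough starting point, the keyword rule described in the question can be prototyped with spaCy's PhraseMatcher before committing to retraining a model. A minimal sketch; the keyword lists per category are illustrative assumptions:

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
# one keyword list per category (invented for illustration)
matcher.add("DATABASE", [nlp.make_doc(t) for t in ["database", "data", "sql"]])
matcher.add("FRONT_END", [nlp.make_doc(t) for t in ["ui", "screen", "form"]])

doc = nlp("data should be stored in a secured database.")
categories = {nlp.vocab.strings[match_id] for match_id, start, end in matcher(doc)}
print(categories)  # {'DATABASE'}
```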

Machine Learning - Derive information from a text

I'm a newbie in the field of Machine Learning and Supervised learning.
My task is the following: from the name of a movie file on a disk, I'd like to retrieve some metadata about the file. I have no control over how the file is named, but it has a title and one or more pieces of additional info, like a release year, a resolution, actor names, and so on.
Currently I have developed a rule-based heuristic system, where I split the name into tokens and try to understand what each word could represent, either alone or together with adjacent ones. For detecting people's names, for example, I use a dataset of English names and score a word as a potential person's name if I find it in the dataset. If adjacent to it is a word that I scored as a potential surname, I score the two words together as an actor. And so on. It works with decent accuracy, but changing heuristic scores manually to "teach" the system is tedious and unpredictable.
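For concreteness, that token-scoring heuristic might look something like the sketch below; the name sets and token rules are placeholders standing in for the real datasets:

```python
# a minimal sketch of the token-scoring heuristic described above;
# FIRST_NAMES / SURNAMES are tiny placeholders for the real name datasets
FIRST_NAMES = {"tom", "emma", "brad"}
SURNAMES = {"hanks", "stone", "pitt"}

def score_tokens(filename: str):
    tokens = filename.lower().replace(".", " ").replace("_", " ").split()
    found = []
    for i, tok in enumerate(tokens):
        if tok.isdigit() and len(tok) == 4 and tok.startswith(("19", "20")):
            found.append(("year", tok))
        elif tok in {"720p", "1080p", "2160p"}:
            found.append(("resolution", tok))
        elif tok in FIRST_NAMES and i + 1 < len(tokens) and tokens[i + 1] in SURNAMES:
            found.append(("actor", f"{tok} {tokens[i + 1]}"))
    return found

print(score_tokens("Some.Movie.1998.1080p.Tom.Hanks.mkv"))
# [('year', '1998'), ('resolution', '1080p'), ('actor', 'tom hanks')]
```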
Such a rule-based system is hard to maintain or develop further, so, out of curiosity, I was exploring the field of machine learning. What I would like to know is:
Is there some kind of public literature about these kinds of problems?
Is ML a good way to approach the problem, given the limited data set available?
How would I proceed to debug or try to understand the results of such a system? I already have problems with the "simplistic" heuristic engine I have developed.
Thanks, any advice would be appreciated.
You need to look into NLP (natural language processing). NLP deals with text processing, among other things; for example, entity recognition and tagging.
Here is an example of using Spacy library: https://spacy.io/usage/linguistic-features.
Some time ago I did a similar thing; you can see it here: https://github.com/Erlemar/Erlemar.github.io/blob/master/Notebooks/Fate_Zero_explore.ipynb
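For a sense of what the linked spaCy features give you out of the box, here is a short sketch on a cleaned-up filename; it assumes the en_core_web_sm model is installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline
text = "Inception.2010.Leonardo.DiCaprio.1080p".replace(".", " ")
doc = nlp(text)
for ent in doc.ents:
    print(ent.text, ent.label_)
# typically tags "Leonardo DiCaprio" as PERSON and "2010" as DATE
```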

Advice on classifying users in machine learning scenario

I'm looking for some advice on the problem of classifying users into various groups based on their answers during a sign-up process.
The idea is that these classifications will group people with similar travel habits, i.e. adventurous, relaxing, foodie, etc. This shouldn't be a classification known to the user, so it isn't as simple as just asking what sort of holidays they like (the point is to remove user bias/people not really knowing where to place themselves).
The way I see it working is asking questions such as which apps they use and which accounts they interact with on social media (gopro, restaurants, etc.), or giving some scenarios and asking which sounds best; these would be chosen from a set provided to them, so we have control over the variables. The main problem I have is how to get numerical values associated with each of these.
I've looked into various machine learning algorithms and have realised this is most likely a clustering problem, but I can't seem to figure out how to use this style of question to assign a value to each dimension that will actually give a useful categorisation.
Another question I have is whether there are resources where I could find information on the sort of questions to ask users to gain information that would allow classification like this.
The sort of process I envision is one similar to https://www.thread.com/signup/introduction if anyone is familiar with it.
Any advice welcomed.
The problem you have at hand is that you want to calculate a similarity measure based on categorical variables, namely the choice of apps, accounts, etc. Unless you measure the similarity of these apps with respect to an attribute such as how foodie the app is, it is a hard problem to specify. Also, you would need to know all the possible states a categorical variable can assume to create a similarity measure like this.
If the final objective is to recommend something that similar people (based on app selection or social media account selection) have liked or enjoyed, you should look into collaborative filtering.
If your feature space is well defined and static (known apps, known accounts, limited set with few missing values) then look into content based recommendation systems, something as simple as Market Basket Analysis can give you a reasonable working model.
Otherwise, if you really want to model the system with a bunch of features that can assume arbitrary states, this could be done with multivariate probabilistic models. If the structure (relationships and influences between features) is well defined, you could benefit from probabilistic graphical models, such as Bayesian networks.
You really do need to define your problem better before you start solving it though.
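If you do go the clustering route, one common workaround for the "numerical values" problem is to one-hot encode the selections and cluster the resulting binary vectors. A toy sketch with invented data:

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import MultiLabelBinarizer

# each user's answers as a set of selected options (invented for illustration)
users = [
    {"gopro", "hiking_app"},
    {"restaurant_app", "wine_club"},
    {"gopro", "surf_forum"},
]

X = MultiLabelBinarizer().fit_transform(users)  # one-hot user-by-option matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. the two "adventurous" profiles land in the same cluster
```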
You can use prime numbers. If each choice on the list of all possible choices is assigned a different prime, and the user's selection is saved as the product of the chosen primes, then you can always tell whether the user made a particular choice by checking whether the selection modulo that choice's prime is 0. The beauty of prime numbers, voila!
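For what it's worth, this works because a product of distinct primes is divisible only by the primes it contains. A tiny sketch with an invented choice-to-prime mapping:

```python
# each possible choice gets a distinct prime (illustrative mapping)
primes = {"gopro": 2, "restaurants": 3, "hiking": 5}

selection = primes["gopro"] * primes["hiking"]  # user picked these two -> 10
print(selection % primes["gopro"] == 0)        # True: choice is present
print(selection % primes["restaurants"] == 0)  # False: choice is absent
```

In practice a plain set or bitmask does the same membership test without the risk of huge products, but the arithmetic above is the idea this answer describes.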

Where to find domain-specific corpus for a text mining task?

I am working on a text mining project which focuses on computer technology documents, so there is a lot of jargon. Tasks like part-of-speech tagging require some training data to build a POS tagger, and I think this training data should come from the same domain, with words like ".NET, COM, JAVA" correctly tagged.
So where can I find such a corpus? Or is there any workaround? Or can we tune an existing tagger to handle a domain-specific task?
Gathering training data (and defining features) is going to be the hardest step of this problem. I'm sure there are datasets out there, but an alternative option would be to identify a few journals or news sites that focus on your area of interest, crawl them, and pull down the text, perhaps validating each article you pull down by searching for keywords. I've done that before to develop a corpus focused on elections.
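A rough sketch of that crawl-and-filter loop; the URL, keyword list, and parsing are all placeholders:

```python
import requests
from bs4 import BeautifulSoup

# illustrative on-topic keywords, drawn from the question's jargon list
KEYWORDS = {".net", "java", "com", "api"}

def fetch_article(url):
    """Download a page and keep its text only if it looks on-topic."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    if any(kw in text.lower() for kw in KEYWORDS):
        return text
    return None

article = fetch_article("https://example.com/some-tech-article")
```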
Unfortunately, where you can find such a corpus is itself domain-specific.
Catch-22: there is no general source for specialized data, just as there is no universal software to solve domain-specific problems.

Semantic analysis of tweets

I know how to communicate with Twitter and how to retrieve tweets, but I am looking to work further with these tweets.
I have two categories, food and sports, and I want to categorize tweets into them. Can anyone please suggest how to categorize tweets using a computer algorithm?
regards
Gaurav
I've been doing some work recently with Latent Dirichlet Allocation. The general idea is that documents contain words that are generated from topics. What you could try is loading a corpus of documents known to be about the topics you are interested in, updating it with the tweets of interest, and then selecting tweets that have strong probabilities for the same topics as your known documents.
I use R for LDA (package:topicmodels and package:lda), but I think there are some prebuilt Python tools for this too. I would probably steer away from trying to write your own unless you have a solid grounding in Bayesian statistics.
Here's the documentation for the topicmodels package: http://cran.r-project.org/web/packages/topicmodels/vignettes/topicmodels.pdf
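If you prefer Python, the same workflow can be sketched with scikit-learn's LDA implementation. The toy corpus below is invented; real use needs far more documents:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# a few documents known to be about the target topics (invented placeholders)
docs = [
    "the chef plated the pasta with fresh basil and parmesan",
    "the striker scored twice in the second half of the match",
]
tweets = ["that ramen place downtown is unreal", "what a goal in stoppage time"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs + tweets)  # fit vocabulary on known docs + tweets

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(vec.transform(tweets)))  # per-tweet topic probabilities
```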
I doubt that a set of algorithms could possibly categorize tweets in an open domain; in other words, I don't think a set of rules can categorize open-domain tweets. You need to parse tweets into a semantic representation customized for the categorization.
