How can I determine if a word is part of an English word or is a portmanteau (a word created by combining parts of valid English words)? - parsing

I am trying to create a validator that takes in words and tries to determine if the word is one of the following:
It is a valid English word
It is a part of an English word
It is an abbreviation
It is a portmanteau -- a word created by concatenating parts of valid English words
Are there Java or Python libraries/frameworks that can perform this task?
Samples of words: meds, ppg, reauthorization, appmetadata, reconsent, rawlog
I've tried Python NLTK (cursory investigation so far) and a Python library called enchant (this fails to identify many valid words/parts of words and portmanteaus).
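For reference, here is a rough sketch of the kind of check I have in mind, combining PyEnchant with NLTK's words corpus (the function names and thresholds are just illustrative, not an existing API):

import enchant
from nltk.corpus import words   # requires a one-time nltk.download('words')

d = enchant.Dict("en_US")
wordlist = set(w.lower() for w in words.words())

def is_word(s):
    return d.check(s)

def is_word_part(s, min_len=3):
    # Treat s as "part of an English word" if some dictionary word contains it
    return len(s) >= min_len and any(s in w for w in wordlist)

def is_portmanteau(s, min_part=3):
    # Try every split point; both halves must be words or parts of words
    for i in range(min_part, len(s) - min_part + 1):
        left, right = s[:i], s[i:]
        if (is_word(left) or is_word_part(left)) and \
           (is_word(right) or is_word_part(right)):
            return True
    return False

for w in ["meds", "reauthorization", "appmetadata", "reconsent", "rawlog"]:
    print(w, is_word(w), is_word_part(w), is_portmanteau(w))

Abbreviations like "ppg" would still need a separate list or heuristic, since neither check above catches them.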

Related

How to prepare data for weka in word sense disambiguation

I want to use Weka for word sense disambiguation. I prepared some files containing a Persian sentence, a tab, a Persian word, a tab, and then an English word. They are plain .txt files created in Notepad++. Now how should I use these files with Weka? How should I change them?
The sample file:
https://www.dropbox.com/s/o7wtvrvkiir80la/F.txt?dl=0
I found it. The files should have the same number of columns. So I put the sentences in quotation marks, then a comma, then the English word in quotation marks. Above these, we should write the proper relation and attribute declarations.
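For anyone doing the same conversion, here is a minimal sketch in Python that turns the tab-separated file into an ARFF file (the relation and attribute names are placeholders; adjust them to your own data):

import csv

with open("F.txt", encoding="utf-8") as src, \
     open("wsd.arff", "w", encoding="utf-8") as dst:
    dst.write("@relation wsd\n")
    dst.write("@attribute sentence string\n")
    dst.write("@attribute persian_word string\n")
    dst.write("@attribute english_word string\n")
    dst.write("@data\n")
    for row in csv.reader(src, delimiter="\t"):
        sentence, persian_word, english_word = row[0], row[1], row[2]
        # Quote each value; replace embedded double quotes so the ARFF stays valid
        vals = [v.replace('"', "'") for v in (sentence, persian_word, english_word)]
        dst.write('"%s","%s","%s"\n' % tuple(vals))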

What is parsing? (And how does it differ from search and grep?)

What exactly is parsing? I mean, generally. How is parsing different from searching? On the command line, if I use the grep tool/command, is that parsing?
For example, if I have just one string:
"Hello world! How are you doing today?"
and I tried to search (using grep or any other tool) whether the word "you" is within that string; is that parsing?
What if I do a web search; for example in Google? Is that parsing?
Or is parsing the name of the process that is a part of the process known as "Search"?
The verb "parse" is essentially related to the word "part", as in "part of speech". (See, for example, the on-line etymology dictionary.)
To "parse" a sentence has traditionally meant to break the sentence down into its component parts and identify their relationship with each other. For example, given "I asked a question.", we can parse it into a subject ("I"), a transitive verb in past tense ("asked"), and an object phrase consisting of an article ("a") and a noun ("question"). The parse indicates that the subject performed some action on the object; this is not the same statement as *"A question asked I", and not just because the latter is ungrammatical.
With the advent of computer languages and computational theory, the term "parsing" has been generalized to include analysis of strings which are not human languages. Some people would even use it to simply mean "to divide a string into its component parts", such as "parsing" a line in a CSV file into fields.
It's quite a stretch to apply that to merely searching for a string inside another string, although there may be contexts in which that is an acceptable use of the word. Personally, I would only use it for the action of completely deconstructing a structured string.
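To make the distinction concrete, here is a small Python illustration (NLTK's tokenizer and tagger data must be downloaded first; part-of-speech tagging is only the first step of a full parse, but it shows the "break into labelled parts" idea):

# Searching only answers "does 'you' occur in this string?"
s = "Hello world! How are you doing today?"
print("you" in s)   # True

# Parsing breaks the sentence into labelled component parts
# (requires nltk.download('punkt') and nltk.download('averaged_perceptron_tagger'))
import nltk
print(nltk.pos_tag(nltk.word_tokenize("I asked a question.")))
# [('I', 'PRP'), ('asked', 'VBD'), ('a', 'DT'), ('question', 'NN'), ('.', '.')]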

How to read a text file in ancient encoding?

There is a public project called Moby containing several word lists. Some files contain symbols from European alphabets and were created in pre-Unicode times. The readme, dated 1993, reads:
"Foreign words commonly used in English usually include their diacritical marks, for example, the acute accent e is denoted by ASCII 142."
Wikipedia says that the last ASCII symbol has number 127.
For example, this file: http://www.gutenberg.org/files/3203/files/mobypos.txt contains symbols that I couldn't read in any of various Latin encodings. (There are plenty of such symbols at the very end of the section of words beginning with B, just before the letter C.)
Could someone advise please what encoding should be used for reading this file or how can it be converted to some readable modern encoding?
A little research suggests that the encoding for this page is Mac OS Roman, which has é at position 142. Viewing the page you linked and changing the encoding (in Chrome, View → Encoding → Western (Macintosh)) seems to display all the words correctly (the encoding is otherwise incorrectly reported as ISO-8859-1).
How you deal with this depends on the language / tools you are using. Here’s an example of how you could convert into UTF-8 with Ruby:
require 'open-uri'
# Fetch the raw bytes, tag them as Mac OS Roman, then transcode to UTF-8
s = open('http://www.gutenberg.org/files/3203/files/mobypos.txt').read
s.force_encoding('macroman')
s.encode!('utf-8')
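If you are working in Python instead, the same conversion looks roughly like this (Python calls the codec mac_roman):

import urllib.request

# Read the raw bytes, decode them as Mac OS Roman, and write out UTF-8
raw = urllib.request.urlopen('http://www.gutenberg.org/files/3203/files/mobypos.txt').read()
text = raw.decode('mac_roman')
with open('mobypos-utf8.txt', 'w', encoding='utf-8') as f:
    f.write(text)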
You are right in that ASCII only goes up to position 127 (it’s a 7-bit encoding), but there are a large number of 8 bit encodings that are supersets of ASCII and people sometimes refer to those as “Extended ASCII”. It appears that whoever wrote the readme you refer to didn’t know about the variety of encodings and thought the one he happened to be using at the time was universal.
There isn’t a general solution to problems like this, as there is no guaranteed way to determine the encoding of some text from the text itself. In this case I just used Wikipedia to look through a few until I found one that matched. Joel Spolsky’s article The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) is a good place to start reading about character sets and encodings if you want to learn more.

Which Alphabet type should I use with FASTA files in Biopython?

If I'm using the FASTA files from the link below, what Alphabet type should I use in Biopython? Would it be IUPAC.unambiguous_dna?
link to FASTA files: http://hgdownload.cse.ucsc.edu/goldenPath/hg19/chromosomes/?C=S;O=A
Did you read 3.1 Sequences and Alphabets? It explains the different alphabets available, and what cases they cover.
There are a lot of sequences at the link you provided (too many for us to pore over). My recommendation would be to just go with IUPAC.unambiguous_dna. If the four basic nucleotides aren't enough, the parser will complain, and you should pick a more extensive alphabet.
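For example, here is a minimal sketch of reading one of those files with an explicit alphabet (this assumes a Biopython release that still ships Bio.Alphabet; the alphabet argument and Bio.Alphabet were removed in Biopython 1.78, and the file name stands in for one of the decompressed .fa.gz files):

from Bio import SeqIO
from Bio.Alphabet import IUPAC

# Parse a decompressed chromosome FASTA file with an explicit DNA alphabet
for record in SeqIO.parse("chr21.fa", "fasta", IUPAC.unambiguous_dna):
    print(record.id, len(record.seq), record.seq.alphabet)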

What characters are allowed in twitter hashtags?

In developing an iOS app containing a twitter client, I must allow for user generated hashtags (which may be created elsewhere within the app, not just in the tweet body).
I would like to ensure any such hashtags are valid for twitter, so I would like to error check the entered value for invalid characters. Bear in mind that users may be from non-English speaking countries.
I am aware of the usual limitations, such as not beginning a hashtag with a number, and no special punctuation characters, but I was wondering if there is a known list of all additional characters that are technically allowed within hashtags (i.e. international characters).
Karl, as you've rightly pointed out, any word in any language can be a valid twitter hashtag (as long as it meets a number of basic criteria). As such what you are asking for is a list of valid international word characters. I'm sure someone has compiled such a list somewhere, but using it would not be the most efficient approach to reaching what appears to be your initial goal: ensuring that a given hashtag is valid for twitter.
I believe what you are looking for is a regular expression that can match all word characters within the Unicode range. Such an expression would not be dependent on your locale and would match all characters in modern typography that can appear as part of a word.
You didn't specify what language you are writing your app in, so I can't help you with a language specific implementation. However, the basic approach would be as follows:
Check if any of the bracket expressions or character classes already support Unicode character ranges in your language. If yes, then use them.
Check if there is a regex modifier that can enable Unicode character range support for your language.
Most modern languages implement regular expressions in a fairly similar way, and a lot of them borrow heavily from Perl, so I hope the following two examples will put you on the right track:
Perl:
Use POSIX bracket expressions (e.g. [[:alpha:]], [[:alnum:]], [[:digit:]], etc.) as they give you greater control over the characters you want to match, compared to character classes (e.g. \w).
Use /u modifier to enable Unicode support when pattern matching. Under this modifier, the ASCII platform effectively becomes a Unicode platform; and hence, for example, \w will match any of the more than 100,000 word characters in Unicode.
See Perl documentation for more info:
http://perldoc.perl.org/perlre.html#Character-set-modifiers
http://perldoc.perl.org/perlrecharclass.html#POSIX-Character-Classes
Ruby:
Use POSIX bracket expressions as they encompass non-ASCII characters. For instance, /\d/ matches only the ASCII decimal digits (0-9); whereas /[[:digit:]]/ matches any character in the Unicode Nd category.
See Ruby documentation for more info:
http://www.ruby-doc.org/core-2.1.1/Regexp.html#class-Regexp-label-Character+Classes
Examples:
Given a list of hashtags, the following regex will match all hashtags that start with a word character (inc. international word characters) followed by at least one other word character, a number or an underscore:
m/^#[[:alpha:]][[:alnum:]_]+$/u # Perl
/^#[[:alpha:]][[:alnum:]_]+$/ # Ruby
Twitter allows letters, numbers, and underscores.
I checked this by generating tweets via their API. For example, tweeting
Hash tag test #foo[bar
resulted in "#foo" being marked as a hash tag, and "[bar" being unformatted text.
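For reference, a rough sketch of how such a test can be scripted with the Python tweepy library against the v1.1 API (the credentials are placeholders; the entities field shows which part Twitter treated as a hashtag):

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Post the test tweet and inspect which part was parsed as a hashtag
status = api.update_status("Hash tag test #foo[bar")
print(status.entities["hashtags"])   # e.g. [{'text': 'foo', ...}]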
Well, for starters you can't use a # in the hashtag (##hash).
The guidelines below are being quoted from Twitter's help center:
People use the hashtag symbol # before a relevant keyword or phrase (no spaces) in their Tweet to categorize those Tweets and help them show more easily in Twitter Search.
Clicking on a hashtagged word in any message shows you all other Tweets marked with that keyword.
Hashtags can occur anywhere in the Tweet – at the beginning, middle, or end.
Hashtagged words that become very popular are often Trending Topics.
Example: In the Tweet below, @eddie included the hashtag #FF. Users created this as shorthand for "Follow Friday," a weekly tradition where users recommend people that others should follow on Twitter. You'll see this on Fridays.
Using hashtags correctly:
If you Tweet with a hashtag on a public account, anyone who does a search for that hashtag may find your Tweet
Don't #spam #with #hashtags. Don't over-tag a single Tweet. (Best practices recommend using no more than 2 hashtags per Tweet.)
Use hashtags only on Tweets relevant to the topic.
Just want to add that in addition to alphanumeric characters and underscore, you can apparently use the "ー" character (U+30FC, the katakana prolonged sound mark, which looks like an em dash) in a Twitter hashtag like #COVIDー19.
Only letters and numbers are allowed to be part of a hashtag. If any other character follows the leading # and the letters or numbers, the hashtag is cut off at that point.
I would recommend that your user interface indicate this to the user by changing the text color of the input field if the user enters anything other than a letter or number.
I had to implement the same thing in Go (golang).
It seems [[:alpha:]] only matches English-alphabet characters there, so I could not use this syntax for characters from other languages.
Instead, I could use \p{L} for this purpose.
My test with \p{L} is here.
* Arabic, Hebrew, Hindi, etc. are not confirmed yet.
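The same idea in Python, for illustration: the standard re module does not support \p{L}, but the third-party regex package does.

import regex

# Requires at least one Unicode letter; otherwise letters, digits, and underscore
hashtag = regex.compile(r'^#[\p{L}\p{N}_]*\p{L}[\p{L}\p{N}_]*$')

for tag in ["#COVIDー19", "#تجربة", "#123", "#foo_bar"]:
    print(tag, bool(hashtag.match(tag)))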

Resources