Find almost-duplicate strings in Objective-C on iOS

I have a list of song tracks that I retrieved from the iTunes API. Some of them are duplicates, but not perfect duplicates. For example, one might say "All 4 u" vs. "All for you", or "Some song" vs. "some song feat. some other artist".
I want to be able to identify the duplicates. Is the best way to compute the Levenshtein distance for all pairs? That seems excessive.
I'm working with the Cocoa Touch framework for iOS, so if anyone knows of any relevant libraries, that would help a lot.

Why do you consider computing the Levenshtein distance excessive? What algorithm would you use if you were sitting down to a list with pencil and paper?
That said, Levenshtein is likely necessary, but not sufficient. I would start by normalizing the strings. In some cases, a string might normalize in a couple of ways and you'll need to keep both. Normalization would look like this (a rough sketch follows the list):
Convert to lowercase
Strip any leading numbers followed by punctuation ("1.", "1 - ", etc.)
Tentatively strip anything after "feat." or "with"
This is an example of special knowledge about your problem set. You're going to have to use a lot of special knowledge like this.
"Tentatively" means you should probably keep both the stripped and non-stripped versions of the string
Keep in mind that things including "feat." might be remixes, so you have to be careful about assuming duplicates. This is of course true of almost any attempt at de-duping. There are often multiple versions.
Tentatively expand common abbreviations (u=>you, 4=>for, 2=>two, w/=>with, etc. etc.)
Tentatively strip anything in parentheses
Strip English articles (a, an, the). Maybe even strip all very short words (3 or fewer characters) as a first pass.
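Here is a rough sketch of that kind of normalization (in Python purely for illustration; the abbreviation table and the patterns are assumptions you would tune against your own catalog):
import re

# Hypothetical abbreviation table and patterns -- tune these against your own catalog.
ABBREVIATIONS = {"u": "you", "4": "for", "w/": "with"}
ARTICLES = {"a", "an", "the"}
FEAT_SPLIT = re.compile(r"\s*[\(\[]?\s*(?:\bfeat\.|\bfeaturing\b|\bwith\b)")

def normalize_variants(title):
    """Return the set of tentative normalizations for one track title."""
    t = title.lower()
    # Strip leading track numbers such as "1.", "1 - ", "01) ".
    t = re.sub(r"^\s*\d+\s*[.)\-]\s*", "", t)
    # Expand common abbreviations token by token.
    t = " ".join(ABBREVIATIONS.get(tok, tok) for tok in t.split())

    variants = {t}
    # Tentatively strip anything after "feat."/"with"; keep the unstripped
    # version too, since "feat." may indicate a remix rather than a duplicate.
    variants.add(FEAT_SPLIT.split(t)[0].strip())
    # Tentatively strip anything in parentheses.
    variants.add(re.sub(r"\([^)]*\)", "", t).strip())
    # Strip English articles as a further tentative pass.
    variants.add(" ".join(w for w in t.split() if w not in ARTICLES))
    return {v for v in variants if v}

print(normalize_variants("1. All 4 U (feat. Some Other Artist)"))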
Doing this well is complicated and will require a lot of trial and error. I've done a lot of contact de-duping in the past, and one piece of advice: start conservative. It is very easy to accidentally de-dupe way too much. Build a big list of test data that you've de-duped by hand and test, test, test after every algorithm change. Make sure your UI can present the user with anything you're uncertain about, because there are going to be many, many records that you can't be certain about. (This is true even when you do it by hand. Look at a big list of human-entered titles and tell me with 100% certainty which ones are duplicates without listening to the tracks. A computer isn't going to do better than you at this.)
I'm not aware of any publicly available library for this. It's been solved by many people many times (search for "dedupe song titles" or anything similar). But it's generally commercial software.
One more piece of advice for this, since it's a huge O(n^2) or worse problem. Look for bucketing opportunities. If you can match artists first, then albums, then tracks, you can divide and conquer in much less time.
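A sketch of that divide-and-conquer shape (Python, purely for illustration; the "artist"/"title" fields and the threshold are assumptions, and difflib's SequenceMatcher stands in for a true Levenshtein distance):
from collections import defaultdict
from difflib import SequenceMatcher

def likely_duplicates(tracks, threshold=0.6):
    """tracks: iterable of dicts with (hypothetical) 'artist' and 'title' keys.
    Bucket by artist first so titles are only compared within a bucket."""
    buckets = defaultdict(list)
    for track in tracks:
        buckets[track["artist"].strip().lower()].append(track)

    pairs = []
    for group in buckets.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                a = group[i]["title"].lower()
                b = group[j]["title"].lower()
                # SequenceMatcher's ratio is a cheap stand-in; swap in a real
                # Levenshtein implementation if you prefer edit distance, and
                # feed it titles already normalized as in the sketch above.
                if SequenceMatcher(None, a, b).ratio() >= threshold:
                    pairs.append((group[i], group[j]))
    return pairs

print(likely_duplicates([
    {"artist": "Some Artist", "title": "All for You"},
    {"artist": "Some Artist", "title": "All for You (Remastered)"},
]))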

Related

Word game advice

My background is as a linguist, and so for New Year's I decided to learn a computer language, C#, out of interest and so that I can make small word games for my children and students.
I have started to look at word games and have been reading about using char types and char arrays, which I have been playing with, so I have been able to generate the alphabet.
What I really want to do is have a word appear with random letters missing; letters of the alphabet then appear and the player needs to select the correct letter to complete the word.
I am not after code (as an educator I am not fond of cheating), just advice on where I could start and what I should be reading about so that I can achieve what I described.
Many thanks in advance for any help and advice.
If I understand your question correctly, you're looking for an algorithm (or even pseudocode) rather than code or anything else. If I were to implement a game as you described, I would go about it in the following fashion:
Select a word from a list. This "dictionary" could be as simple as a text file containing different words, or a more complex database of all words in the English language.
Pick a letter from the word and remove it.
Ask the user for the missing letter. Keep asking until they guess correctly or they run out of guesses.
Rinse and repeat. A minimal sketch of this loop appears below.
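Here is that sketch, in Python (one of the starting languages suggested below); the word list and guess limit are placeholders:
import random

# Placeholder "dictionary" -- in practice, read this from a text file.
WORDS = ["linguist", "alphabet", "computer"]
MAX_GUESSES = 3

word = random.choice(WORDS)
missing_index = random.randrange(len(word))
missing_letter = word[missing_index]
puzzle = word[:missing_index] + "_" + word[missing_index + 1:]

print("Complete the word:", puzzle)
for attempt in range(MAX_GUESSES):
    guess = input("Your letter: ").strip().lower()
    if guess == missing_letter:
        print("Correct! The word was", word)
        break
    print("Not quite, try again.")
else:
    print("Out of guesses. The word was", word)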
This is a pretty simple game, which uses pretty basic concepts. I believe XNA would be complete overkill in this situation. Like Mustafa mentioned in the comment on the original post, XNA provides a framework that makes game programming easier because it provides templates, but it also adds a lot of overhead and needless complexity (especially for a novice programmer). Since you're coming from a non-programming background, I would suggest Python or Ruby as a good starting language, and suggest looking into the following topics:
Reading from a file (the "dictionary" mentioned above)
Loops, specifically for-loops and while-loops or the language equivalent (to allow the user to keep guessing until they run out of guesses or guess correctly)
Command-line input/output (IO) -- print to screen and read input from the console.
Arrays and Strings
Once you've built out a working command-line application, then I would suggest looking into things like Graphical User Interfaces (GUIs) and making it look "pretty."

Designing a Non-Specific Language Application, i.e. planning for localization

I'm developing a basic RPG, and one of my goals from the beginning is to make sure that my program is language non-specific. Basically, before I design or start programming any menus, I want to make sure that I can load and display them in any supported language, so I am not hard-coding values.
(It would save me from many migraines down the road.)
For this example, let's use Western Left-to-Right languages. English, Spanish, German, French, Italian.
This is a basic example of what I have.
One XML file contains a mapping and design of a conversation.
<conversation>
    <dialog>line1</dialog>
    <dialog>line2</dialog>
</conversation>
Other XML files contain the definitions.
<mappings language="English">
    <line1>This is line 1 in English!</line1>
    <line2>Other lines are contained in language-separated xml files</line2>
</mappings>
Heh. This would work great, except that I forgot that English doesn't assign genders to its words, whereas other languages do. So, where one sentence might be enough in English, I might need two sentences in other languages, one to cover the masculine form and the other to cover the feminine form.
What would be the most conducive way of solving this problem? Right now, I've considered coming up with different mapping tables, one exclusively for masculine forms and the other covering just feminine forms. Or just reading from different definition tables.
And another kicker would be based within my game data design. I never thought about it, but I might need to store the gender of my game items and characters so I can use the correct sentence. However, other languages might have their own specific quirks that I would need to consider as well (though thankfully, from what I know, Italian and Spanish are relatively similar, and possibly French as well).
So, obviously this is a huge task ahead of me. What other design considerations should I think of? Right now, I'm thinking a static class would be easiest: configure the selected language at startup, throw in inputs, and hopefully get a string back.
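A minimal sketch of that static-lookup idea (in Python for illustration; the per-language XML layout follows the mappings example above, and the ".masculine"/".feminine" id suffixes are a hypothetical convention):
import xml.etree.ElementTree as ET

class Strings:
    """Minimal static-style lookup: load one language's mappings at startup,
    then resolve a line id plus an optional gender to a display string."""
    _table = {}

    @classmethod
    def load(cls, path):
        root = ET.parse(path).getroot()  # <mappings language="...">
        cls._table = {child.tag: (child.text or "") for child in root}

    @classmethod
    def get(cls, line_id, gender=None):
        # Hypothetical convention: "line1.masculine" / "line1.feminine"
        # entries override the plain "line1" entry when a gender is given.
        if gender and f"{line_id}.{gender}" in cls._table:
            return cls._table[f"{line_id}.{gender}"]
        return cls._table.get(line_id, line_id)

# Strings.load("strings_en.xml")
# print(Strings.get("line1", gender="feminine"))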
Any ideas? (Looking to throw ideas around :P)
There are two general ways to approach this: brute force and trying to be clever. Brute force means writing each possible line and including it with your XML files. It's a lot of work, but it will work.
Trying to be clever gets into deep water, fairly fast, particularly if you're trying to cover a whole lot of languages.
You need to keep more information about characters than gender. In Russian, for example, there are different words meaning "you" depending on whether you're being informal or formal (or talking to multiple people), and the verb endings are also different. There are different translations of "please pass the bread" depending on the formality. In other languages, getting the translation right depends on social status.
There are issues, as pawel_dyda pointed out, with singular, plural, and possibly dual case. Other languages also use different word orders: "The arrows are X coppers each, so to buy Y arrows you'll need Z silver" may require you to keep track of the order of the numbers.
Visual C++ and MFC come with internationalization facilities that are actually pretty good. You'd keep the strings in a resource file, and it's possible to substitute numbers and the like in while keeping the order correct for different languages.
Look up "internationalization" (often abbreviated to "i18n") on the web. There's plenty of stuff out there.
As for genders, you may try to encourage translators to use non-gender-specific translations (which is usually possible in business applications but might be impossible here).
You may also encounter the problem somewhere else. Other (non-English) languages have multiple plural forms. For example: "Your team has acquired 2 swords". No matter how many swords you actually receive, be it 5 or 1000, in English you will always end up with the same plural sentence. But this is not the case in many languages.
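To make that concrete, here is a small sketch of per-language plural selection (Python for illustration; the message table is hypothetical and the Polish rule follows the standard CLDR/gettext formulation):
# Hypothetical message tables: each language supplies every plural form it needs.
MESSAGES = {
    "en": {"swords": ["Your team has acquired {n} sword",
                      "Your team has acquired {n} swords"]},
    "pl": {"swords": ["Twoja drużyna zdobyła {n} miecz",
                      "Twoja drużyna zdobyła {n} miecze",
                      "Twoja drużyna zdobyła {n} mieczy"]},
}

def plural_index(lang, n):
    """Return which plural form to use; the rule differs per language."""
    if lang == "pl":  # standard three-form rule for Polish
        if n == 1:
            return 0
        if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
            return 1
        return 2
    return 0 if n == 1 else 1  # English-like default

def tr(lang, key, n):
    forms = MESSAGES[lang][key]
    return forms[plural_index(lang, n)].format(n=n)

print(tr("en", "swords", 5))   # ... 5 swords
print(tr("pl", "swords", 5))   # ... 5 mieczy
print(tr("pl", "swords", 22))  # ... 22 miecze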

Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

<tl;dr>
In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches?
</tl;dr>
<introduction>
I am programming something I have never done before, and there might already be tools out there that do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch, even if there is a tool for this.
So anyways, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be editable by any registered user and also to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of different versions of the text body as objects in Ruby, and then serialize it, preferably in human-readable form (so I'll most likely use YAML for this), in case editing is needed due to corruption by a software bug or a mistake made by an admin doing some version editing.
So at first I just tried to dive head first into this feature, only to find that the problem of generating a diff patch is more difficult than I thought to do efficiently. So I did some research and came across some ideas. Some I have implemented already and some I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and optimizing the function that solves it.
Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line, like in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line so as to calculate edit distance instead of longest common subsequence. Although as far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer. It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (West, Northwest, or North) to determine that line's diff entry, and assumes all other lines to be unchanged.
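For reference, a small sketch of that matrix-plus-backtrace idea (in Python rather than Ruby, purely for illustration; it uses the standard Levenshtein recurrence rather than the exact non-matching-increment variant described above, and skips the prefix/suffix truncation):
def diff(old_lines, new_lines):
    """Levenshtein-style dynamic programming over lines, then a backtrace
    that emits ('-', line), ('+', line) and (' ', line) entries."""
    n, m = len(old_lines), len(new_lines)
    # dist[i][j] = edit distance between old_lines[:i] and new_lines[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if old_lines[i - 1] == new_lines[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # delete
                             dist[i][j - 1] + 1,         # insert
                             dist[i - 1][j - 1] + cost)  # keep / replace

    # Backtrace from the bottom-right corner, noting which neighbour we came from.
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and old_lines[i - 1] == new_lines[j - 1]
                and dist[i][j] == dist[i - 1][j - 1]):
            ops.append((" ", old_lines[i - 1])); i -= 1; j -= 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            ops.append(("-", old_lines[i - 1])); i -= 1
        else:
            ops.append(("+", new_lines[j - 1])); j -= 1
    return list(reversed(ops))

for op, line in diff(["a", "b", "c"], ["a", "x", "c"]):
    print(op, line)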
Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about needing to optimize at least enough that a spammer who somehow knew how I implemented the version control system and knew my worst-case-scenario entry still wouldn't be able to hit the server that hard. After some searching and reading of research papers and articles on the internet, I've come across several ideas that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them with known pros and cons.
</introduction>
<optimizations>
Chop the compared sequences into multiple subsequences by splitting where lines are unchanged, and then truncating each section of unchanged lines at the beginning and end of each section. Then solve the edit distance of each subsequence.
Pro: Changes the time increase, as the changed area gets bigger, from quadratic to something closer to linear.
Con: Figuring out where to split already seems like you have to solve edit distance, except now you don't care how it is changed. It would be fine if this were solvable by a process closer to Hamming distance, but a single insertion would throw this off.
Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness. Then solve the edit distance comparing the hash integers instead of the sequence elements themselves.
Pro: The operation of comparing two integers is faster than the operation of comparing two strings, so a slight performance gain is received after every comparison, which can add up to a lot overall.
Con: Using a cryptographic hash function takes time to convert all the sequence elements and may end up costing more time than you gain back from the integer comparisons. You could use the built-in hash function for a string, but that will not guarantee uniqueness. (A sketch of this idea appears after the optimizations list.)
Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix and then only calculate additional diagonals as needed. And then also use this approach to possibly remove the need, on some comparisons, to compare all three adjacent cells, as described here.
Pro: Can take an algorithm that always takes O(n * m) time and make that only the worst-case scenario; the best case becomes practically linear, and the average case is somewhere between the two.
Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time comprehending how to convert it into Ruby based on how it is described at the site linked to above.
Make a C module and do the hard work at the native level in C and just make a Ruby wrapper for it so Ruby can make all the calls to it that it needs.
Pro: I have to imagine that evaluating something like this in C could be a LOT faster.
Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.
This is an optimization for after the solving of edit distance, but the idea is to store additional combined diffs along with the ones produced by each version, to make a delta-tree data structure with the most recently made diff as the root node of the tree, so getting to any version takes worst-case time of O(log n) instead of O(n).
Pro: Would make going back to an old version a lot faster.
Con: It would mean that with every new commit, the delta tree would get a new root node, which will cost time to reorganize the delta tree for an operation that will be carried out a lot more often than going back a version, not to mention the unlikelihood that it will be an old version.
</optimizations>
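Regarding item 2 above, a tiny sketch of mapping lines to integers before comparing (Python for illustration; interning lines through a shared dictionary is shown as an alternative that guarantees uniqueness without paying for a cryptographic hash):
import hashlib

def to_ints_crypto(lines):
    # Cryptographic route: collisions are effectively impossible, but hashing
    # every line costs time up front.
    return [int.from_bytes(hashlib.sha1(line.encode("utf-8")).digest()[:8], "big")
            for line in lines]

def to_ints_interned(lines, table):
    # Interning route: give each distinct line a small integer; uniqueness is
    # guaranteed as long as both sequences share the same table.
    return [table.setdefault(line, len(table)) for line in lines]

table = {}
old_ids = to_ints_interned(["a", "b", "c"], table)
new_ids = to_ints_interned(["a", "x", "c"], table)
print(old_ids, new_ids)  # [0, 1, 2] [0, 3, 2]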
So are these things worth the effort?
With regard to item 4 in your list, this seems to be (from what I can tell) how most gems work if there is any heavy lifting to be done by the code. Rails plays nice with the gem system, so you should find that if you need to incorporate this - probably alongside the other optimisations you have suggested here - it should be fine, although you may need to recompile for different platforms.

Background reading for parsing sloppy / quirky / "almost structured" data?

I'm maintaining a program that needs to parse out data that is present in an "almost structured" form in text. i.e. various programs that produce it use slightly different formats, it may have been printed out and OCR'd back in (yeah, I know) with errors, etc. so I need to use heuristics that guess how it was produced and apply different quirks modes, etc. It's frustrating, because I'm somewhat familiar with the theory and practice of parsing if things are well behaved, and there are nice parsing frameworks etc. out there, but the unreliability of the data has led me to write some very sloppy ad-hoc code. It's OK at the moment but I'm worried that as I expand it to process more variations and more complex data, things will get out of hand. So my question is:
Since there are a fair number of existing commercial products that do related things ("quirks modes" in web browsers, error interpretation in compilers, even natural language processing and data mining, etc.) I'm sure some smart people have put thought into this, and tried to develop a theory, so what are the best sources for background reading on parsing unprincipled data in as principled a manner as possible?
I realize this is somewhat open-ended, but my problem is that I think I need more background to even know what the right questions to ask are.
Given the choice between what you've proposed and fighting a hungry crocodile while covered in raw-beef-flavored marmalade with both hands tied behind my back, I'd choose the ...
Well, OK, on a more serious note: if you have data that doesn't abide by any "sane" structure, you have to study the data, find the frequencies of quirks in it, and correlate the data with the given context (i.e. how it was generated).
Printing and then OCR'ing to get the data back in is almost always going to lead to heartbreak. The company I work for employs a veritable army of people who manually read such documents and hand-"code" (i.e. enter by hand) the data for known problematic OCR scenarios, or for documents our customers detect the original OCR failed on.
As for leveraging "parsing frameworks", these tend to expect data that will always follow the grammar rules you've laid out. The data you've described has no such guarantees. If you go that route, be prepared for unexpected - though not always obvious - failures.
By all means, if there is any way possible to get the original data files, do so. Or if you can demand that those providing the data deliver it in a single well-defined format, even better. (It might not be "YOUR" format, but at least it's a regular and predictable format you can convert from.)

Tool to parse text for possible Wikipedia links

Does a tool exist that can parse text and output that text, hyper-linked to Wikipedia entries for words of interest?
For example, I'd like a tool that could turn something like:
The most popular search algorithm on a
sorted list is the binary search.
Into:
The most popular search algorithm on a
sorted list is the binary search.
It would be wonderful if Wikipedia had an API which would do this since they would be best equipped to determine what "words of interests" are.
In my example I simply linked all combinations which linked directly to an entry except for The and most.
There is a tool that does exactly what you're asking for.
http://wikify.appointment.at/
It's not perfect, but it works.
You have two separate problems to solve here:
Deciding which words should be linked
Determining if there's a suitable entry to link these words to
Now, (2) is simpler, though it's also somewhat problematic. Wikipedia seems to have an API that allows you to gather data efficiently, and they also allow "screen scraping". But there's a problem with disambiguation - sometimes you might not hit the entry you wanted. For example, python links to a disambiguation page, as it can be a programming language, a snake, and a couple of other things.
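For example, here is a small sketch of checking whether a candidate phrase resolves to an existing article via the MediaWiki query API (Python; the endpoint and parameters are the documented ones, but treat the response handling as an assumption to verify against the API docs):
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def has_wikipedia_entry(phrase):
    """Return True if the phrase resolves to an existing article (following redirects)."""
    params = urllib.parse.urlencode({
        "action": "query",
        "titles": phrase,
        "redirects": 1,
        "format": "json",
    })
    req = urllib.request.Request(API + "?" + params,
                                 headers={"User-Agent": "wikify-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        pages = json.load(resp)["query"]["pages"]
    # Missing titles come back with a "missing" marker and a negative page id.
    return all("missing" not in page for page in pages.values())

print(has_wikipedia_entry("binary search"))      # likely True
print(has_wikipedia_entry("zzzz no such page"))  # likely False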
(1) is much harder, though. You can take the "simple approach" and attempt to find links for all non-trivial nouns (or even noun/adjective pairs). Non-trivial here means omitting words like "friend", "word", "computer", etc.
But this would result in a plethora of links, which isn't convenient to read. It's really up to you to decide what's interesting in the text, and this depends a lot on the text itself. In an article for professional programmers, do you really want to link to "search algorithm" every time? For beginners, perhaps you do.
To conclude, I strongly doubt there's a single general-purpose tool that will do the trick for you. But you surely have all the options at hand, and something tailored to your needs can be coded without too much effort.
Silviu Cucerzan of Microsoft Research tackled this problem. Well, not the problem of inserting the links, but the general issue of determining what entities are being mentioned in some piece of text. Fortunately for you, he used Wikipedia articles as his set of entities. His paper, "Large-Scale Named Entity Disambiguation Based on Wikipedia Data", is available on his website. Direct link: pdf.
