Dictionary component or source code that can spell check in multiple languages - Delphi

We are developing an application in which we need to implement spell checking for Indic languages that use ANSI fonts (not Unicode).
I am looking for a dictionary component or source code that will allow us:
To maintain separate dictionaries, for example Legal, Commercial, etc.
Support more than one language
If possible, allow the developer to set parsing parameters so that we as developers can determine how given text should be broken down into words
Support adding words to the dictionary (it should maintain a separate dictionary and not modify the original dictionary)
Support a custom dialog box so we can design our own dialog box (if required)
Should be able to distinguish the case of characters, meaning it should not consider "cascade" and "Cascade" the same (if possible); there should be some kind of parameter that allows us to enable/disable this feature
If this dictionary can check spellings in another Windows app, that would be an added advantage.

As the link in the comment suggests, I would look at the Addict component suite and plus pack.
The dictionary wizard provides a way of creating specialized dictionaries. There are also APIs "allowing for text parsing, dictionary lookup, text corrections, misspelling suggestions, thesaurus contexts and more."
Addict was written and designed from the ground up to be as robust and flexible as possible. Developers have complete API access to all of Addict's core features, including main dictionaries, control parsers, parsing engine, entities to ignore while parsing, custom dictionaries, suggestions generation, thesaurus file, and much more.

You should split the spellchecker core from your UI.
The dialog boxes and configuration then belong in your application, while the spellchecker core just works with plain text and returns a list of errors with suggestions.
There are dozens of open-source spellchecker core implementations. You can even use online services such as Google's (see the Google Wave videos).
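To illustrate the split, here is a minimal sketch of such a core using PHP's pspell binding for Aspell (chosen only because it is a readily available open-source core; a Delphi component would expose the same kind of interface). The core takes plain text and hands back misspelled words with suggestions; everything visual stays in your application.

<?php
// Minimal sketch of a "spellchecker core": plain text in, errors plus suggestions out.
// Assumes the pspell extension (an Aspell binding) and an English dictionary are installed.
function check_text(string $text, string $lang = 'en'): array
{
    $dict = pspell_new($lang);   // open the main dictionary for the language
    $errors = [];
    // Naive word splitting; a real component would expose configurable parsing rules.
    foreach (preg_split('/[^\p{L}\']+/u', $text, -1, PREG_SPLIT_NO_EMPTY) as $word) {
        if (!pspell_check($dict, $word)) {
            $errors[$word] = pspell_suggest($dict, $word);
        }
    }
    return $errors;              // the UI layer decides how to present these
}

print_r(check_text('Thiss sentence has a mispeling.'));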

Related

How to extract entities from html using natural language processing or other technique

I am trying to parse entities from web pages that contain a time, a place, and a name. I read a little about natural language processing, and entity extraction, but I am not sure if I am heading down the wrong path, so I am asking here.
I haven't started implementing anything yet, so if certain open source libraries are only suitable for a specific language, that is ok.
A lot of the time the data would not be found in sentences, but instead in HTML structures like lists (e.g. 2013-02-01 - Name of Event - Arena Name).
The structure of the web pages will be vastly different (some might use lists, some might put them in a table, etc.).
What topics can I research to learn more about how to achieve this?
Are there any open source libraries that take into account the structure of HTML when doing entity extraction?
Would extracting these (name, time, place) entities from HTML be better (or even possible) with machine vision, where the CSS styling might make it easier to differentiate the important parts (name, time, location) of the unstructured text?
Any guidance on topics or open source projects that I can research would help, I think.
Many programming languages have external libraries that generate canonical date-stamps from various formats (e.g. in Java, using SimpleDateFormat). As you say, the structure of the web pages will be vastly different, but dates can be expressed in only a small number of variations, so writing down the regular expressions for a few (say, half a dozen) formats will enable extraction of dates from most, if not all, HTML pages.
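A rough sketch of that idea, in PHP purely for illustration (DateTime::createFromFormat plays the role SimpleDateFormat plays in Java; the patterns below are examples, not an exhaustive list):

<?php
// Hypothetical helper: try a handful of common date patterns against page text
// and normalise the matches to canonical ISO date-stamps.
function extract_dates(string $html): array
{
    $text = strip_tags($html);
    $patterns = [
        '/\b(\d{4}-\d{2}-\d{2})\b/'   => 'Y-m-d',   // 2013-02-01
        '/\b(\d{2}\/\d{2}\/\d{4})\b/' => 'd/m/Y',   // 01/02/2013
        '/\b(\d{1,2} \w+ \d{4})\b/'   => 'j F Y',   // 1 February 2013
    ];
    $dates = [];
    foreach ($patterns as $regex => $format) {
        preg_match_all($regex, $text, $matches);
        foreach ($matches[1] as $raw) {
            $date = DateTime::createFromFormat($format, $raw);
            if ($date !== false) {
                $dates[] = $date->format('Y-m-d');
            }
        }
    }
    return array_unique($dates);
}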
Extraction of places and names is harder, however. This is where natural language processing will have to come in. What you are looking for is a Named Entity Recognition (NER) system. One of the best open source NER systems is the Stanford NER. Before using it, you should check out their online demo. The demo has three classifiers (for English) that you can choose from. For most of my tasks, I find their english.all.3class.distsim classifier to be quite accurate.
Note that an NER performs well when the places and names you want to extract occur in sentences. If they are going to occur in HTML labels, this approach is probably not going to be very helpful.
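If you do want to try the Stanford NER from a web stack, one low-effort route is to shell out to its command-line interface. A hypothetical snippet (the jar, classifier and file paths are placeholders):

<?php
// Run the Stanford NER over extracted page text and capture its tagged output.
$text = 'John Smith will speak at Madison Square Garden on 1 February 2013.';
file_put_contents('/tmp/page.txt', $text);

$cmd = 'java -cp stanford-ner.jar edu.stanford.nlp.ie.crf.CRFClassifier'
     . ' -loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz'
     . ' -textFile /tmp/page.txt';
exec($cmd, $output);

echo implode("\n", $output);   // tokens labelled PERSON / LOCATION / ORGANIZATION / O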

Most efficient way to translate a Sitecore website to 4 other languages (not having the translators in your Sitecore CMS)

I am looking for a good way to translate an existing Sitecore installation (the English language is available) into 4 other languages (Russian, Chinese, Portuguese, etc.). A dedicated translation company will translate all the texts we deliver into the specified languages, but I'm curious how other companies have set this up. I thought about just exporting all Sitecore items which have to be translated, using the database language export function in Sitecore, and having the translation company edit those files. By just replacing the language tags in the XML, we should be able to import this file as the newly created other language; however, I'm afraid that this XML structure will be totally useless for a translation company and that they will drown in the codes inside this XML. How can we do this efficiently? Is there any other way than just giving those translation people access to the Sitecore environment and having them edit the languages there? Any Shared Source module to achieve this? I still have a lot of questions; does anyone have some experience in achieving this?
Your primary options are either the language export/import functionality (as you mention), or a workflow-based solution that integrates with your translation agency's Translation Management System (if they have one -- hopefully they do).
The former is better for the initial translation. Typically, your agency should be able to handle translation of content within XML files; a good one can. If you create all the needed language versions beforehand and copy the English content into them, it will make the files easier to work with, as they'll already have tags for the new languages in them. I've seen the creation of these language versions done with Revolver (http://www.codeflood.net/revolver/), but it could also be done with custom code or workflow.
For ongoing maintenance of your translated content, you'll probably want to integrate through workflow. Clay Tablet Technologies (http://www.clay-tablet.com/) has a middleware component with Sitecore integration that can make this easier, depending on your translation agency. You can also do your own workflow-based integration, with workflow commands that allow your users to send content for translation. Then you'd need some sort of listener that pulls the translated content back in and continues the workflow.
Hope this helps!
You could also check out Lionbridge (http://en-us.lionbridge.com/sitecore-and-lionbridge-announce-partnership-to-help-companies-thrive-across-borders.htm) as a solution.
From my own experience, our customers normally use the Sitecore import/export function as a first step and then use Lionbridge or Clay Tablet as a service.
One important thing to think about with translations is the ongoing work. The initial translation is rather simple, but the second one and those after it might be more troublesome. What if different changes have been made in different languages? If local changes were made in the content of, say, the French version, you couldn't just send the English version for the second translation, since you would also have to accommodate the regional changes in the content.
Having worked with literally dozens of Sitecore clients worldwide, and helped get content to and from all the largest (and many smaller) translation firms, I can attest to the inefficiency of trying to do translation in situ, that is, in Sitecore. I liken it to asking an electrician to come over and rewire your house, but as they reach for their toolbox from the truck you tell them, "Nope, you need to do it by hand".
The very best way to manage anything more than a page or two of content for translation is to export it seamlessly. Deliver it to the LSP in a proper format (XML or XLIFF) and, when possible, auto import it to their TMS. Once translated, the content should then flow seamlessly back into Sitecore.
You can code this yourself, but the pitfalls are non-trivial just on the Sitecore side (if you want intuitive UIs, scalability, and all the features that meet the needs of translation), let alone the challenges of connecting to the systems LSPs use. (For example, who here knows the relative merits and risks of using SDL's Nexus connector versus their CTA for connecting to TMS?)
As kindly mentioned above, there are commercially available solutions that meet all these needs and more. So if you've got even a modest amount of content, and want to send it to any translation provider of your choice, I'd be happy to discuss how we can help.
The main issue with translation isn't technical at all: the XML export is a simple enough format and all agencies should be able to deal with it with no problems. As others have suggested, maintenance after the initial translation is slightly more problematic, but they also point to tools to achieve this.
The main issue we've found with translation is actually linguistic: how to achieve consistency of phrasing that matches the original but is sufficiently adjusted to local requirements. Translation companies usually have software to aid this (libraries of the phrases they translate, etc.), but working with an exported XML file doesn't provide the context of seeing the content in situ. A particular item may be translated correctly, and the site consistently, but as each page may be built from multiple items, there can easily be conflicts between pieces of content as presented.
That makes working in the Sitecore back end (maybe with field security settings to limit what can be edited) or in the Page Editor (possibly pre-filling fields with the English values) a viable idea.

F# type providers, how do they work

I don't quite get type providers after watching Don Syme's PDC video:
http://player.microsoftpdc.com/Session/04092962-4ed1-42c6-be07-203d42115274
Do I understand this correctly? You can get ready-made type providers for Twitter, Excel...
What if I have a custom XML structure? Do I need to implement my own type provider for that, and how is this different from creating my own custom mapper?
Say you have some arbitrary data entity out in the world. For this example, let's say it's a spreadsheet.
Let's also say you have some way to get/infer schema/metadata for that data - that is, you can know types (e.g. double versus string) and relationships (e.g. this column means 'salary') and metadata (e.g. this sheet is for the June 2009 budget).
Type providers let you code up a kind of 'shim library' that knows about some kind of data entity (e.g. a spreadsheet) and use that library as part of the compiler/IDE toolchain, so that you can write code like
mySpreadsheet.ByRowAndColumn.C4
or something, and get Intellisense (autocompletion) and tooltips (e.g. describing cell C4 as Salary for Bob) and static typing (e.g. have it be a double or a string or whatever it is). Essentially this gives you the tooling affordances of statically-typed object models with the ease-of-use leverage of various dynamic or code-generation systems, with some improvements on both. The 'cost' is that someone has to write the shim library (the 'type provider'), but many such providers are very general (e.g. one that speaks OData or Excel or WMI or whatnot) and so a small handful of type provider libraries makes vast quantities of the world's data available in your programming language with static typing and first-class tooling support.
The architecture is an open compiler, where provider-authors implement a small interface that allows them to inject new names/types into the programming context. A type provider might be just another library you pass to the compiler (a reference in your project, -r-ed), with extra metadata that marks it as a type provider that participates in the compilation/IDE/codegen portions of development.
I don't know exactly what a "custom mapper" is in your XML example, so I can't draw a comparison.
I understand that this is an old question, but type providers are now available (F# 3.0 has been released). There is a white paper explaining them too, and there is a code drop from Microsoft that lets you see under the hood.
http://www.infoq.com/news/2012/09/fsharp-type-providers
Type providers use F#'s quotations to act as (effectively) compiler plugins that can generate code based on meta-data at compile time.
This allows you to (for example) read in some JSON, or a database schema, or some XSD, or whatever, and then generate F# classes to model the domain that metadata represents.
In terms of creating them, I wrote a few blog posts that might be of interest starting with Type Providers from the Ground Up.

What is the best way to store multiple language versions of a website?

My web site (on Linux servers) needs to support multiple languages.
What is the best practice to have/store multiple languages versions of the same site?
Some I can think of:
store in DB
different view file for each language
gettext
hard-coded words in PHP files (like in phpBB)
With web sites, you really have several categories of content to consider for localization:
The article-type content elements that you would in many cases create, edit and publish in a CMS.
The smaller content blocks that are common to every page (or a sub-group of pages), such as tagline, blurb, text around a contact form, but also imported content such as a news ticker or ads and affiliate links. Some of these may only appear for one language (for example, if you don't offer some services in some regions, or don't have, say, language-appropriate imported content for a particular language: it can be better to remove an element rather than offering English to people who may not speak it).
The purely functional elements, like "Click here to comment", "More...", high-level navigation, etc., which are sometimes part of your template. Some of these may be inside images.
For 1., the main decision is whether or not to use a CMS. If yes, you absolutely need to choose one that supports multiple languages. I'm not up to date with recent developments in PHP CMSs, but several of the Django CMS apps (Django-CMS-2, FeinCMS) support multi-language content. Don't forget that date stamps, for example, need to be localized too (or you can get around this by choosing ISO dates, though that may not always be possible). If you don't use a CMS, and everything is in your HTML files, then gettext is the way to go, and keep the .mo files (and your offline .po files) in folders by language.
For 2., if you have a CMS with good multi-lingual support, get as much as possible inside the CMS. The reason is that these bits do change, and you want to edit your template as little as possible. If you write the code yourself, think of ways of exporting all in-CMS strings per language, to hand them to translators. Otherwise, again, gettext. The main issue is that these elements may require hard-coded language-selection code (if $language = X, display content1, ...).
For 3., if it's in your template, use gettext. For images, the per-language folders will come in handy, and for heaven's sake choose images whose generation can be automated, or you (or your graphic artist) will go mad editing hundreds of custom images with strings in languages you don't understand.
For both 2. and 3., abstracting from the language selection may help selecting the appropriate blocks or content directory (where localized images or .mo files are kept).
What you definitely want to avoid is keeping a pile of HTML files with extensive text content in them that would be a nightmare to maintain.
EDIT: Everything about gettext, .po and .mo files is in the GNU gettext manual (more than you ever wanted to know) or in a slightly dated but friendlier tutorial. For PHP, there are the PHP gettext functions, and also the Zend_Locale documentation.
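For reference, a minimal sketch of the PHP gettext setup described above; the text domain and directory layout are illustrative and assume compiled catalogues at locale/<lang>/LC_MESSAGES/messages.mo:

<?php
// Pick the locale for the current visitor; hard-coded here for illustration.
$lang = 'de_DE.UTF-8';
putenv('LC_ALL=' . $lang);
setlocale(LC_ALL, $lang);

// Point gettext at the per-language folders and select the text domain.
bindtextdomain('messages', __DIR__ . '/locale');
textdomain('messages');

echo gettext('Click here to comment');   // the German string, if the .mo provides one
echo _('More...');                       // _() is shorthand for gettext()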
I recommend using Zend_Translate's Gettext adapter, which parses .mo files. It is very efficient and supports caching. Your calls would look like
echo $translation->_("Hello World");
which would find the locale-specific translation for that string.
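For context, constructing the $translation object with Zend Framework 1 looks roughly like this (the catalogue path and locale are placeholders; later Zend versions take an options array instead of positional arguments):

<?php
require_once 'Zend/Translate.php';

// Illustrative only: point the Gettext adapter at a compiled .mo catalogue.
$translation = new Zend_Translate('gettext', '/path/to/locale/de/messages.mo', 'de');

echo $translation->_('Hello World');   // looks up the locale-specific translation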
Check out i18n support for php: http://php-flp.sourceforge.net/getting_started_english.htm

How do you implement a multiculture web application

I believe several of us have already worked on a project where not only the UI, but also the data, has to be supported in different languages. Such as being able to provide and store a translation of what I'm writing here, for instance.
What's more, I also believe several of us have some time-triggered events (such as expiring membership access) where the user's location should be taken into account to calculate, say, midnight according to the right time zone.
Finally, there's also the need to support right-to-left user interfaces for certain languages and the use of different encodings when reading submitted data files (parsing text and Excel data, for instance).
Currently I'm storing all my translations for all my entities in a single table (not so practical, as it is very hard to find your way around when writing SQL queries to look into a problem), setting UI translations mainly in satellite assemblies, and supporting neither time zones nor right-to-left design.
What are your experiences when dealing with these challenges?
[Edit]
I assume most people think that this level of multiculture requirement is just like building a huge project. As a matter of fact, if you think about an online survey where:
Answers will be collected only until midnight
The questionnaire definition and part of the answers come from a text file (in any language), as well as the translations
Questions and response options must be displayed in several languages, according to who is accessing them
Reports also have to be shown and generated in several different languages
As one can see, we do not have to go too far in an application to have these kinds of requirements.
[Edit2]
Just found out my question is a duplicate
i18n in your projects
The first answer (when ordering by vote) is so comprehensive that I have to get at least part of it implemented someday.
Be very very cautious. From what you say about the i18n features you're trying to implement, I wonder if you're over-reaching.
Notice that the big players' (e.g. eBay, amazon.com, Yahoo, BBC) web applications actually deliver separate apps for each language they want to support. Each of these web applications consumes a common core set of services. Don't be surprised if the business needs of two different countries that even speak the same language (e.g. the UK and US) are different enough that you do need a separate app for each.
On the other hand, you might need to become like the next amazon.com. It's difficult to deliver a successful web application in one language, much less many. You should not be afraid to favor one user population (say, your Asian-language speakers) over others if this makes sense for your web app's business needs.
Go slow.
Think everything through, then really think about what you're doing again. Bear in mind that the more you add (like Right to Left) the longer your QA cycle will be.
The primary piece to your puzzle will be extensive use of interfaces on the code side, and either one data source that gets passed through a translator to whichever languages need to be supported, or separate data sources for each language.
The time issues can be handled by the interfaces, because presumably you will want things to function in the same fashion, but differ in the implementation details. To a large extent, a similar thought process can be applied to the creation of the interface when adjusting it to support differing languages. When you get down to it, skinning is exactly this, where the content being skinned is the interface, and the look/feel is the implementation.
Do what your users need. For instance, most programmers understand English, so there is no sense in translating posts on this site. If many of your users need a translation, add a new table column with the language id, and another column to link a translated row to its original. If your target audience includes users from the Middle East, implement right-to-left. If time precision down to the hour is critical, add a time zone column to the user table, and so on.
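A sketch of that table layout (the table and column names are invented for illustration; SQLite keeps the example self-contained):

<?php
// Each translated row records its language and points back to the original row.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('
    CREATE TABLE posts (
        id          INTEGER PRIMARY KEY,
        language_id TEXT NOT NULL,   -- e.g. "en", "ru", "zh"
        original_id INTEGER,         -- NULL for originals, otherwise the source row id
        body        TEXT NOT NULL
    )');

// Fetch a post in the reader's language, falling back to the original text.
$stmt = $pdo->prepare('
    SELECT COALESCE(t.body, p.body) AS body
    FROM posts p
    LEFT JOIN posts t ON t.original_id = p.id AND t.language_id = :lang
    WHERE p.id = :id AND p.original_id IS NULL');
$stmt->execute([':lang' => 'ru', ':id' => 1]);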
If you're on *NIX, use gettext. Most languages I've used have some level of support; PHP's is pretty good, for instance.
I'll describe what has been done in my project (it wasn't my original architecture, but I liked it anyway).
Providing Translation Support
Text which needs to be translated has been divided into three different categories:
Error text: errors which happen deep in the application business layer
UI text: text which is shown in the user interface (labels, buttons, grid titles, menus)
User-defined text: text which needs to be translatable according to the final user's preferences (that is, the user creates a question in a survey and can also create a translated version of that survey)
For each category the scheme used to provide the translation service is different, so we have:
Error Text: A library with static functions which access resource files
UI Text: A "Helper" class which, linked to the view engine, provides translations from remote assemblies
User-defined Text: A table in the database which provides translations (according to typeID of the translated entity and object id) and is linked to the entity via a 1 x N relationship
I haven't, however, tackled the other obvious problems, such as dealing with time zones, different layouts and picture translation (if that is really necessary). Has anyone tackled this problem in a different way?
Has anyone ever tackled the other i18n problems?
