Machine Learning - Software to learn file formats by example

My program can read several dozen file formats, using the traditional approach where I write procedural code for each file format. Most of these formats have their own unique loader library, their own bugs, their own limitations, and the whole thing is a huge time sink for me. I'd like to support a ton of other formats, but they're mostly not worth my time because they're not popular enough.
I'd like to replace my existing loaders with a single loader powered by a file format descriptor. I'm certain that someone has created software to learn file formats by example. My existing loaders would make excellent fitness functions for those formats, and I can write fitness functions for new formats too.
My question is, what software can I use to "learn" file formats by example, and how can I convert that "learning" into a descriptor for use with a generic loader?
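To make the goal concrete, here is a minimal sketch of the kind of descriptor-driven generic loader I mean; the descriptor schema is hypothetical (existing tools like Kaitai Struct are built around the same declarative idea, though they don't learn the descriptor for you):

    import struct

    # Hypothetical descriptor: ordered (field_name, struct_format) pairs.
    # Here, the first 14 bytes of a BMP file header.
    BMP_HEADER = [
        ("magic",       "2s"),
        ("file_size",   "<I"),
        ("reserved",    "<I"),
        ("data_offset", "<I"),
    ]

    def load(path, descriptor):
        """Generic loader: interpret the file according to the descriptor."""
        with open(path, "rb") as f:
            data = f.read()
        record, offset = {}, 0
        for name, fmt in descriptor:
            (record[name],) = struct.unpack_from(fmt, data, offset)
            offset += struct.calcsize(fmt)
        return record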

Unless you limit the problem in some massive ways, I don't think you're likely to get very far. This would be ideal, but it's beyond the current state of the art. For arbitrary formats you cannot do this; for example, if I give you 200 JPGs, PNGs, BMPs and GIFs, it is very unlikely that a learning system can learn the formats.
Here are some problems researchers have looked at:
Learning a regular expression from examples: see this question: Is it possible for a computer to "learn" a regular expression by user-provided examples? (a toy sketch of this idea follows below)
Information extraction: I give you a list of classified ads from a newspaper, for example apartments for rent, and you need to extract the number of bedrooms, the rent, the deposit and the size of the unit. You can read more about it here: http://en.wikipedia.org/wiki/Information_extraction
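As a toy illustration of the regex-learning item above (real systems are far more sophisticated), one can generalize each aligned character position of the example strings to a character class:

    import re

    def induce_pattern(examples):
        """Naively 'learn' a regex from same-length example strings by
        generalizing each character position to a character class."""
        def klass(c):
            if c.isdigit():
                return r"\d"
            if c.isalpha():
                return r"[A-Za-z]"
            return re.escape(c)

        pattern = ""
        for chars in zip(*examples):
            classes = {klass(c) for c in chars}
            # One consistent class across all examples, else wildcard.
            pattern += classes.pop() if len(classes) == 1 else "."
        return "^" + pattern + "$"

    print(induce_pattern(["AB-12", "XY-34"]))  # ^[A-Za-z][A-Za-z]\-\d\d$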

Related

Print contents of rpg file in human-readable format

Context
A friend of mine is having trouble printing source code in a human-readable format.
The compiled (I assume) programs of their welding robot have the .rpg extension. They want to collect print-outs in human-readable format, possibly for backup or future reference.
Their supplier can provide the software that accomplishes this, albeit at a considerable cost (and possibly an annual license). Because of this, my friend asked me whether an easier/cheaper solution exists.
Examples & Pictures
The files can be read on the console of the robot, an example:
I've done some minor research and I'm fairly sure this is the Report Program Generator (RPG) language developed by IBM. The Assembly-like syntax seems to match; it might be one of the later versions of the language.
My friend has sent me an example .rpg file; the contents seem binary, with some string literals scattered throughout. Screenshot of the contents of an example file in hexadecimal:
The Question
There is not much, if any, clear information to be found online, so I suppose I have multiple questions (for anyone who might know more about this):
Is this (first image) Report Program Generator (RPG) code?
Does the .rpg file contain compiled or processed code? Maybe an intermediate format?
Is it possible to convert files like the one shown in the example back to source code or a human-readable format, i.e. to 'disassemble' them?
If anyone knows more, don't hesitate to share any information, or to ask for more details if necessary. Thanks in advance!
And maybe not an important question but still something that bugs me (and might indicate I'm on the wrong track):
If this is indeed an RPG program, why would the compiled/processed binary have the .rpg extension? Shouldn't the source file have that? This leads me to believe that either (a) I'm assuming the wrong things (the language, etc.) or (b) this is an intermediate format, easier for machines to read, that has to be interpreted by some kind of runtime system.
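A quick way to pull out the readable string literals from a file like this, for anyone who wants to take a look, is a pass in the spirit of the Unix strings utility; a sketch, with the file name as a placeholder:

    import re

    # Dump printable-ASCII runs of 4+ bytes from the mystery file.
    with open("example.rpg", "rb") as f:
        data = f.read()
    for run in re.findall(rb"[\x20-\x7e]{4,}", data):
        print(run.decode("ascii"))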
I don't think that's any version of IBM's RPG language. RPG does have a MOVEL opcode, but it doesn't have any of the others.
Also, all the versions of the IBM language have been intended for business programming. I doubt that it would have been used for robotics.
My guess is that's a proprietary language of the company that makes the robot.
There are some similarities but it does not look like IBM RPG language.
RPG sources are in fact source physical file members. They are not stored in the "traditional" file system but in OS/400 libraries; therefore RPG sources have no extension. They can be converted to Integrated File System stream files, though.
I can't answer this question, I'm afraid, as it's an unknown language to me.
I suspect that the OP has misidentified the file type/extension; the extension is probably .prg, and the files serve as instructions for a Panasonic industrial welding robot. The following forum [drilled down to Panasonic Robots] bills itself as the biggest Industrial Robots Supportforum worldwide!; perhaps it is a good place to ask about the images provided in the OP, and about getting source from what appears to be a binary instruction stream.
FWIW, the first image seems to show that the Ezed utility [on the console] already gives that human-readable format, so the question might then be how to save that output and transfer it elsewhere; e.g. what types of comm ports and file-transfer utilities are available from whatever platform/OS.

Is there a Way to localize an Application on Various Platforms

We are developing an application which runs on various platforms (Windows, Windows RT, Mac OS X, iOS, Android).
The problem is how to manage the different localizations on the different platforms in an easy way. The language files on the different platforms have various formats (some are XML-based, others are simple key-value pairs, and others are totally crazy formats, like on Mac OS).
I'm sure we aren't the first company with this problem, but I wasn't able to find an easy-to-use solution that achieves one "datasource" where the strings are collected in the different languages (the best would be a user interface for the translators) and can then be exported to the different formats for the different platforms.
Does anybody have a solution for this problem?
Greetings
Alexander
I recommend using the GNU Gettext toolchain for management, and at runtime use either:
some alternate implementation for runtime reading, like Boost.Locale,
your own implementation (the .mo format is pretty trivial; a sketch follows below), or
the Translate Toolkit to convert the message catalogs to some other format of your liking.
You can't use the libintl component of GNU Gettext, because it is licensed under the LGPL and the terms of both the Apple AppStore and the Windows Live Store are incompatible with that license. But it is really trivial to reimplement the bit you need at runtime.
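To back up the claim that the format is trivial, here is a minimal .mo reader; a sketch only, ignoring the hash table and plural forms:

    import struct

    def parse_mo(path):
        """Read a GNU Gettext .mo catalog into a {msgid: msgstr} dict."""
        with open(path, "rb") as f:
            data = f.read()
        magic = struct.unpack_from("<I", data, 0)[0]
        if magic == 0x950412DE:          # little-endian file
            u32 = lambda off: struct.unpack_from("<I", data, off)[0]
        elif magic == 0xDE120495:        # big-endian file
            u32 = lambda off: struct.unpack_from(">I", data, off)[0]
        else:
            raise ValueError("not a .mo file")
        count, orig_tab, trans_tab = u32(8), u32(12), u32(16)
        catalog = {}
        for i in range(count):
            olen, ooff = u32(orig_tab + 8 * i), u32(orig_tab + 8 * i + 4)
            tlen, toff = u32(trans_tab + 8 * i), u32(trans_tab + 8 * i + 4)
            key = data[ooff:ooff + olen].decode("utf-8")
            catalog[key] = data[toff:toff + tlen].decode("utf-8")
        return catalog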
The Translate Toolkit actually reimplements all or most of GNU Gettext and supports many additional localization formats, but the Gettext .po format has the most free tools for it (e.g. Poedit for local editing and Weblate for online editing), so I recommend sticking with it anyway. Also read the GNU Gettext manual; it describes the intended process and the rationale behind it well.
I have quite good experience with the toolchain. The Translate Toolkit is easy to script when you need some special processing like extracting translatable strings from your custom resource files and Weblate is easy to use for your translators, especially when you rely on business partners and testers in various countries for most translations like we do.
Translate Toolkit also supports extracting translatable strings from HTML, so the same process can be used for translating your web site.
I did a project for iPhone and Android which had many translations and I think I have exactly the solution you're looking for.
The way I solved it was to put all translation texts in an Excel spreadsheet and use a VBA macro to generate the .strings and .xml translation files from there. You can download my example Excel sheet plus VBA macro here:
http://members.home.nl/bas.de.reuver/files/multilanguage.zip
Just recently I've also added preliminary Visual Studio .resx output, although that's untested.
Edit: by the way, my JavaScript Xcode/Eclipse converter might also be of use.
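If you would rather script the same idea without Excel/VBA, here is a rough Python equivalent; the CSV layout (key,en,de,...) and output file names are assumptions, and embedded quotes are not escaped in the .strings output:

    import csv
    from xml.sax.saxutils import escape

    with open("translations.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    languages = [c for c in rows[0] if c != "key"]

    for lang in languages:
        # iOS-style key/value pairs.
        with open(f"Localizable_{lang}.strings", "w", encoding="utf-8") as s:
            for row in rows:
                s.write(f'"{row["key"]}" = "{row[lang]}";\n')
        # Android-style resource XML.
        with open(f"strings_{lang}.xml", "w", encoding="utf-8") as x:
            x.write("<resources>\n")
            for row in rows:
                x.write(f'  <string name="{row["key"]}">{escape(row[lang])}</string>\n')
            x.write("</resources>\n")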
You can store your translations on https://l10n.ws and get them via their API.
Disclaimer: I am the CTO and Co-Founder at Tethras, but will try to answer this in a way that is not just "Use our service".
As loldop points out above, you really need to normalize your content across all platforms if you want to have a one-stop solution for managing your localized content. This can be a lot of work, and would require much coding and scripting and calling of various tools from the different SDKs to arrive at a common format that would service the localization needs of all the various file formats you need to support. The length and complexity of my previous sentence is inversely proportional to the amount of work you would need to do to arrive at a favorable solution for all of this.
At Tethras, we have built a platform that alleviates the need for multi-platform software publishers to have to do this. We support all of the native formats from the platforms you list above, and can leverage translations from one file format to another. For example, translate the content in Localizable.strings from your iOS app into a number of languages, then upload your equivalent strings.xml file from Android or foo.resx from Windows RT to the system, and it will leverage translations for you automatically. Any untranslated strings will be flagged and you can order updates for these strings.
In effect, Tethras is a CMS for localized content across many different native file formats.

Profanity checking for promotional codes

I have a slightly unusual profanity-related question.
Now we're used to dealing with profanity-filtering of user-generated content — any method is imperfect, but products like CleanSpeak and WebPurify do a good-enough job.
The problem we have at the moment, though, is that we've been building an engine to run promotional-code–based competitions that will be used internationally. We could do with checking that none of these codes is profane in Latin American Spanish or Malay (at least in the first instance), to make sure we don't send out a code that's equivalent to FUCK23 or PEN15 or something.
We've tried Googling around and asking people we know, but we can't find an easy way of getting hold of an es-419 or an ms profanity list to filter the codes against. As there are literally millions of codes per locale, we'd rather do an offline check than hit an API for each code (which would be expensive both in terms of bandwidth and usage fees).
I know this is a bit of a long shot, but does anyone know of a good source for profanity lists in different languages?
Disclaimer: we know that no profanity filtering is perfect, that it's essentially futile with user-generated content, and we have read SO #273516: How do you implement a good profanity filter? — that's not what we're asking.
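For reference, this is the kind of offline normalize-and-scan check we have in mind once a wordlist is found; the leetspeak substitution table and the wordlist here are placeholders:

    # Placeholder leetspeak normalization: 0->O, 1->I, 3->E, 4->A, 5->S.
    LEET = str.maketrans("01345", "OIEAS")

    def is_clean(code, badwords):
        """Flag codes whose normalized form contains a listed word,
        so PEN15 is caught by 'PENIS' and FUCK23 by 'FUCK'."""
        normalized = code.upper().translate(LEET)
        return not any(word in normalized for word in badwords)

    codes = ["FUCK23", "PEN15", "XQZ7KM"]
    print([c for c in codes if is_clean(c, {"FUCK", "PENIS"})])  # ['XQZ7KM']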
Building or finding lists in other languages is extremely time-consuming and difficult (trust me, we've built many of them at Inversoft). You might be better off tweaking the code generators instead (from what I can tell, your codes are generated programmatically rather than written by humans).
The best way to tweak a generator is to ensure that the codes can't easily form words based on the general use of consonants and vowels in most European languages. Things get a bit dicey in Polish and others, but it usually works.
Generally, most codes that start with a vowel are followed by another vowel or a non-joining consonant (like 'q' without a 'u'). If the code starts with a consonant then the next character is the same consonant or one that has a low probability of being used. For example, if you start with 's' then adding 'g' is a good choice.
You could also use Wiktionary or other similar sources (like Linux dictionary files) to build a statistical approach to this. By extracting the probabilities of characters appearing next to each other, you should be able to generate codes that have a good chance of never forming words in any language.
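A rough sketch of that heuristic, treating any adjacent vowel/consonant letter pair as too pronounceable; this is a simplification of the rules above, not Inversoft's actual generator:

    import random
    import string

    VOWELS = set("AEIOU")
    ALPHABET = string.ascii_uppercase + string.digits

    def too_pronounceable(prev, nxt):
        # Alternating consonant/vowel letter pairs are what make a code
        # readable as a word, so reject those transitions.
        return prev.isalpha() and nxt.isalpha() and (prev in VOWELS) != (nxt in VOWELS)

    def generate_code(length=6, rng=random):
        code = [rng.choice(ALPHABET)]
        while len(code) < length:
            candidate = rng.choice(ALPHABET)
            if not too_pronounceable(code[-1], candidate):
                code.append(candidate)
        return "".join(code)

    print(generate_code())  # e.g. 'SGT4QP'; word-like runs such as FUCK cannot occur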
However, if I misread your question and you aren't generating the codes programmatically, you can ignore my response completely. :)
I have had the same thoughts while trying to generate 6-character codes for a project I am doing.
I decided to reduce the likelihood of obviously profane codes, so I removed the vowels that I found in as many "bad" words as I could think of from my initial base-36 generation code. That left me with something more like a base-28 system that did not include a, e, i, o, u, 1 or 0. The one and zero were removed to reduce confusion, in some fonts, between those characters and I, L and O.
So far I have not seen a "profane" code generated, although base 28 still yields 28^6 ≈ 482 million unique six-character combinations.
I cannot vouch for other languages, and had not even considered them...

Open-source OCR package that can handle unknown characters?

I want to find a (preferably) open-source OCR package (for any OS) that is capable of handling a new character set.
The language is Latin, but with some scribal abbreviations, about 10 different abbreviations that aren't in Unicode.
The text has been printed using specially-developed fonts, and I have high-res images of the text.
I'm assuming some training is going to be needed, first to map the scribal abbreviations to ASCII, and then presumably corpus-specific training for the software to learn where the abbreviations tend to appear within words.
Could anyone recommend a (preferably) open-source package capable of handling this?
AFAIK there is no library (free or commercial) that can be used as-is for what you describe (a language with characters not representable in Unicode). But as a good starting point there is an open-source OCR engine called Tesseract, which you could take and modify for your special scenario. Another interesting base could be OCRopus. But beware: this will mean lots of work.
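As a taste of the eventual workflow, here is a sketch using the pytesseract wrapper once a custom model exists; the "lat_abbrev" traineddata name and the scan file name are hypothetical, and building that model is where the work lies:

    from PIL import Image
    import pytesseract

    # Assumes a custom-trained "lat_abbrev" traineddata file, mapping the
    # scribal abbreviations to ASCII placeholders, is installed for Tesseract.
    page = Image.open("folio_017r.png")   # hypothetical high-res scan
    text = pytesseract.image_to_string(page, lang="lat_abbrev")
    print(text)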

Which is the best import / export LaTeX tool?

Working in academia publishing CS/math, you sooner or later find yourself trying to publish in a journal that will only accept .doc/.rtf. This means tedious, boring hours of translating line after line, especially equations, from LaTeX to an inferior format. Over the years I have tried a number of export tools for LaTeX, but none of the free ones has left me very satisfied. I'd like this page to collect and monitor the best import/export tools for LaTeX, whether to .doc/.rtf or to other useful formats (e.g. HTML, MathML).
Thus, what is your one favorite import or export LaTeX tool?
AFAIK there isn't really a convenient and effective way to achieve what you're trying to do. What I usually do on those rare occasions is export to PDF, select all the text, and paste it into Word. It's horrible, it messes things up, and of course it doesn't adjust your citations.
To this day I don't understand how people writing in scientific fields can write and publish in Word. It is common in some of the human-computer interaction literature, but I have not seen it in other conferences and journals. May I ask which one it is?
Also, some venues, once you've already been accepted, will be willing to accept a PDF if you push for it. You may have to make small adjustments yourself. Negotiation sometimes works on this.
The UK TeX FAQ has been collecting answers on this for quite some time now. :)
See Conversion from (La)TeX to HTML and Other conversions to and from (La)TeX. There is another FAQ specifically about Converters between LaTeX and PC Textprocessors maintained by Wilfried Hennings.
For LaTeX to HTML there are LaTeX2HTML, TtH, TeX4ht, TeXpider and Hevea; in my experience TeX4ht is the best. For LaTeX to Word, you can go through RTF with TeX2RTF (not so good), or through Adobe Acrobat, which can produce PDF that Word can read (not good either), or go through HTML as above; but best is to use TeX4ht, which can generate OpenOffice ODT format, from which conversion to Word is easy.
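For instance, a minimal batch-conversion sketch shelling out to TeX4ht's mk4ht driver; this assumes a TeX distribution with TeX4ht installed, and the oolatex target produces ODT:

    import pathlib
    import subprocess

    # Convert every .tex file in the current directory to OpenOffice ODT.
    for tex in pathlib.Path(".").glob("*.tex"):
        subprocess.run(["mk4ht", "oolatex", tex.name], check=True)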
The UK TeX FAQ also has many other useful things; you should take a look.
