Specification for VDA formats - EDI

I'm a student and I have to make a project which includes an EDI converter (my own, of course). To be honest, I have to use the ODETTE format and VDA. Does anybody know where I can find any specification for these formats?
Any help would be great! :)

ODETTE is EDIFACT. See http://www.unece.org/trade/untdid/welcome.html
All UN EDIFACT messages are there.
They have versions (like D 96A etc.), so be aware of the version of the message you use.
VDA is used in the German automotive industry.
This is their website: http://www.vda.de/de/index.html
Look e.g. at http://www.vda.de/de/publikationen/publikationen_downloads/index.html?aid=1
They have more than one format: EDIFACT, CSV, fixed-length records.
Yes, you can build your own translator.
Take a look at the Bots open source EDI translator (http://bots.sourceforge.net),
you might get some ideas ;-)
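For orientation, a schematic EDIFACT interchange (all values below are placeholders, not taken from a real VDA/ODETTE message) looks roughly like this; the message type and directory version (e.g. DELFOR, D 96A) sit in the UNH segment:
UNB+UNOA:2+SENDER-ID+RECEIVER-ID+960102:1000+REF001'
UNH+1+DELFOR:D:96A:UN'
... (data segments of the message) ...
UNT+24+1'
UNZ+1+REF001'
UNB/UNZ wrap the interchange, UNH/UNT wrap a single message (the first UNT element is the segment count), and the D:96A part of UNH is where the version you have to watch for appears.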

Need to modify XLIFF XSD

I am converting DITA to XLIFF. In my technical solution I have to specialize (modify) xliff-core-1.2-strict.xsd to accommodate a few DITA attributes. That means some attributes will go along with the "g" tag. For example:
<g id="00001" newAtt="this is new attribute" xid="009"/>
From the translation side I am not sure how it will work, so my questions:
Is it general practice for LSPs to get different flavors of XLIFF XSDs from different companies?
And is it possible for them to use it in their XLIFF editor by loading the updated XLIFF XSD? I tried to explore "Transolution" but did not find any place to put a modified XSD.
Please let me know if you have any thoughts on this.
Thanks.
Here are my answers to your two questions:
I work for an LSP and I've seen all sorts and flavors of Xliff. Most of them try to stick to the OASIS 1.2 transitional schema. Some TMS/CAT producers have added their own extensions. These producers normally provide an XSD so you can validate their Xliffs by adding that XSD to the OASIS schema; e.g. the SDL extensions to 1.2. When I'm customizing Xliff for a client, I normally use namespaces and provide a simple additional XSD; e.g.:
<trans-unit id="0" translate="yes" resname="msg_foo">
<superduper:uri>http://foo.bar/iJKLM9</superduper:uri>
<source>This is supposed to be a <superduper:g id="00001" newAtt="this is new attribute" xid="009"/> example.</source>
<target state="new"></target>
</trans-unit>
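For illustration, the companion XSD for that (made-up) superduper namespace could be as minimal as the sketch below; the namespace URI is a placeholder and only the elements used above are declared:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/superduper"
           elementFormDefault="qualified">
  <!-- extra per-unit metadata element -->
  <xs:element name="uri" type="xs:anyURI"/>
  <!-- custom inline element mirroring <g> plus the extra attributes -->
  <xs:element name="g">
    <xs:complexType>
      <xs:attribute name="id" type="xs:string" use="required"/>
      <xs:attribute name="newAtt" type="xs:string"/>
      <xs:attribute name="xid" type="xs:string"/>
    </xs:complexType>
  </xs:element>
</xs:schema>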
Most of the TMS/CAT tools are very basic (and closed) when it comes to their Xliff filters (or any of their filters for that matter) and I'm sorta kinda sure that they ignore your customized XSD.
Transolution is a very nice tool and my favorite Open Source translation tool. Unfortunately it's been long abandoned and has plenty of defects and shortcomings.
Anyway, if you provide a sample file, I can tell you what happens to non-conforming tags when it is imported into one of the common, major CAT tools.
One final note: <g> seems to be retired in XLIFF 2.0.
Forget about Transolution for one thing. I would try it with Trados, MemoQ or MemSource. Those are some of the big tools used in the translation business.
Personally, I have never received an XLIFF with an XSD, nor do I think Trados can handle it.
What do you need to change in the XLIFF?

How to read data from a .dat file?

How do I read data from .dat files?
I just tried this: Memo1.Lines.LoadFromFile('c:\myfile.dat'); but it did not work.
Note: the file type is binary.
Can anyone please help me? :)
#radick: to show the contents of a binary file in a memo control you must encode or convert the data to valid ASCII characters, turning it all into text, because you cannot load something that is not text into a text control.
You can find a very nice sample from Peter Below at this link: read a binary file and display the byte values as ASCII? (source: swissdelphicenter.ch)
Use the TStream descendants from the VCL Classes unit to read binary files.
There are plenty of Delphi examples of reading binary files with TStream that you can find using Google.
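A minimal sketch along those lines (the 16-bytes-per-line layout and the names here are just assumptions), reading the whole file through a TFileStream and showing its bytes as hex in the memo:
uses Classes, SysUtils, StdCtrls;

procedure ShowDatFileAsHex(const FileName: string; Memo: TMemo);
var
  Stream: TFileStream;
  Buffer: TBytes;
  i: Integer;
  Line: string;
begin
  // Read the raw bytes of the binary file
  Stream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    SetLength(Buffer, Stream.Size);
    if Length(Buffer) > 0 then
      Stream.ReadBuffer(Buffer[0], Length(Buffer));
  finally
    Stream.Free;
  end;

  // Render the bytes as hex text, 16 bytes per memo line
  Memo.Lines.Clear;
  Line := '';
  for i := 0 to High(Buffer) do
  begin
    Line := Line + IntToHex(Buffer[i], 2) + ' ';
    if (i + 1) mod 16 = 0 then
    begin
      Memo.Lines.Add(Line);
      Line := '';
    end;
  end;
  if Line <> '' then
    Memo.Lines.Add(Line);
end;
Call it as, e.g., ShowDatFileAsHex('c:\myfile.dat', Memo1);.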
--jeroen
You might look at this post as they seem to be discussing this very thing.

Alternatives to gettext?

Are there any general localization/translation alternatives to gettext?
Open source or proprietary doesn't matter.
When I say alternative to gettext, I mean a library for internationalization, with a localization backend of sorts.
The reason I'm asking is that (among other things) I find the way gettext does things slightly cumbersome and static, mostly in the backend bit.
First of all, I think gettext is one of the best options at this point.
You may take a look at Boost.Locale, which may provide a better API and uses gettext's dictionary model: http://cppcms.sourceforge.net/boost_locale/docs/ (not an official part of Boost, still beta).
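For instance, a minimal Boost.Locale sketch that loads a gettext-style dictionary could look like the following (the ./locale path and the myapp domain name are assumptions):
#include <boost/locale.hpp>
#include <iostream>

int main() {
    boost::locale::generator gen;
    // Directory layout assumed: ./locale/de_DE/LC_MESSAGES/myapp.mo
    gen.add_messages_path("./locale");
    gen.add_messages_domain("myapp");
    std::locale::global(gen("de_DE.UTF-8"));
    std::cout.imbue(std::locale());

    // Looks up the translation of "Hello" in the gettext dictionary
    std::cout << boost::locale::translate("Hello") << std::endl;
    return 0;
}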
Edit:
If you don't like gettext...
These are translation technologies:
OASIS XLIFF
GNU gettext po/mo files
POSIX catalogs
Qt ts/tm files
Java properties,
Windows resources.
Now:
The last two are total crap... very hard to use, translate and maintain, and they do not support plural forms.
Qt ts/tm -- requires use of the Qt framework. It has a model very similar to gettext's. Not a bad solution, but limited to Qt and not so useful in generic programs.
POSIX catalogs -- nobody uses them, and there is no plural-forms support. Crap.
OASIS XLIFF -- the "standard" solution; depends on XML, and even ICU requires compiling it to specific ICU resources for use. Translation tools are limited, and I don't know of any library that supports XLIFF. Plural forms are not so easy to use (ICU included some support only in the 4.x release).
Now what do we have?
GNU gettext: widely used, has great tools, has great plural-forms support, and is very popular in the translator community...
So decide: do you really think gettext is not such a good solution?
I don't think so. You haven't worked with the other solutions at all, so try to understand how it works in the first place.
Fluent is a new system that offers a number of features that gettext lacks. Where gettext supports pluralization, Fluent has a generic framework for text variants. Where gettext uses the "untranslated" string as its translation key, Fluent supports an abstract key (allowing multiple translations for something that just happens to be homonymous in the source language). Here is a more extensive comparison.
An example of a Fluent .ftl file, taken from Firefox's preferences codebase, looks like this:
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
blocklist-window =
    .title = Block Lists
    .style = width: 55em
blocklist-description = Choose the list { -brand-short-name } uses to block online trackers. Lists provided by <a data-l10n-name="disconnect-link" title="Disconnect">Disconnect</a>.
blocklist-close-key =
    .key = w
blocklist-treehead-list =
    .label = List
blocklist-button-cancel =
    .label = Cancel
    .accesskey = C
blocklist-button-ok =
    .label = Save Changes
    .accesskey = S
# This template constructs the name of the block list in the block lists dialog.
# It combines the list name and description.
# e.g. "Standard (Recommended). This list does a pretty good job."
#
# Variables:
# $listName {string, "Standard (Recommended)."} - List name.
# $description {string, "This list does a pretty good job."} - Description of the list.
blocklist-item-list-template = { $listName } { $description }
blocklist-item-moz-std-listName = Level 1 block list (Recommended).
blocklist-item-moz-std-description = Allows some trackers so fewer websites break.
blocklist-item-moz-full-listName = Level 2 block list.
blocklist-item-moz-full-description = Blocks all detected trackers. Some websites or content may not load properly.
Interesting comments about gettext() from all those pro-gettext() people.
I'm not saying that it isn't working right in most cases, but I tried to manage one project with it and quickly felt overwhelmed by how hard it is to use. Maybe there are a few user interfaces for translators today, but I did not even look. The extraction and merging of strings is just not doing it for me.
Now, I thank you, Artyom, for talking about XLIFF, which is a much better solution for my environment since everything is XML. Oh! And there are excellent editors out there. But if you like gettext() you won't find them. 8-)
I'll suggest looking at this one for example:
https://sourceforge.net/projects/wordforge2/
Now, this may be a nightmare for the programmer to make it all work, but what we want is a dream come true for the translators (and zero work for the programmer as translations pour in, because I can tell you that with gettext() I had to do all the work!).
There is an alternative from Zend that supports gettext *.po / *.mo files and many other formats.
Many Apache servers cache the translation files, because gettext is implemented as a module and the server has to be restarted to refresh the translation data.
The Zend implementation avoids this and supports many more formats:
http://framework.zend.com/manual/1.12/en/zend.translate.html
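For example, a rough sketch of the ZF 1.12 usage with the gettext adapter (the catalogue path and locale below are placeholders):
<?php
// Load a compiled gettext catalogue through Zend_Translate
$translate = new Zend_Translate(
    array(
        'adapter' => 'gettext',
        'content' => '/path/to/de.mo',
        'locale'  => 'de'
    )
);
echo $translate->translate('Hello');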

How does Acrobat encode annotations added as sticky notes to PDFs?

We have been reading and writing sticky notes/annotations/comments to PDFs via an ActiveX control in our application for a number of years. We have recently upgraded to Delphi 2009 with Unicode support. The following is causing problems.
When we call
CAcroPDAnnot.GetContents
The results seem to be rather strange and we lose our Unicode chars. It is not like saving as an ANSI string, which would usually result in ????? being returned; instead we get a string such as
‚És‚­“ú‚É•—Ž×‚ð‚Ђ¢‚½‚ç
For a string of Japanese characters.
However, if I save the comments in the PDF to a data file via the menu in the PDF itself, it is written to file as something like
0kˆL0Oeå0k˜¨ª0’0r0D0_0‰
The latter can be exported and reimported into an Acrobat PDF and will recreate the correct Unicode characters. However, once I call CAcroPDAnnot.GetContents in my code it comes back as something else.
Is CAcroPDAnnot.GetContents broken?
Is there an encoding scheme I should be aware of?
Is there an alternative approach I could use?
Thanks
‚És‚­“ú‚É•—Ž×‚ð‚Ђ¢‚½‚ç
That's the string:
に行く日に風邪をひいたら
in CP-932 aka Shift-JIS encoding, an awful but lamentably still-popular encoding in Japan.
You're currently interpreting it as CP-1252 (Windows Western European). If your PDF-reading component won't convert it for you automatically, you'll need to find a way to detect what encoding the document is in and convert it manually.
I don't know what Delphi provides for reading encodings, but have you got the encodings for Shift-JIS installed in Windows, from the Control Panel -> Regional Options -> "Install files for East Asian languages" option? If not, that might explain why it'd be failing to convert automatically, perhaps.
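If you can still get hold of the raw ANSI bytes the control returns, a rough Delphi 2009 sketch of that manual conversion (the function name is made up, and it assumes the bytes really are Shift-JIS) could be:
uses SysUtils;

function DecodeAsShiftJis(const RawBytes: AnsiString): string;
var
  Bytes: TBytes;
  Sjis: TEncoding;
begin
  // Copy the raw 8-bit data into a byte array without any character conversion
  SetLength(Bytes, Length(RawBytes));
  if Length(RawBytes) > 0 then
    Move(RawBytes[1], Bytes[0], Length(RawBytes));

  // 932 is the Windows code page for Shift-JIS
  Sjis := TEncoding.GetEncoding(932);
  try
    Result := Sjis.GetString(Bytes);  // proper UnicodeString result
  finally
    Sjis.Free;
  end;
end;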
You're not exactly giving us a lot of information to work with.
I take it you're talking about the "Acrobat.CAcroPDAnnot" class' method GetContents here. Which version of Acrobat are you using? Have you perhaps switched versions (or run an update) around the time you started programming with Delphi 2009?
Then: how did you instantiate the object? If using a *_TLB.pas file generated from the DLL, are you certain it still matches it? (Try re-generating it, if uncertain).
Third: how are you calling the method? What type of variable are you assigning the result to?
What might also help, is if you could provide a sample of an annotation (preferably including non-ASCII chars); and for that annotation:
what it should look like (and what it does look like inside Reader)
what it returns when using a pre-2009 version of Delphi*
what it returns when using Delphi 2009*
(* preferably the HEX byte codes of the (ansi/wide)strings; but output from the Ctrl-F7 inspector should do)
Then maybe someone could provide a more meaningful answer.
OK, one of the main differences between Delphi 2009 and the earlier versions is that the default string type is a Unicode string. That means that if you use the same ActiveX component as in previous versions, you are passing Unicode strings where ANSI strings are expected, and that is usually not a good idea.
There are a couple of solutions for this problem:
See if you can upgrade your ActiveX component so that it supports full Unicode strings.
Use AnsiString and not string to communicate with the ActiveX component. In this case, you can still use the old interface, but you are still bound to the same limitations.
Use another control that handles PDF. There is a lot to choose from, but be prepared to change a big chunk of your software. (Some controls are XML-based and use explicit encodings.)

Adding MS-Word-like comments in LaTeX

I need a way to add text comments in "Word style" to a LaTeX document. I don't mean comments in the source code of the document. What I want is a way to add corrections, suggestions, etc. to the document, so that they don't interrupt the text flow, but still make it easy for everyone to see which part of the sentence they relate to. They should also "disappear" when compiling the document for printing.
At first, I thought about writing a new command that would just forward the input to \marginpar{} and, when compiling for printing, would simply have an empty definition. The problem is that you have no guarantee where the comments will appear, and you would not be able to distinguish them from the other marginpars.
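For reference, this is roughly what I had in mind (just a sketch; the macro name and the draft switch are arbitrary):
\usepackage{xcolor}
\newif\ifdraftnotes
\draftnotestrue % change to \draftnotesfalse for the print version

% In draft mode, put the comment in the margin; otherwise drop it.
\newcommand{\mynote}[1]{%
  \ifdraftnotes
    \marginpar{\footnotesize\textcolor{red}{#1}}%
  \fi}

... as discussed earlier\mynote{Is this figure still needed?} ...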
Any idea?
todonotes is another package that makes nice looking callouts. You can see a number of examples in the documentation.
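For instance, a minimal usage sketch (loading the package with the disable option hides all the notes for the print version):
\usepackage{todonotes}  % or \usepackage[disable]{todonotes} for printing
...
This result needs checking.\todo{Verify this number against the latest data.}
\todo[inline]{Rewrite this paragraph before submission.}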
Since LaTeX is a text format, if you want to show someone the differences in a way that they can use them (and cherry pick from them) use the standard diff tool (e.g., diff -u orig.tex new.tex > docdiffs). This is the best way to annotate something like LaTeX documents, and can be easily used by anyone involved in the production of a document from LaTeX sources. You can then use standard LaTeX comments in your patch to explain the changes, and they can be very easily integrated. If the document lives in a version control system of some sort, just use the VCS to generate a patch file that can be reviewed.
I have used changes.sty, which gives basic change colouring:
\added{new text}
\deleted{old text}
\replaced{new text}{old text}
All of these take an optional parameter with the initials of the author who made the change. This results in different colours being used, and the initials are displayed as a superscript after the changed text.
\replaced[MI]{new text}{old text}
You can hide the change marks by giving the option final to the changes package.
This is very basic, and comments are not supported, but it might help.
My little home-rolled "fixme" tool uses \marginpar where possible and goes inline in places (like captions) where that is hard to arrange. This works out because I don't often use margin paragraphs for other things. This does mean you can't finalize the layout until everything is fixed, but I don't feel much pain from that...
Other than that I heartily agree with Michael about using standard tools and version control.
See also:
Tips for collaboratively editing a LaTeX document (which addresses your main question...)
https://stackoverflow.com/questions/193298/best-practices-in-latex
and a self-plug:
How do I get Emacs to fill sentences, but not paragraphs?
You could also try the trackchanges package.
You can use the changebar package to highlight areas of text that have been affected.
If you don't want to do the markup manually (which can be tedious and interrupt the flow of editing) the neat latexdiff utility will take a diff of your document and produce a version of it with markup added to visually display the changes between the two versions in the typeset output.
This would be my preferred solution, although I haven't tested it out on large, multi-file documents.
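A typical run (file names are placeholders) is simply:
latexdiff old.tex new.tex > diff.tex
pdflatex diff.tex
and the resulting diff.tex compiles like any other document, with additions and deletions marked up in the typeset output.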
The best package I know of is Easy Review, which provides commenting functionality in the LaTeX environment. For example, you can use simple commands such as \add{NEW TEXT}, \remove{OLD TEXT}, \replace{OLD TEXT}{NEW TEXT}, \comment{TEXT}{COMMENT}, \highlight{TEXT}, and \alert{TEXT}.
Some examples can be found here.
The todonotes package looks great, but if that proves too cumbersome to use, a simple solution is just to use footnotes (e.g. in red to separate them from regular footnotes).
The trackchanges.sty package works in exactly the same way as changes.sty; see Svante's reply.
It has easy-to-remember commands, and you can change how the edits will appear after compiling the document. You can also hide the edits for printing.
