Can EDI files have ~ in data? - edi

I'm parsing an EDI file and splitting on ~s. I am wondering if it's possible for EDI to have a ~ in the data itself? Is there a rule that says no ~ in the data? This is for 810/850, etc.

The value defined in the 106th character of the ISA segment (or, alternatively – to be a bit less brittle to whitespace issues – the 1st character after the ISA16 element) is the segment delimiter (in official terms: the segment terminator). Most of the time people specify the ~ character, but other choices are certainly valid.
In this example, the 106th character is ~:
ISA*00* *00* *ZZ*AMAZONDS *01*TESTID *070808*1310*U*00401*000000043*1*T*+~
Instead of counting 106 characters (which, again, can be brittle to whitespace issues), you can count 16 elements – that is, 16 asterisks – to find the value for ISA16 (which is +), and then pick the next character (which is ~).
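That element-counting logic is easy to sketch (Python used for illustration; the function name is mine, not from any EDI library):

```python
def isa_delimiters(interchange):
    """Derive the X12 delimiters from the ISA segment.
    Assumes a well-formed interchange; the element separator
    is the 4th character (the one right after "ISA")."""
    element_sep = interchange[3]
    # ISA16 is the 16th element, so skip past 16 element separators
    idx = -1
    for _ in range(16):
        idx = interchange.index(element_sep, idx + 1)
    return {
        "element": element_sep,
        "component": interchange[idx + 1],  # the ISA16 value
        "segment": interchange[idx + 2],    # the segment terminator
    }

isa = "ISA*00* *00* *ZZ*AMAZONDS *01*TESTID *070808*1310*U*00401*000000043*1*T*+~"
print(isa_delimiters(isa))  # {'element': '*', 'component': '+', 'segment': '~'}
```

Counting elements rather than characters keeps this robust even when the fixed-width padding inside ISA has been mangled.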
There are two relevant sections in the official X12 specification:
12.5.4.3 Delimiter Specifications
The delimiters consist of three separators and a terminator. The
delimiters are devised for inclusion within the data stream of the transfer. The delimiters are:
segment terminator [note: this is the one we're discussing]
data element separator
component element separator
repetition separator
The delimiters are assigned by the interchange sender. These characters are disjoint from those of the data elements; if a character is selected for the data element separator, the component element separator, the repetition separator or the segment terminator from those available for the data elements, that character is no longer available during this interchange for use in a data element. The instance of the terminator (<tr>) must be different from the instance of the data element separator (<gs>), the component element separator (<us>) and the repetition separator (<rs>). The data element separator, component element separator and repetition separator must not have the same character assignment.
So, according to this part of the spec, if the ~ is used as the segment terminator, then the use of the ~ is disallowed in a data element (that is, the textual body).
Now, let's look at section 12.5.A.5 – Recommendations for the Delimiters:
Delimiter characters must be chosen with care, after consideration of data content, limitations of the transmission protocol(s) used, and applicable industry conventions. In the absence of other guidelines, the following recommendations are offered:
<tr> terminator: ~ | Note: the "~" was chosen for its infrequency of use in textual data.
This section is saying that ~ was chosen as the default because ~ is seldom found in textual data (it would have been a bad idea, for example, to use . as the default, since that's such a common inclusion).
That said, even though using the segment terminator is technically prohibited, it's still possible for an EDI transmission to inadvertently include ~ in the textual data – in other words, your trading partner may include this by accident. Further, the BIN and BSD (binary data) segments can certainly include ~ (though these may not apply based on the transaction sets you're working with).
In our parsing API, we apply a specific set of patterns based on the type of segment we encounter. For us, it's not sufficient to split naively on the segment delimiter alone, because we may encounter binary segments (BIN, BSD) where the segment delimiter character can appear inside the data.
For a regular segment (i.e. not BIN or BSD), the logic is something like this:
Consume the segment code (i.e. the characters before the first element delimiter).
Consume each element of the segment based on the element delimiter.
Stop if the next character is a segment delimiter or a new line.
As an example, for segment BEG*PO-00001**20210901~, the process would look like:
Consume BEG. Since this is not a special segment (BIN or BSD), consume elements by splitting on *.
Consume PO-00001.
Consume ''.
Consume 20210901.
Stop since next char is ~.
(The pattern for binary segments is different from the pattern we use for regular segments.)
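The steps above can be sketched as a naive tokenizer for regular segments (a minimal illustration, assuming ~ and * as the delimiters and ignoring binary segments entirely):

```python
def split_segments(body, seg_term="~", elem_sep="*"):
    """Tokenize regular (non-BIN/BSD) segments: split on the segment
    terminator, then take the segment code and split the remainder
    on the element separator. Newlines around segments are ignored."""
    segments = []
    for raw in body.split(seg_term):
        raw = raw.strip("\r\n")
        if not raw:
            continue
        code, _, rest = raw.partition(elem_sep)
        segments.append((code, rest.split(elem_sep) if rest else []))
    return segments

print(split_segments("BEG*PO-00001**20210901~"))
# [('BEG', ['PO-00001', '', '20210901'])]
```

A real parser would branch to a different pattern when `code` is BIN or BSD, since the binary payload may contain the terminator.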
Here's an example of how our parser "fails" on a ~ in the textual data when the ISA16 segment delimiter is also ~; the JSON representation is particularly helpful for seeing the issue.
Here's an example of our parser succeeding on a ~ in the textual data when the ISA16 segment delimiter is ^.
Lastly, here's an example of our parsing succeeding where the ~ is specified in ISA16, but has been omitted altogether in favor of newlines – which we see occasionally.
Hope this helps.

Related

Antlr differentiating a newline from a \n

Let's say I have the following statement:
SELECT "hi\n
there";
Notice there is a literal newline in there, and the escape \n. The string that antlr4 picks up for me is:
String_Literal: "hi\n\nthere"
In other words, not differentiating between the literal newline and the \n one. Is there a way to differentiate the two, or what's the usual process to do that?
My guess is that the output you pasted into your question comes from a call to the Antlr4 runtime method tree.toStringTree(parser) (or equivalent in whatever target language you've chosen).
That function calls escapeWhitespace in the utilities class/module/file, and that function does what its name suggests: it converts (some) whitespace characters to C-like backslash escape sequences. (Specifically, it handles newline, carriage return, and tab characters.) It does not escape backslash characters, which makes its output ambiguous; there's no way to distinguish between the two-character escape sequence \n and the escaped conversion of a newline character in the message.
They are different in the actual character string, because the Antlr4 lexer does not transform the string value of the matched token in any way. That's your responsibility.
In computing, it is very often the case that what you see is not what you got. What you see is just what you see, and a lot of computational power has gone into creating that vision for you. By the same token, nothing guarantees that the vision is an unambiguous, or even useful, representation of the actual values. The best you can say for it is that it's probably more useful than trying to read the data as individual bits. (And, indeed, the individual bits are not physical objects either; despite the common refrain, you could completely disassemble a computer and examine it with an arbitrarily powerful microscope, and you will not see a single 1 or 0.)
That might seem like irrelevant philosophizing, but it has a real consequence: when you're debugging and you see something that makes you think, "that looks wrong", you need to consider two possibilities: maybe the underlying data is incorrect, but maybe it's the process which rendered the representation that is at fault. In this case, I'd say that the failure of escapeWhitespace to convert backslash characters into pairs of backslashes is a bug, but that's a value judgement on my part. Anyway, the function is not critical to the operation of Antlr4, and you could easily replace it.
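For instance, a replacement (sketched here in Python rather than the Java runtime, purely as an illustration) only needs to escape the backslash itself before handling the whitespace characters, which removes the ambiguity:

```python
def escape_whitespace_unambiguous(s):
    """Like Antlr's escapeWhitespace, but escape the backslash first,
    so a literal backslash-n in the input cannot be confused with an
    escaped newline in the output."""
    return (s.replace("\\", "\\\\")
             .replace("\n", "\\n")
             .replace("\r", "\\r")
             .replace("\t", "\\t"))

# Input contains a literal backslash-n followed by a real newline:
print(escape_whitespace_unambiguous("hi\\n\nthere"))
```

With this version, the literal sequence comes out as `\\n` and the real newline as `\n`, so the rendering is reversible.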

Can tab characters appear in an iso-8859-8 file?

I have a file that I believe to be in the ISO-8859-8 format. However, it has tabs in it, which doesn't seem to appear in this character set:
https://en.wikipedia.org/wiki/ISO/IEC_8859-8
Does this mean that the file isn't in the ISO-8859-8 format after all? Can ISO-8859-8 encoded characters be combined with tabs?
Yes.
The tab (\t) character is one of the standard C0 control codes, along with Null (\0), Bell/Alert (\a), Backspace (\b), Line Feed (\n), Vertical Tab (\v), Form Feed (\f), Carriage Return (\r), Escape (\x1B), etc.
According to Wikipedia's page on ISO/IEC 8859:
The ISO/IEC 8859 standard parts only define printable characters, although they explicitly set apart the byte ranges 0x00–1F and 0x7F–9F as "combinations that do not represent graphic characters" (i.e. which are reserved for use as control characters) in accordance with ISO/IEC 4873; they were designed to be used in conjunction with a separate standard defining the control functions associated with these bytes, such as ISO 6429 or ISO 6630. To this end a series of encodings registered with the IANA add the C0 control set (control characters mapped to bytes 0 to 31) from ISO 646 and the C1 control set (control characters mapped to bytes 128 to 159) from ISO 6429, resulting in full 8-bit character maps with most, if not all, bytes assigned. These sets have ISO-8859-n as their preferred MIME name or, in cases where a preferred MIME name is not specified, their canonical name. Many people use the terms ISO/IEC 8859-n and ISO-8859-n interchangeably.
IOW, even though the official character chart only lists the printable characters, the C0 control characters, including Tab, are for all practical purposes part of the ISO-8859-n encodings.
Your linked article even explicitly says so.
ISO-8859-8 is the IANA preferred charset name for this standard when supplemented with the C0 and C1 control codes from ISO/IEC 6429.
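As a quick sanity check (Python shown purely as an illustration), the iso8859_8 codec happily round-trips a tab alongside Hebrew text, because the C0 controls pass straight through:

```python
# Tab (0x09) is a C0 control; the IANA "ISO-8859-8" charset is the
# 8859-8 graphic set plus the C0/C1 control codes, so this is valid.
data = "\tשלום".encode("iso8859_8")
print(hex(data[0]))                     # 0x9 -- the tab byte
print(data.decode("iso8859_8") == "\tשלום")  # True
```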
An "ISO 8859-8 file" is usually interpreted as a file which may contain the standard C0 and C1 control codes and DEL. So you can use control characters without problems.
But technically, ISO 8859-8 just defines the printable characters shown in Wikipedia. Remember that at the time, files were less important than the transmission of data between different systems (the transmitted data might be stored as a native, transcoded file, so the concept of a "file encoding" mattered less). Hence ISO 2022 and ISO 4873, which define the basic framework of encodings for transmitting data, including the "ANSI escape sequences". With such sequences you can redefine how the C0 and C1 sets and the two letter blocks (G0, G1) are used.
So your system might use ASCII for the initial communication, then switch C0 for better control between the systems, and then load G0 and G1 with ISO 8859-8 so you can transmit your text file (and then maybe another encoding for a second stream of data in another language).
So, technically, the Wikipedia tables are correct, but nowadays we share files without transcoding them (and thus without switching encodings via ANSI escape sequences), so we use "ISO 8859-8 file" to describe a file of ISO 8859-8 graphic characters (G0 and G1) plus a few extra control characters (usually TAB and NL (CR, LF), sometimes also NUL and VT). This is also what the string iso-8859-8 used by IANA (and so by web browsers and email) implies. But note: usually you cannot use all the C0 and C1 control codes. Some are forbidden by standards, and some should not (usually) be used in files; e.g. ANSI escape sequences and NUL bytes may be misinterpreted or discarded (possibly creating a security problem).
In short: ISO 8859-8 technically does not define control codes, but we usually allow some of them in files (TAB is one of them). Check the file protocol to know which control codes are allowed (please, no BEL or ANSI escape sequences).

GREP: can you have 2 positive lookahead token/argument pairs in a legal GREP query?

I'm having trouble finding a sequence that works in InDesign to move full stops (US English: periods) before footnotes when they trail the superscripted endnote references to a word/phrase.
So, make this foo<sup>23,25</sup>. into that foo.<sup>23,25</sup> (the tags aren't literally there; they just indicate to you, the reader, that these are numbers in superscript, since Markdown doesn't do superscript, I think).
Because my positive lookbehind is not working, I'm looking to use a sequence of two or more positive lookbehind tokens, but is this within the rules?
I wrote a GREP token that hits all the endnote references, whatever combination of spaces, commas and digits. But I can't replace with the found text in InDesign because it breaks all the hyperlinks to the endnotes. So I need to use positive lookahead and positive lookbehind to move the full stops: first remove the existing one, then add the new one before the endnotes. But the same token, say this one of many possible to pick up any of
{n, n, n…} —> \d[\d\, \,]+ (and I add '\.' to catch the period), will not get a single hit as an argument for a positive-lookbehind token.
i.e. (?<=\d[\d\, \,]+)\. doesn't get a hit. I've tried various variations too, and lookahead. What about what ID calls "unmarking subexpressions", which TextWrangler I think refers to as Perl-style pattern expressions?
I can use negative lookbehind to find periods following digits+ i.e. (?<![a-zA-Z])\. but it won't give me the entire endnotes references sequence to mark and put a period preceding it.
This GREP is all executed within Adobe InDesign layout software, so no command line execution. It's okay if I use two operations not all done with one Find/Change operation. First add preceding period. Second remove trailing period.
I want to remove the trailing period and add one before any given series of endnote reference numbers and commas. The central problem is that found hits on endnote strings CANNOT be used in the Change To field as found strings, because that would strip their (hidden) indexing as Cross-References linking them to Endnotes, which would break the hyperlink connections in the exported PDF (amongst other reasons).
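For what it's worth, the failure is consistent with how many regex engines handle lookbehind: the lookbehind pattern must have a fixed width, and \d[\d\, \,]+ is variable-width. A quick sketch in Python's re module (InDesign's GREP engine may behave differently, so treat this only as an illustration of the general restriction):

```python
import re

# Variable-width lookbehind is rejected by many engines, including Python's re:
try:
    re.compile(r"(?<=\d[\d, ]+)\.")
except re.error as exc:
    print(exc)  # e.g. "look-behind requires fixed-width pattern"

# Outside InDesign, where rewriting the matched text is harmless,
# a capture group moves the period in one pass:
moved = re.sub(r"(\d[\d, ]*)\.", r".\1", "foo23,25.")
print(moved)  # foo.23,25
```

The capture-group approach is exactly what the question rules out for InDesign (the rewrite destroys the cross-reference indexing), which is why a two-operation lookaround workflow is being sought there.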

How many chars can numeric EDIFACT data elements be long?

In EDIFACT there are numeric data elements, specified e.g. as format n..5 -- we want to store those fields in a database table (with alphanumeric fields, so we can check them). How long must the db-fields be, so we can for sure store every possible valid value? I know it's at least two additional chars (for decimal point (or comma or whatever) and possibly a leading minus sign).
We are building our tables after the UN/EDIFACT standard we use in our message, not the specific guide involved, so we want to be able to store everything matching that standard. But documentation on the numeric data elements isn't really straightforward (or at least I could not find that part).
Thanks for any help
I finally found the information on the UNECE web site, in the documentation on UN/EDIFACT rules, Part 4 (UN/EDIFACT rules), Chapter 2.2 (Syntax Rules). They don't say it directly, but when you put all the parts together, you get it. See TOC entry 10: REPRESENTATION OF NUMERIC DATA ELEMENT VALUES.
Here's what it basically says:
10.1: Decimal Mark
The decimal mark must be transmitted (if needed) as specified in UNA (comma or point, but always one character). It shall not be counted as a character of the value when computing the maximum field length of a data element.
10.2: Triad Separator
Triad separators shall not be used in interchange.
10.3: Sign
[...] If a value is to be indicated to be negative, it shall in transmission be immediately preceded by a minus sign e.g. -112. The minus sign shall not be counted as a character of the value when computing the maximum field length of a data element. However, allowance has to be made for the character in transmission and reception.
To put it together:
Other than the digits themselves, there are only two (optional) characters allowed in a numeric field: the decimal separator and a minus sign (no blanks are permitted between any of the characters). These two extra characters are not counted against the maximum length of the value in the field.
So the maximum number of characters in a numeric field is the maximum length of the field plus 2. If you want your database to be able to store every syntactically correct value transmitted in a field specified as n..17, your column would have to be 19 characters long (something like varchar(19)). Any EDIFACT message with a value longer than 19 characters in a field specified as n..17 does not need to be stored in the DB for semantic checking, because it is already syntactically wrong and can be rejected.
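Putting the rule into code, a small sketch (the function name and the regex are mine, not from the standard) that validates a transmitted value against an n..N spec, counting only the digits toward the length:

```python
import re

def fits_numeric_field(value, max_digits):
    """Check a transmitted EDIFACT numeric value against an n..max_digits
    spec: optional leading minus, optional single decimal mark (comma or
    point), and the sign/mark do not count toward the field length."""
    if not re.fullmatch(r"-?\d+(?:[.,]\d+)?", value):
        return False
    return sum(c.isdigit() for c in value) <= max_digits

print(fits_numeric_field("-112", 5))      # True  (example from section 10.3)
print(fits_numeric_field("123,45", 5))    # True  (5 digits, mark not counted)
print(fits_numeric_field("123456", 5))    # False (6 digits)

# storage width for n..17: 17 digits + sign + decimal mark
print(17 + 2)  # 19
```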
I used EDI Notepad from Liaison to solve a similar challenge. https://liaison.com/products/integrate/edi/edi-notepad
I recommend anyone looking at EDI to at least get their free (express) version of EDI Notepad.
The "high end" version (EDI Notepad Productivity Suite) of their product comes with a "Dictionary Viewer" tool that you can export the min / max lengths of the elements, as well as type. You can export the document to HTML from the Viewer tool. It would also handle ANSI X12 too.

When to use the terms "delimiter," "terminator," and "separator"

What are the semantics behind usage of the words "delimiter," "terminator," and "separator"? For example, I believe that a terminator would occur after each token and a separator between each token. Is a delimiter the same as either of these, or are they simply forms of a delimiter?
SO has all three as tags, yet they are not synonyms of each other. Is this because they are all truly different?
A delimiter denotes the limits of something, where it starts and where it ends. For example:
"this is a string"
has two delimiters, both of which happen to be the double-quote character. The delimiters indicate what's part of the thing, and what is not.
A separator distinguishes two things in a sequence:
one, two
1\t2
code(); // comment
The role of a separator is to demarcate two distinct entities so that they can be distinguished. (Note that I say "two" because in computer science we're generally talking about processing a linear sequence of characters).
A terminator indicates the end of a sequence. In a CSV, you could think of the newline as terminating the record on one line, or as separating one record from the next.
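Both views of the newline are visible directly in, say, Python, where splitlines treats \n as a terminator and split treats it as a separator:

```python
text = "line1\nline2\n"

# As terminators: two lines, each ended by \n
print(text.splitlines())  # ['line1', 'line2']

# As separators: splitting yields a trailing empty string
print(text.split("\n"))   # ['line1', 'line2', '']
```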
Token boundaries are often denoted by a change in syntax classes:
foo()
would likely be tokenised as word(foo), lparen, rparen - there aren't any explicit delimiters between the tokens, but a tokenizer would recognise the change in grammar classes between alpha and punctuation characters.
The categories aren't completely distinct. For example:
[red, green, blue]
could (depending on your syntax) be a list of three items; the brackets delimit the list and the right-bracket terminates the list and marks the end of the blue token.
As for SO's use of those terms as tags, they're just that: tags to indicate the topic of a question. There isn't a single unified controlled vocabulary for tags; anyone with enough karma can add a new tag. Enough differences in terminology exist that you could never have a single controlled tag vocabulary across all of the topics that SO covers.
Technically a delimiter goes between things, perhaps in order to tell you where one field ends and another begins, such as in a comma-separated-value (CSV) file.
A terminator goes at the end of something, terminating the line/input/whatever.
A separator can be a delimiter or anything else that separates things. Consider the spaces between words in the English language for example.
You could argue that a newline character is a line terminator, a delimiter of lines or something that separates two lines. For this reason there are a few different newline-type characters in the Unicode specification.
A delimiter is one or two markers that show the start and end of something. They're needed because we don't know how long that 'something' will be. We can have either: 1. a single delimiter, or 2. a pair of pair-delimiters
[a, b, c, d, e] each comma (,) is a single delimiter. The left and right brackets, ([, ]) are pair-delimiters.
"hello", the two quote symbols (") are pair-delimiters
A separator is a synonym of "delimiter", but in my experience it usually refers to field delimiters. A field delimiter acts as a divider between one field and the one following it, which is why it can be thought of as "separating" them.
<file1>␜<file2>␜<file3>: the file separator character (␜), despite the name explicitly containing "separator", is both a delimiter and a separator.
A terminator marks the end of a group of things, again needed because we don't know how long it is.
abdefa\0, here the null character \0 is a terminator that tells us the string has ended.
foo\n, here the newline character \n is a terminator that tells us the line has ended.
The terms delimiter and separator originate from the classical idea of storage being conceptually comprised of files, records, and fields (a file has many records, a record has many fields). In this context, single delimiters and pair-delimiters might be called record delimiters and field delimiters. Because of the historical significance of the file-record-field taxonomy, these terms have a more widespread usage (see the Wikipedia page for Delimiter).
Below are two files, each with three records with each record having four fields:
martin,rodgers,33,28000\n
timothy,byrd,22,25000\n
marion,summers,35,37000\n
===
lucille,rowe,28,33000\n
whitney,turner,24,19000\n
fernando,simpson,35,40900\n
Here, , and \n, as we know, are single delimiters, but they might also be called field delimiters and record delimiters respectively.
For complex nested structures, a terminator can also be a delimiter/separator (they're not mutually exclusive definitions). In the previous example, the === marker, seen from inside a file, could be considered a terminator (it marks the end of the file). But when we look at many files, the === acts like a delimiter/separator.
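Parsing the two files above makes the nesting concrete: split on the file marker, then on the record delimiter, then on the field delimiter (a minimal sketch, using === as the file marker as above):

```python
text = ("martin,rodgers,33,28000\n"
        "timothy,byrd,22,25000\n"
        "marion,summers,35,37000\n"
        "===\n"
        "lucille,rowe,28,33000\n"
        "whitney,turner,24,19000\n"
        "fernando,simpson,35,40900\n")

# file marker -> record delimiter (\n) -> field delimiter (,)
files = text.split("===\n")
records = [[line.split(",") for line in f.splitlines()] for f in files]
print(len(records))      # 2 files
print(records[0][0])     # ['martin', 'rodgers', '33', '28000']
```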
Consider lines in a UNIX file
This is line 1\n
This is line 2\n
This is line 3\n
The newlines are both terminators (they tell us where the string ends) and are delimiters (they tell us where each line begins and ends). From Wikipedia:
Two ways to view newlines, both of which are self-consistent, are that newlines either separate lines or that they terminate lines.
Really, you'll only need to say "terminator" when you're talking about one individual item (just one string 1234\0, just one line abcd\n, etc.), and it will be unclear whether the terminator in this context could also be a delimiter in a more complex parent structure.
This answer is in the context of CSV, because all of the other answers focus on English-language examples instead.
Delimiters are all elements mentioned in the given CSV specification that describe the boundaries of stuff, separator is a common name for field delimiters, terminator is a common name for record delimiters.
Delimiter is a part of CSV format specification, it defines boundaries and doesn't have to be a printable character.
Terminators, separators, and field qualifiers are all delimiters, but they are not necessary to specify a CSV format. For example, a 10-column field length and a 30-column record length mean that every 30 columns form one record and every 10 columns form one field (usually padded with whitespace). In other words, a CSV format without separators has a constant field and record length, e.g.:
will smith 1 chris rock 0
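Such a fixed-width record is parsed by position alone; here is a sketch where the 10-column field width is hypothetical, chosen to match the example:

```python
# one 30-character record: three fields of 10 characters, space-padded
record = "will      smith     1         "
fields = [record[i:i + 10].rstrip() for i in range(0, 30, 10)]
print(fields)  # ['will', 'smith', '1']
```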
Terminator is a delimiter that marks the end of a single CSV record and is usually represented either by Line Feed (LF), a Carriage Return (CR) or a combination of both (e.g. CRLF), e.g.:
will smith 1
chris rock 0
Separator is a delimiter that marks the division between CSV fields and is most often represented by a comma (or a semicolon), it has been introduced to store dynamic length values, e.g. two comma separated records in CSV format with CRLF terminator after 1 and 0:
will,smith,1
chris,rock,0
Field qualifier is a delimiter usually used in pairs instead of an escape sequence. It is a printable character that isn't allowed in the field value (unless the given CSV format specification provides an escape sequence) and marks the beginning and the end of a field. It was introduced to store values containing separators, e.g. this CSV has 2 records with 3 fields each, but the 3rd field value can contain a semicolon that otherwise acts as a field separator:
will;smith;"rich;famous;slaps people"
chris;rock;"rich;famous;gets slapped"
Escape sequence is a character (or a set of characters) that marks anything that follows the escape sequence as non-significant and therefore as a part of the field value (e.g. backslash might specify the immediately following separator as a part of the value). This sequence can escape one or multiple characters, e.g. CSV with \ as a 1 character escape sequence:
will;smith;rich\;famous\;slaps people 100\\100% of time
chris;rock;rich\;famous\;slaps people 0\\100% of time
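Both conventions are directly supported by, for example, Python's csv module (shown here as an illustration, using the sample rows above):

```python
import csv

# Field qualifier: the default quotechar '"' protects embedded separators
qualified = ['will;smith;"rich;famous;slaps people"']
print(next(csv.reader(qualified, delimiter=";")))
# ['will', 'smith', 'rich;famous;slaps people']

# Escape sequence: a backslash escapechar protects them instead
escaped = ["will;smith;rich\\;famous\\;slaps people"]
print(next(csv.reader(escaped, delimiter=";", escapechar="\\",
                      quoting=csv.QUOTE_NONE)))
# ['will', 'smith', 'rich;famous;slaps people']
```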
Delimiter
There are a couple of senses for delimiter:
As the space used in sentences (frontier).
A delimiter is like a frontier, it exists between countries.
In that sense, there must be two countries to have a frontier.
A space usually exists between words, but not at the end. The space delimits words but does not terminate sentences (collections of words). The sentence:
This is a short sentence.
Has four spaces, they act as word delimiters. There is no ending space.
In fact, there are two additional delimiters that usually go unnamed: the start and end of the sentence, like the ^ and $ used in regular expressions to mark the start and end of a string of text.
And, in human language, punctuation marks (period, comma, semicolon, colon, etc.) also serve as word delimiters (in addition to spaces).
As used in quotes (boundary).
A sentence like:
“This is a short sentence.”
Is delimited (start and end) by the double quotes (“”). In this sense it is like "balanced delimiters" (Balanced Brackets in Wikipedia).
Some may argue the frontier and boundary are essentially the same, and, under some conditions they actually are correct.
Separator
Is exactly the same as the first sense (above) of a delimiter (a frontier).
So, a separator is a synonym of delimiter in many computer uses.
Terminator
Demarcates the end of an individual "field".
Like the newlines in a Unix text file. Each line is terminated by a NewLine (\n).
In a proper Unix text file all lines are terminated (even the last one).
Like paragraphs are terminated by a newline in human language.
Or, more strictly, as the NUL (\0) is the terminator of a C string:
A string is defined as a contiguous sequence of code units terminated by the first zero code unit (often called the NUL code unit).
So, a terminator character is also a delimiter but must also appear at the end.
Tags
Stackoverflow has tags only for delimiters and separators
delimiterA delimiter is a sequence of one or more characters used to specify the boundary between separate, independent regions in plain text or other data streams.
separatorA character that separates parts of a string.
The terminator tag only applies to a terminal emulator:
terminatorTerminator is a GPL terminal emulator.
And, yes, delimiter and separator are many times equivalent
except for the parenthesis, braces, square brackets and similar balanced delimiters.
Interesting question and answers. To summarize, 1) delimiter marks the "limits" of something, i.e. beginning and/or end; 2) terminator is just a special term for "end delimiter"; 3) separator entails there are items on both sides of it (unlike delimiter).
Best example I can think of for a start delimiter is the start-comment markers in programming languages ("#", "//", etc.).
Best example I can think of for a terminator (end delimiter) is the newline character in Unix. It's a misnomer -- it always terminates a (possibly empty) line but doesn't always start a new line, i.e. when it is the last character in a file. Maybe a better common example is the simple period for sentences.
Best example I can think of for a separator is the simple comma. Note that comma never appears in English without text both before and after it.
Interesting to note that none of these is necessarily limited to single-character. In fact awk (or maybe only gawk?) in Unix allows FS (field separator) to be any regexp.
Also, although "any non-zero amount of whitespace" is considered a "word delimiter" in e.g. the wc command, there are also zero-width "word boundary" specifiers in regexps (e.g. \b). Interesting to ponder whether such zero-width items/boundaries could be considered "delimiters" as well. I tend to think not (too much of a stretch).
Terminators are separators when you start with empty. A;B;C; is actually A;B;C;empty.
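That view is easy to demonstrate: splitting on a terminator-style character leaves a trailing empty item.

```python
print("A;B;C;".split(";"))  # ['A', 'B', 'C', ''] -- the trailing ; yields an empty item
```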
Just like the English language, there is the technically correct answer, and the generally used answer, and it is probably relevant to isolate to the programming usage of the term definitions being sought.
The industry has long used the phrase 'Comma Delimited' file to mean:
FirstRowFirstValue,FirstRowSecondValue,FirstRowThirdValue
SecondRowFirstValue,SecondRowSecondValue,SecondRowThirdValue
TECHNICALLY, this is a Comma 'SEPARATED' list.
TECHNICALLY, THIS is a Comma 'DELIMITED' list.
,FirstRowFirstValue,FirstRowSecondValue,FirstRowThirdValue,
,SecondRowFirstValue,SecondRowSecondValue,SecondRowThirdValue,
or this:
,FirstRowFirstValue,,FirstRowSecondValue,,FirstRowThirdValue,
,SecondRowFirstValue,,SecondRowSecondValue,,SecondRowThirdValue,
and nobody does that. Ever.
And the industry standard is to use 'TEXT QUALIFIER' for the TECHNICAL definition of a 'DELIMITER' where (") is the 'TEXT QUALIFIER' and (,) is called the 'DELIMITER'.
FirstRowFirstValue,"First Row Second Value",FirstRowThirdValue
SecondRowFirstValue,SecondRowSecondValue,SecondRowThirdValue
Adding to the answers already here, I've used the term notator.
Annotation is a superset of notation.
A notator is a superset of delimiter.
A delimiter is a superset of terminator and separator.
Annotation is all notation and markup used in a particular document. For example, a "TODO List" document must be a line separated list of strings.
Notation is markup used to denote specific meaning. For example, "string are in quotes" is a notation.
A delimiter is the character or set of characters used to denote a notation. For example, the character quote is the delimiter for strings.
A terminator is the ending delimiter, and a prefix is the starting delimiter. For the "TODO List" document, a quote may be used as both the prefix and the terminating delimiter.
A separator is a delimiter that separates two things. For example, "new line" is the separator for each "TODO List" item. In this example, "new line" is also a terminator; a new line may be used to terminate each line. A separator also being a terminator is typical, but not guaranteed to always be the case.
Delimiters can also be "positional". A positionally delimited example is a column delimited mainframe flat file.
"word 1", "word 2" \NULL
The words are delimited by quotes,
separated by the comma,
and the whole thing is terminated by \NULL.