Implementation of X12 274 NM1 Provider record - EDI

I'm looking at page 62 of the spec and see
• Syntax item P0809 (“If either NM108 or NM109 is present, then the other is required.”)
• The DIAGRAM section (and following text on page 63) indicates that both NM108 and NM109 are Required (not Situational) fields
Which seems to read:
• Since NM108 and NM109 are Required, the elements must be present even if they are not valued.
• Syntax item P0809 adds the further rule that either both must be valued or both must be left unvalued.
By these rules, these NM1 segments are valid:
NM1✽1P✽1✽MARTIN✽NANCY✽T✽✽✽FI✽123456789~
NM1✽1P✽1✽MARTIN✽NANCY✽T✽✽✽✽~
Whereas these are not valid:
NM1✽1P✽1✽MARTIN✽NANCY✽T✽✽✽FI✽~
(“If either NM108 or NM109 is present, then the other is required.”)
NM1✽1P✽1✽MARTIN✽NANCY✽T✽✽✽✽123456789~
(“If either NM108 or NM109 is present, then the other is required.”)
NM1✽1P✽1✽MARTIN✽NANCY✽T~
(NM108 and NM109 are Required fields)
I have a "discussion" going on with a consultant who says that
NM1✽1P✽1✽MARTIN✽NANCY✽T~
is valid when NM108 and NM109 are not valued.
I do not think this is right, but it is currently logged against me as a defect. How should this logic go?

The pseudo code is as follows
ValidP08P09 := Not ( IsEmpty(NM108) XOR IsEmpty(NM109) )
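For what it's worth, that check is easy to make executable. Here is a minimal sketch in Lua (the function names are mine, and it encodes only the P0809 rule, treating an element that is present but not valued as empty):

-- P0809: if either NM108 or NM109 is valued, the other must be too.
-- Equivalent to Not(IsEmpty(NM108) XOR IsEmpty(NM109)) above.
local function isEmpty(v)
  return v == nil or v == ""
end

local function validP0809(nm108, nm109)
  return isEmpty(nm108) == isEmpty(nm109)  -- both valued or both empty
end

print(validP0809("FI", "123456789")) -- true  (both valued)
print(validP0809("",   ""))          -- true  (both empty)
print(validP0809("FI", ""))          -- false (only NM108 valued)
print(validP0809("",   "123456789")) -- false (only NM109 valued)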

Lua length operator (#) with nil values

After reading this topic and after experimenting a bit, I am trying to understand how the Lua length operator works when a table contains nil values.
Before I started to investigate, I thought that the length was simply the number of consecutive non-nil elements, starting at index 1:
print(#{nil}) -- 0
print(#{"o"}) -- 1
print(#{"o",nil}) -- 1
print(#{"o","o"}) -- 2
print(#{"o","o",nil}) -- 2
That looks pretty simple, right?
But my headache started when I accidentally added an element after a nil-terminated table:
print(#{"o",nil,"o"})
My guess was that it should probably print 1 because it would stop counting when the first nil is found. Or maybe it should print 2 if the length operator is greedy enough to look for non-nil elements after the first nil. But the above code prints 3.
So I ran several other tests to see what happens:
-- nil before the end
print(#{nil,"o"}) -- 2
print(#{nil,"o","o"}) -- 3
print(#{"o",nil,"o"}) -- 3
-- several nil elements
print(#{"o",nil,nil}) -- 1
print(#{nil,"o",nil}) -- 0
print(#{nil,nil,"o"}) -- 3
I should mention that repl.it currently uses Lua 5.1.5 which is rather old, but if you test with the Lua demo, which currently uses Lua 5.3.5, you’ll get the same results.
By looking at those results and by looking at this answer, I assume that:
• if the last element is not nil, the length operator returns the full size of the table, including nil entries if any
• if the last element is nil, it counts the consecutive non-nil elements from index 1 and stops counting at the first nil
Are those assumptions correct?
Can we predict a 100% well-defined behavior when a table contains one or several nil values?
The Lua documentation states that the length of a table is only defined if the table is a sequence. Does that mean that the length operator has undefined behavior for non-sequences?
Apart from the length operator, can nil values cause any trouble in a table?
We can predict some behaviour, but it is not standardised, and as such you should never rely on it. It's quite possible that the behaviour may change within this major version of Lua.
Should you ever need to store nil values in a table, I suggest replacing the holes with a unique placeholder value (e.g. NIL = {}; if v == nil then t[k] = NIL end); that is quite cheap to test against and safe.
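For example, a minimal sketch of that placeholder idea (NIL is just a private empty table; no real value can ever be equal to it):

local NIL = {}  -- unique placeholder standing in for nil

-- Store a variable number of values, replacing holes with NIL so the
-- real count survives in the field n.
local function pack(...)
  local t = { n = select("#", ...) }
  for i = 1, t.n do
    local v = select(i, ...)
    t[i] = (v == nil) and NIL or v
  end
  return t
end

local t = pack("a", nil, "c")
print(t.n)         -- 3, the real length
print(t[2] == NIL) -- true, the hole is still detectable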
That said...
As there is even a difference in the result of # depending on how the table is defined, you'll have to distinguish between statically defined (constant) tables and dynamically defined (mutated) tables.
Static table definitions:
#{nil,nil,nil,nil,nil, 1} -- 6
#{3, 2, nil, 1} -- 4
#{nil,nil,nil, 1, 1,nil} -- 0
#{nil,nil, 1, 1, 1,nil} -- 5
#{nil, 1, 1, 1, 1,nil} -- 5
#{nil,nil,nil,nil, 1,nil} -- 0
#{nil,nil, 1,nil, 1,nil,nil} -- 5
#{nil,nil,nil, 1,nil,nil, 1,nil} -- 4
Using this kind of definition, as long as the last value is non-nil, you will get a length equal to the position of the last value. If the last value is nil, Lua starts a (non-linear) search from the tail until it finds the first non-nil value.
Dynamic table definitions:
local x={}; x[5]=1;print(#x) -- 0
local x={}; x[1]=1;x[2]=1;x[3]=1;x[5]=1;print(#x) -- 3
local x={}; x[1]=1;x[2]=1;x[4]=1;x[5]=1;print(#x) -- 5
#{[5]=1} -- 0
local x={nil,nil,nil,1};x[5]=1;print(#x) -- 0
As soon as the table has been modified once (and that includes static definitions that use the [index]= form), the operator works the other way. If the first element is nil, # always returns 0; if it is not, # starts a search that I did not investigate further (I guess you can check the sources, though I don't think it's a standard binary search) until it finds a nil value that is preceded by a non-nil value.
As said before, relying on this behaviour is not a good idea, and invites lots of issues down the road. Though if you want to make a nasty unmaintainable program to mess with a colleague, that's a sure way to do it.
When a table is a sequence (all numeric keys start at 1 and there are no nil gaps), # is defined to be precisely the count of those elements.
For non-sequence tables, it is a bit more complicated. Lua 5.2 seems to leave the result as undefined. For 5.1 and 5.3, the result of the operation is a border.
A border in a table is any positive index that contains a non-nil value followed by nil, or 0 if the first element is nil. # is defined to return any value that satisfies these conditions.
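For instance (the concrete result below matches the constructor tests earlier in the question, but by this definition the other border would be just as legal):

-- {10, nil, 30} has two borders: 1 (10 is followed by nil) and
-- 3 (30 is the last non-nil value, followed by nothing).
local t = {10, nil, 30}
print(#t)  -- 3 here (as in the tests above), but 1 would also be a
           -- correct answer by the definition of a border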
Looking at it from another perspective, since tables contain an "array" part and a "map" part, Lua has no way of knowing where the "map" indices start. For example, you can create a table with 1000 values and then set the first 999 of them to nil; that could leave you with a table of "size" 1000. However, you can also start with an empty table and set the 1000th element, having a table of "size" 0 but still structurally equivalent to the first one. The result of # is then simply the first valid value the internal algorithm finds.
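A small sketch of that point, assuming nothing beyond what is said above (both tables end up with only key 1000 non-nil, yet # is free to report a different border for each):

local a = {}
for i = 1, 1000 do a[i] = i end    -- build 1000 entries...
for i = 1, 999  do a[i] = nil end  -- ...then clear the first 999

local b = {}
b[1000] = 1000                     -- same visible contents, built directly

print(#a, #b)  -- each is *some* border of a non-sequence; the two results
               -- need not agree and may change between Lua versions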
The length operator produces undefined behaviour for tables that aren't sequences (i.e. tables with nil elements in the middle of the array). This means that even if the Lua implementation always behaves in a certain way, you shouldn't rely on that behaviour, as it may change in future versions of Lua, or in different implementations like LuaJIT.
You can use nils in tables - there is nothing wrong with that - just don't use the length operator on a table which might have nils before non-nil values.
The post you linked to contains more details about how the actual algorithm works. It mentions counting elements with a "binsearch", i.e. a binary search. This is not the same as just counting the elements one by one - if there are nils in the table, then depending on their exact position, the binary search algorithm may treat them as the end of the table, or may just ignore them.
To sum up, the algorithm is harder to predict than you were assuming, and even though it is technically possible to predict what will happen in any given case, you shouldn't rely on that behaviour as it is liable to change.

Other ways to call/eval dynamic strings in Lua?

I am working with a third party device which has some implementation of Lua, and communicates in BACnet. The documentation is pretty janky, not providing any sort of help for any more advanced programming ideas. It's simply, "This is how you set variables...". So, I am trying to just figure it out, and hoping you all can help.
I need to set a long list of variables to certain values. I have a userdata 'ME', with a bunch of variables named MVXX (e.g. - MV21, MV98, MV56, etc).
(This is all kind of background for BACnet.) Variables in BACnet all have 17 'priorities', i.e., every BACnet variable is actually a sort of list of 17 values, with priority 16 being the default. So, typically, if I were to say ME.MV12 = 23, that would set MV12's priority-16 to the desired value of 23.
However, I need to set priority 17. I can do this in the provided Lua implementation, by saying ME.MV12_PV[17] = 23. I can set any of the priorities I want by indexing that PV. (Corollaries - what is PV? What is the underscore? How do I get to these objects? Or are they just interpreted from Lua to some function in C on the backend?)
All this being said, I need to make that variable name dynamic, so that I can set whichever value I need to set, based on some other code. I have made several attempts.
This tells me the object (MV12_PV[17]) does not exist:
x = 12
ME["MV" .. x .. "_PV[17]"] = 23
But this works fine, setting priority 16 to 23:
x = 12
ME["MV" .. x] = 23
I was attempting some sort of what I think is called an evaluation, or eval. But this just prints out function followed by some random 8-digit number:
x = 12
test = assert(loadstring("MV" .. x .. "_PV[17] = 23"))
print(test)
Any help? Apologies if I am unclear - tbh, I am so far behind the 8-ball I am pretty much grabbing at straws.
Underscores can be part of Lua identifiers (variable and function names). They are just part of the variable name (like letters are) and aren't a special Lua operator like [ and ] are.
In the expression ME.MV12_PV[17] we have ME being an object with a bunch of fields, ME.MV12_PV being an array stored in the "MV12_PV" field of that object and ME.MV12_PV[17] is the 17th slot in that array.
If you want to access fields dynamically, the thing to know is that accessing a field with dot notation in Lua is equivalent to using bracket notation and passing in the field name as a string:
-- The following are all equivalent:
x.foo
x["foo"]
local fieldname = "foo"
x[fieldname]
So in your case you might want to try doing something like this:
local n = 12
ME["MV"..n.."_PV"][17] = 23
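And if you have a whole list of points to command, the same bracket-notation trick works in a loop. A sketch (the point numbers and the value 23 are made up for illustration; ME and the MVxx_PV fields come from your device's environment):

local points = { 12, 21, 56, 98 }   -- whichever MV objects you need
for _, n in ipairs(points) do
  ME["MV" .. n .. "_PV"][17] = 23   -- write priority 17 on each one
end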
BACnet "Commandable" Objects (e.g. Binary Output, Analog Output, and optionally Binary Value, Analog Value and a handful of others) actually have 16 priorities (1-16). The "17th" you are referring to may be the "Relinquish Default", a value that is used if all 16 priorities are set to NULL or "Relinquished".
Perhaps your system will allow you to write to a BACnet Property called "Relinquish Default".

trying witness null: result of operation might violate subset type constraint

I've written a class that represents a binary relation on a set, S, with two fields: that set, S, and a second set of pairs of values drawn from S. The class defines a bunch of properties of relations, such as being single-valued (i.e., being a function, as defined in an "isFunction()" predicate). After the class definition I try to define some subset types. One is meant to define a subtype of these relations that are also actually "functions". It's not working, and it's a bit hard to decode the resulting error messages. Note that the Valid() and isFunction() predicates do declare "reads this;". Any ideas on where I should be looking? Is it that Dafny can't tell that the subset type is inhabited? Is there a way to convince it that it is?
type func<T> = f: binRelOnS<T> | f.Valid() && f.isFunction()
[Dafny VSCode] trying witness null: result of operation might violate subset type constraint for 'binRelOnS'
Subset types and non-emptiness
A subset type definition of the form
type MySubset = x: BaseType | RHS(x)
introduces MySubset as a type that stands for those values x of type BaseType that satisfy the boolean expression RHS(x). Since every type in Dafny must be nonempty, there is a proof obligation to show that the type you declared has some member. Dafny may find some candidate values and will try to see if any one of them satisfies RHS. If the candidates don't, you get an error message like the one you saw. Sometimes, the error message will tell you which candidate values Dafny tried.
In your case, the only candidate value that Dafny tried is the value null. As James points out, the value null doesn't even get to first base, because the BaseType in your example is a type of non-null references. If you change binRelOnS<T> to binRelOnS?<T>, then null stands a chance of being a possible witness for showing your subset type to be nonempty.
User-supplied witnesses
Since Dafny is not too clever about coming up with candidate witnesses, you may have to supply one yourself. You do this by adding a witness clause at the end of the declaration. For example:
type OddInt = x: int | x % 2 == 1 witness 73
Since 73 satisfies the RHS constraint x % 2 == 1, Dafny accepts this type. In some programs, it can happen that the witness you have in mind is available only in ghost code. You can then write ghost witness instead of witness, which allows the subsequent expression to be ghost. A ghost witness can be used to convince the Dafny verifier that a type is nonempty, but it does not help the Dafny compiler in initializing variables of that type, so you will still need to initialize such variables yourself.
Using a witness clause, you could attempt to supply your own witness using your original definition of the subset type func. However, a witness clause takes an expression, not a statement, which means you are not able to use new. If you don't care about compiling your program and you're willing to trust yourself about the existence of the witness, you can declare a body-less function that promises to return a suitable witness:
type MySubset = x: BaseType | RHS(x) ghost witness MySubsetWitness()
function MySubsetWitness(): BaseType
ensures RHS(MySubsetWitness())
You'll either need ghost witness or function method. The function MySubsetWitness will forever be left without a body, so there's room for you to make a mistake about some value satisfying RHS.
The witness clause was introduced in Dafny version 2.0.0. The 2.0.0 release notes mention this, but apparently don't give much of an explanation. If you want to see more examples of witness, search for that keyword in the Dafny test suite.
Subset types of classes
In your example, if you change the base type to be a possibly-null reference type:
type func<T> = f: binRelOnS?<T> | f.Valid() && f.isFunction()
then your next problem will be that the RHS dereferences f. You can fix this by weakening your subset-type constraint as follows:
type func<T> = f: binRelOnS?<T> | f != null ==> f.Valid() && f.isFunction()
Now comes the part that may be a deal breaker. Subset types cannot depend on mutable state. This is because types are a very static notion (unlike specifications, which often depend on the state). It would be a disaster if a value could satisfy a type one moment and then, after some state change in the program, not satisfy the type. (Indeed, almost all type systems with subset/refinement/dependent types are for functional languages.) So, if your Valid or isFunction predicates have a reads clause, then you cannot define func in the way you have hoped. But, as long as both Valid and isFunction depend only on the values of const fields in the class, then no reads clause is needed and you are all set.
Rustan

MVC jqGrid Small Integer Error

This article is really great and awesome: "ASP.NET MVC 2.0 Implementation of searching in jqgrid".
But right now I am facing a searching problem after adding a field with a small integer (smallint) data type. The field serves as a Status.
Let's say the value one (1) is for Active and the value two (2) is for Inactive.
When I type 1 or 2 in the search text box, it throws the following error:
System.Data.Entity: The argument types 'Edm.Int16' and 'Edm.String' are incompatible for this operation. Near equals expression, line 6, column 12.
Thank you in advance.
I am glad that my old answer was helpful for you. I suppose you have a problem near the line
// TODO: Extend to other data types
In the code which I included in that answer, I showed that propertyInfo.PropertyType.FullName holds the information about the data type of the entity's property. In the code I used only two types: string and 32-bit integer. In the case of 32-bit integers I parsed the data accordingly with Int32.Parse:
String.Compare(propertyInfo.PropertyType.FullName,
    "System.Int32", StringComparison.Ordinal) == 0 ?
        new ObjectParameter("p" + iParam, Int32.Parse(rule.data)) :
        /* ... the non-integer branch ... */
You should replace the "? :" operator with a switch in which you test propertyInfo.PropertyType.FullName against more data types. For example, in the case of the smallint SQL type you should use System.Int16, in the case of tinyint it should be System.Byte, the sbyte data type corresponds to System.SByte, and so on. If you use the correct data type as the second parameter of ObjectParameter, all should work correctly.

Can a SHA-1 hash be all-zeroes?

Is there any input that SHA-1 will compute to a hex value of forty zeros, i.e. "0000000000000000000000000000000000000000"?
Yes, it's just incredibly unlikely. I.e. one in 2^160, or 0.00000000000000000000000000000000000000000000006842277657836021%.
Also, because SHA-1 is cryptographically strong, it would also be computationally infeasible (at least with current computer technology -- all bets are off for emergent technologies such as quantum computing) to find out what data would result in an all-zero hash until it occurred in practice. If you really must use the "0" hash as a sentinel, be sure to include an appropriate assertion (that you did not just hash input data to your "zero" hash sentinel) that survives into production. It is a failure condition your code will permanently need to check for. WARNING: your code will permanently be broken if an input ever does hash to your sentinel value.
Depending on your situation (if your logic can cope with handling the empty string as a special case in order to forbid it from input), you could use the SHA-1 hash ('da39a3ee5e6b4b0d3255bfef95601890afd80709') of the empty string. You could also use the hash of any string that is not in your input domain, such as sha1('a') if your input is numeric-only by invariant. If the input is preprocessed to add any regular decoration, then a hash of something without the decoration would work as well (e.g. sha1('abc') if your inputs like 'foo' are decorated with quotes to become '"foo"').
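Putting those two suggestions together, here is a sketch of the guard (in Lua to match the rest of this page; digest is assumed to be the 40-character hex string returned by whatever SHA-1 routine you already call, none is bundled here):

local ZERO_SENTINEL = string.rep("0", 40)  -- the reserved "no hash" value

local function checked(digest)
  -- fail loudly if real input data ever hashes to the sentinel
  assert(digest ~= ZERO_SENTINEL,
         "input data hashed to the reserved all-zero sentinel")
  return digest
end

-- e.g. the SHA-1 of the empty string mentioned above passes the check
print(checked("da39a3ee5e6b4b0d3255bfef95601890afd80709"))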
I don't think so.
There is no easy way to show why it's not possible. If there was, then this would itself be the basis of an algorithm to find collisions.
Longer analysis:
The preprocessing makes sure that there is always at least one 1 bit in the input.
The loop over w[i] will leave the original stream alone, so there is at least one 1 bit in the input (words 0 to 15). Even with clever design of the bit patterns, at least some of the values from 0 to 15 must be non-zero since the loop doesn't affect them.
Note: leftrotate is circular, so no 1 bits will get lost.
In the main loop, it's easy to see that the factor k is never zero, so temp can't be zero for the reason that all operands on the right hand side are zero (k never is).
This leaves us with the question whether you can create a bit pattern for which (a leftrotate 5) + f + e + k + w[i] returns 0 by overflowing the sum. For this, we need to find values for w[i] such that w[i] = 0 - ((a leftrotate 5) + f + e + k)
This is possible for the first 16 values of w[i] since you have full control over them. But the words 16 to 79 are again created by xoring the first 16 values.
So the next step could be to unroll the loops and create a system of linear equations. I'll leave that as an exercise to the reader ;-) The system is interesting since we have a loop that creates additional equations until we end up with a stable result.
Basically, the algorithm was chosen in such a way that you can create individual 0 words by selecting input patterns but these effects are countered by xoring the input patterns to create the 64 other inputs.
Just an example: To make temp 0, we have
a = h0 = 0x67452301
f = (b and c) or ((not b) and d)
= (h1 and h2) or ((not h1) and h3)
= (0xEFCDAB89 & 0x98BADCFE) | (~0xEFCDAB89 & 0x10325476)
= 0x98badcfe
e = 0xC3D2E1F0
k = 0x5A827999
which gives us (a leftrotate 5) + f + e + k = 0x9FB498B3 and hence w[0] = 0 - 0x9FB498B3 = 0x604B674D (mod 2^32). This value is then used in the words 16, 19, 22, 24-25, 27-28, 30-79.
Word 1, similarly, is used in words 1, 17, 20, 23, 25-26, 28-29, 31-79.
As you can see, there is a lot of overlap. If you calculate the input value that would give you a 0 result, that value influences at least 32 other input values.
The post by Aaron is incorrect. It is getting hung up on the internals of the SHA1 computation while ignoring what happens at the end of the round function.
Specifically, see the pseudo-code from Wikipedia. At the end of the round, the following computation is done:
h0 = h0 + a
h1 = h1 + b
h2 = h2 + c
h3 = h3 + d
h4 = h4 + e
So an all 0 output can happen if h0 == -a, h1 == -b, h2 == -c, h3 == -d, and h4 == -e going into this last section, where the computations are mod 2^32.
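A tiny illustration of that wrap-around in Lua (h0 here is just SHA-1's initial value, as in the worked example above; the point is only the modular arithmetic):

local MOD = 4294967296             -- 2^32
local h0  = 0x67452301
local a   = (MOD - h0) % MOD       -- "-h0" modulo 2^32
print((h0 + a) % MOD)              -- 0: the final addition wraps to zero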
To answer your question: nobody knows whether there exists an input that produces an all-zero output, but cryptographers expect that there is, based upon the simple argument provided by daf.
Without any knowledge of SHA-1 internals, I don't see why any particular value should be impossible (unless explicitly stated in the description of the algorithm). An all-zero value is no more or less probable than any other specific value.
Contrary to all of the current answers here, nobody knows that. There's a big difference between a probability estimation and a proof.
But you can safely assume it won't happen. In fact, you can safely assume that just about ANY value won't be the result (assuming it wasn't obtained through some SHA-1-like procedures). You can assume this as long as SHA-1 is secure (it actually isn't anymore, at least theoretically).
People don't seem to realize just how improbable it is (if all humanity focused all of its current resources on finding a zero hash by brute-forcing, it would take about xxx... ages of the current universe to crack it).
If you know the function is safe, it's not wrong to assume it won't happen. That may change in the future, so assume some malicious inputs could give that value (e.g. don't erase user's HDD if you find a zero hash).
If anyone still thinks it's not "clean" or something, I can tell you that nothing is guaranteed in the real world, because of quantum mechanics. You assume you can't walk through a solid wall just because of an insanely low probability.
Contrary to all answers here, the answer is simply No.
The hash value always contains bits set to 1.
