What is an Illegal octal digit? - ruby-on-rails

I'm trying to make an array of zip codes.
array = [07001, 07920]
This returns:
SyntaxError: compile error
(irb):12: Illegal octal digit
array = [07001, 07920]
                  ^
        from (irb):12
        from :0
Never seen this before. Any workarounds?

Ruby is interpreting numbers that have a leading 0 as being in octal (base 8). Thus the digits 8 and 9 are not valid.
It probably makes more sense to store ZIP codes as strings rather than as numbers (that also saves you from having to pad with zeroes whenever you display them), like so: array = ["07001", "07920"]
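For example, in irb (the variable name zips is just for illustration):
zips = ["07001", "07920"]
zips.each { |zip| puts zip }
# 07001
# 07920
No padding is needed on output, because the leading zero is simply part of the string.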

Numbers that start with 0 are assumed to be in octal format, just like numbers that start with 0x are assumed to be in hexadecimal format. Octal digits only go from 0 to 7, so 9 is simply not legal in an octal number.
The easiest workaround would be to simply write the numbers in decimal format: 07001 in octal is the same as 3585 in decimal. Or did you mean the numbers to be read as decimal? Then the easiest workaround is to leave off the leading zeroes and just write 7001, since the leading zero carries no numeric information anyway.
However, you mention that you want an array of ZIP codes. In that case, the correct solution would be to use, well, an array of ZIP codes instead of an array of integers, since ZIP codes aren't integers; they are ZIP codes.
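A quick irb illustration of the octal-versus-decimal point above (the last line is commented out because it would abort with the error from the question):
07001     # => 3585, because the leading 0 makes this an octal literal
7001      # => 7001, a plain decimal literal
# 07920   # => SyntaxError: Illegal octal digit, since 9 is not an octal digit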

Your array contains number literals, so the leading zero causes them to be interpreted as octal (valid digits 0-7). If these are ZIP codes, and the leading zero is significant, they should probably be strings.

Related

Is there a way to convert an integer to always be a 4 digit hex number using Lua

I'm creating a Lua script which will calculate a temperature value and then format it as a hex number which must always be 4 digits. Having the answer as a string is fine.
Previously in C I have been able to use
data_hex=string.format('%h04x', -21)
which would return ffeb
however the 'h' string formatter is not available to me in Lua
Dropping the 'h' doesn't cater for negative values, i.e.
data_hex=string.format('%04x', -21)
print(data_hex)
which returns ffffffeb
data_hex=string.format('%04x', 21)
print(data_hex)
which returns 0015
Is there a convenient and portable equivalent to the 'h' string formatter?
I suggest you try using a bitwise AND to truncate any leading hex digits for the value being printed.
If you have a variable temp that you are going to print then you would use something like data_hex=string.format("%04x",temp & 0xffff) which would remove the leading hex digits leaving only the least significant 4 hex digits.
I like this approach as there is less string manipulation and it is congruent with the actual data type of a signed 16 bit number. Whether reducing string manipulation is a concern would depend on the rate at which the temperature is polled.
For further information on the format function see The String Library article.
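For example, assuming Lua 5.3 or newer (where the & operator is available; on 5.1/5.2 you would need a bit library or the arithmetic equivalent shown at the end):
local temp = -21
-- Mask to the low 16 bits so the sign extension disappears
local data_hex = string.format("%04x", temp & 0xffff)
print(data_hex)   -- ffeb

-- Equivalent without bitwise operators, for older Lua versions:
print(string.format("%04x", temp % 0x10000))   -- ffeb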

Am I comparing hexadecimal values in Tcl properly?

I am doing some simple hex comparison in an if statement.
0x7843E0 is greater than 0x780000 but the code below doesn't output anything.
if {"780000" <= "7843E0"} {
puts "True!"
}
>>
Omitting the trailing 0 works fine however.
if {"780000" <= "7843E"} {
puts "True!"
}
>>> True!
There must be something wrong with the trailing 0 but I don't understand what it is. Any ideas?
You're having problems with the way the expr command parses numbers. (The rest of Tcl is more relaxed about this.) The issue is that:
"780000" gets interpreted as a decimal integer.
"7843E0" gets interpreted as a double precision floating point number. (Compare with 1.2e10; the number parser thinks it is fitting the same sort of pattern.)
"780000" gets interpreted as a decimal integer.
"7843E" gets interpreted as a non-numeric string (a fallback because no numeric interpretation is legal).
The <= operator will happily compare two numbers if they're both numbers, or two strings if at least one of the parameters to it is non-numeric. (Yes, this does make for weird semantics occasionally.) Moreover, the expr command is eager to interpret values as numbers if it possibly can, but it still has Tcl's syntactic rules for what actually is numeric, and what type of numeric those things are. When you don't stick to the rules, it gets a bit odd.
To get a value interpreted as hexadecimal, you have to either prefix its string representation with 0x (e.g., 0x7843E0) or force things with a command such as scan with %x:
scan "780000" %x a
scan "7843E0" %x b
if {$a <= $b} {
    puts "True"
}
Forcing interpretations with scan is considered to be one of the best ways of dealing with this, as that only writes canonical values into variables. (If you'd been wanting to handle octal numbers, or were wanting to really always be decimal, you'd use %o and %d respectively; %f is for floating-point numbers.)
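For comparison, the 0x-prefix route mentioned above gives the same result without scan, as long as the literals are written with the prefix:
if {0x780000 <= 0x7843E0} {
    puts "True!"
}
# => True!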
Finally, if you're really comparing values as strings with normal ASCII-like rules, look at string compare instead of using <= directly.
if {[string compare $input1 $input2] <= 0} {
...
}

Delphi base convert Binary to Decimal

I'm converting binary to decimal and decimal to binary. My problem is the length of the binary value. For example:
Convertx("001110",2,10) = 14
Convertx("14",10,2) = 1110
But the length of the binary is NOT constant, so how can I get the exact original binary with the zeros in front of it? How can I get "001110" instead of "1110"?
I'm using this function in Delphi 7 -> How can I convert base of decimal string to another base?
The function you are using returns a string that is the shortest length required to express the value you have converted.
Any zeroes in front of that string are simply padding - they do not alter the binary value represented. If you need a string of a minimum length then you need to add this "padding" yourself. e.g. if you want a binary representation of a "byte" (i.e. 8 binary digits) then the minimum length you would need is 8:
binStr := Convertx('14', 10, 2);
while Length(binStr) < 8 do
  binStr := '0' + binStr;
If you need the exact number of zeroes that padded some original binary value when converting from binary to decimal and then back to the "original" binary again, then this is impossible unless you separately record how many padding zeroes there were, or the length of the original string including those zeroes.
i.e. in your example, the Convertx function has no idea (and no way to figure out) that the number 14 it is asked to convert to binary originally came from a 6 digit binary string with 2 leading zeroes, rather than from an 8 digit binary with 4 leading zeroes (or a 16 digit binary with 12 leading zeroes, etc.).
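If you need that padding in more than one place, a small helper keeps it in one spot. A minimal sketch (PadBinary is a made-up name, Convertx is the function from the linked answer, and StringOfChar is from the standard RTL):
function PadBinary(const Value: string; Width: Integer): string;
begin
  // Left-pad with '0' up to Width characters; longer values are returned unchanged.
  if Length(Value) < Width then
    Result := StringOfChar('0', Width - Length(Value)) + Value
  else
    Result := Value;
end;
// e.g. PadBinary(Convertx('14', 10, 2), 6) gives '001110', but only because
// you supplied the original width of 6 yourself.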
What you are hoping for is impossible. Consider
Convertx('001110', 2, 10)
and
Convertx('1110', 2, 10)
These both return the same output, 14. At that point there is no way to recover the length of the original input.
The way forward is therefore clear. You must remember the length of the original binary, as well as the equivalent decimal. However, once you have reached that conclusion then you might wonder whether there is an even simpler approach. Just remember the original binary value and save yourself having to convert back from decimal.

Under what conditions can [NSEvent characters] be a NSString of length greater than 1?

NSEvent has a characters property, which is an NSString valid for key up/down events. Under what conditions can the string length be greater than 1?
The only condition I have been able to find till now is when the NSEvent corresponds to input from an IME (Input Method Editor).
Edit - I knew about the surrogate pair case, but it somehow slipped my mind while asking this. I am more interested in the case where the number of graphemes (characters) is itself greater than 1.
Under what conditions can the string length be greater than 1?
When you have a keyboard/input method which can input any single character which requires a surrogate pair in UTF-16, e.g. a 𐀀 (Unicode Linear B Syllable B008 A), then the length will be 2. This is because length returns the number of 16-bit code units, not the number of characters.
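A quick way to see the difference between UTF-16 code units and composed characters, using plain Foundation (not tied to NSEvent):
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSString *s = @"\U00010000";              // U+10000, LINEAR B SYLLABLE B008 A
        NSLog(@"%lu", (unsigned long)s.length);   // 2: length counts UTF-16 code units

        __block NSUInteger composed = 0;
        [s enumerateSubstringsInRange:NSMakeRange(0, s.length)
                              options:NSStringEnumerationByComposedCharacterSequences
                           usingBlock:^(NSString *sub, NSRange r, NSRange er, BOOL *stop) {
            composed++;
        }];
        NSLog(@"%lu", (unsigned long)composed);   // 1: a single grapheme
    }
    return 0;
}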
You can also get this with programmatically-posted events. CGEventKeyboardSetUnicodeString() allows the caller to attach any arbitrary string to the key event.
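A sketch of that programmatic route (Core Graphics / ApplicationServices; the payload string here is arbitrary):
CGEventRef keyDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, true);
UniChar payload[] = { 'a', 'b', 'c' };                   // any string, any length
CGEventKeyboardSetUnicodeString(keyDown, 3, payload);    // the delivered event's characters becomes @"abc"
CGEventPost(kCGHIDEventTap, keyDown);
CFRelease(keyDown);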
High Unicode code points are represented by a character sequence in Mac OS X. Try 𫝑.

Converting HEX to ASCII in Lua Dissector

I'm trying to take HEX bytes and display them as their ASCII values. If someone could point me reasonably firmly in the right direction I'd be obliged. Tried any number of uint-type commands, and working with buffer(x, 2) as an argument.
I'm not sure what you mean by hex bytes, but the relevant functions are:
string.byte, which converts chars to numerical codes
string.char, which converts numerical codes to chars
For a single character in hexadecimal, you can use string.byte as mentioned by lhf. For longer sequences, you can create a loop in Lua, but that is not very efficient since it involves a lot of copying.
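For illustration, a plain-Lua version of that conversion could look like this (the name from_hex is made up; only standard string functions are used):
-- Turn a string of hex digit pairs (e.g. "5753") into the corresponding bytes.
local function from_hex(hex)
  return (hex:gsub("%x%x", function(pair)
    return string.char(tonumber(pair, 16))
  end))
end
print(from_hex("5753"))   -- WS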
Since Wireshark 1.11.3 there is a Struct.fromhex function that converts a string of hexadecimal characters to the binary equivalent.
Example:
-- From hex to bytes (with no separators)
assert(Struct.fromhex("5753") == "WS")
-- From hex to bytes (using a single space as separator)
assert(Struct.fromhex("57 53", " ") == "WS")
Similarly, there is a Struct.tohex function that converts from bytes to hex.
