Getting the size in bytes of an arbitrary integer - erlang

Given an integer, say 98749287, is there some built-in/library function, in either Erlang or Elixir, for getting its size in bytes?
To clarify, the minimum number of bytes used to represent the number in binary.
It seems simple, and I have written a function using the "division by base" method and then counting bits, but after some hours of searching the docs I haven't found anything for what would seem a useful thing to have.
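For illustration, the hand-rolled version is something like the following sketch (byte_len/1 and bit_len/2 are made-up names, not from any library):
% Count bits by repeated division by 2, then round up to whole bytes.
byte_len(0) -> 1;
byte_len(N) when N > 0 -> (bit_len(N, 0) + 7) div 8.

bit_len(0, Acc) -> Acc;
bit_len(N, Acc) -> bit_len(N div 2, Acc + 1).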

If you have an unsigned integer, you can use the following snippet:
byte_size(binary:encode_unsigned(Integer))
Example:
1> byte_size(binary:encode_unsigned(3)).
1
2> byte_size(binary:encode_unsigned(256)).
2
3> byte_size(binary:encode_unsigned(98749287)).
4

Try this expression:
Value = (... your input ...),
NumBytes = (size(integer_to_binary(Value, 2)) + 7) div 8.
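For example, a quick shell check with the value from the question:
1> Value = 98749287.
98749287
2> (size(integer_to_binary(Value, 2)) + 7) div 8.
4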
Reference: http://www.erlang.org/doc/man/erlang.html#integer_to_binary-2

Related

How to generate a 32 bit big-endian number in the format 0x00000001 in erlang

I need to generate a variable which has the following properties:
32-bit, big-endian integer, initialized with 0x00000001 (I'm going to increment that number one by one). Is there a syntax in Erlang for this?
In Erlang, normally you'd keep such numbers as plain integers inside the program:
X = 1.
or equivalently, if you want to use a hexadecimal literal:
X = 16#00000001.
And when it's time to convert the number to a binary representation in order to send it somewhere else, use bit syntax:
<<X:32/big>>
This returns a binary containing four bytes:
<<0,0,0,1>>
(That's a 32-bit big-endian integer. In fact, big-endian is the default, so you could just write <<X:32>>. <<X:64/little>> would be a 64-bit little-endian integer.)
On the other hand, if you just want to print the number in 0x00000001 format, use io:format with this format specifier:
io:format("0x~8.16.0b~n", [X]).
The 8 tells it to use a field width of 8 characters, the 16 tells it to use radix 16 (i.e. hexadecimal), and the 0 is the padding character, used for filling the number up to the field width.
Note that incrementing a variable works differently in Erlang compared to other languages. Once a variable has been assigned a value, you can't change it, so you'd end up making a recursive call, passing the new value as an argument to the function. This answer has an example.
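A minimal sketch of such a loop (counter and run/2 are made-up names for illustration, not from the answer linked above):
% counter.erl - emit From..To as 32-bit big-endian binaries.
-module(counter).
-export([run/2]).

run(From, To) when From =< To ->
    Packet = <<From:32/big>>,                    % four bytes, big-endian
    io:format("0x~8.16.0b -> ~w~n", [From, Packet]),
    run(From + 1, To);                           % "increment" by recursing
run(_, _) ->
    ok.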
According to the documentation[1], the following snippet should generate a 32-bit signed integer in little-endian byte order:
1> I = 258.
258
2> B = <<I:4/little-signed-integer-unit:8>>.
<<2,1,0,0>>
And the following should produce a big-endian number:
1> I = 258.
258
2> B = <<I:4/big-signed-integer-unit:8>>.
<<0,0,1,2>>
[1] http://erlang.org/doc/programming_examples/bit_syntax.html
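Note that the total bit size is the size multiplied by the unit, so a size of 4 with unit:8 is the same 32 bits as writing the size directly; a quick shell check (my addition, not from the linked documentation):
1> I = 258.
258
2> <<I:4/little-signed-integer-unit:8>> =:= <<I:32/little-signed>>.
true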

Getting garbage value while converting to long in Objective-C

I am trying to convert an NSString to long, but I am getting a garbage value. Below is my code:
long t1 = [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000;
long t2 = [[jsonDict valueForKeyPath:@"detail.fee"] doubleValue] * 10000000000000000;
NSLog(@"t1: %ld", t1);
NSLog(@"t2: %ld", t2);
detail.amount = 51.74
detail.fee = 2.72
Output:
t1: 9223372036854775807 (Getting Garbage value here)
t2: 27200000000000000 (Working fine)
Thanks in advance.
Each number type (int, long, double, float) has limits. For your long, which is 64-bit here (because your device is 64-bit), the upper limit is 9,223,372,036,854,775,807 (see here: https://en.wikipedia.org/wiki/9,223,372,036,854,775,807).
In your case, 51.74 * 1,000,000,000,000,000,000 =
51,740,000,000,000,000,000
while a 64-bit long only has a maximum of
9,223,372,036,854,775,807
So an overflow happens at 9,223,372,036,854,775,808 and above, which is what your calculation evaluates to.
Also note that what you are doing will cause problems even if you only cater for the 64-bit long range: what happens when your app runs on a 32-bit device (like the iPhone 5c or below)?
It is generally a bad idea to use such large numbers unless you're doing complex maths. If number accuracy is not critical, you should consider simplifying the number, e.g. 51,740G (G = giga).
It's because you're storing the product in the long type variables t1 and t2.
Use either float or double, and you'll get the correct answer.
Based on C's data types:
Long signed integer type. Capable of containing at least the
[−2,147,483,647, +2,147,483,647] range; thus, it is at least 32
bits in size.
Ref: https://en.wikipedia.org/wiki/C_data_types
9223372036854775807 is the maximum value of a 64-bit signed long. I deduce that [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000 is larger than the maximum long value, so when you cast it to long, you get the closest value that long can represent.
As you have read, it is not possible with long. Since it looks like you are doing financial math, you should use NSDecimalNumber instead of double to solve this problem.

set of WideChar: Sets may have at most 256 elements

I have this line:
const
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
The above does not compile, with error:
[Error] Sets may have at most 256 elements
But this line does compile ok:
var WS: WideString;
if WS[1] in [WideChar('A')..WideChar('Z')] then...
And this also compiles ok:
const
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
...
if WS[1] in MY_SET then...
Why is that?
EDIT: My question is: why does if WS[1] in [WideChar('A')..WideChar('Z')] compile? And why does MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')]; compile? Don't the set rules apply to them as well?
A valid set has to obey two rules:
Each element in a set must have an ordinal value less than 256.
The set must not have more than 256 elements.
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
Here you declare a set type (Set of WideChar) which has more than 256 elements -> Compiler error.
if WS[1] in [WideChar('A')..WideChar('Z')]
Here, the compiler sees WideChar('A') as an ordinal value. This value and all other values in the set are below 256. This is ok with rule 1.
The number of unique elements is also within the limit (Ord('Z') - Ord('A') + 1 = 26), so the second rule passes.
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
Here you declare a set that also fulfills the requirements as above. Note that the compiler sees this as a set of ordinal values, not as a set of WideChar.
A set can have no more than 256 elements.
Even with so few elements the set already uses 32 bytes.
From the documentation:
A set is a bit array where each bit indicates whether an element is in the set or not. The maximum number of elements in a set is 256, so a set never occupies more than 32 bytes. The number of bytes occupied by a particular set is equal to
(Max div 8) - (Min div 8) + 1
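For example, a set with elements 'A'..'Z' (ordinal values 65..90) occupies (90 div 8) - (65 div 8) + 1 = 11 - 8 + 1 = 4 bytes.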
For this reason only sets of byte, (ansi)char, boolean and enumerations with fewer than 257 elements are possible.
Because widechar uses 2 bytes it can have 65536 possible values.
A set of widechar would take up 8 KB, too large to be practical.
type
  Capitals = 'A'..'Z';
const
  MY_SET: set of Capitals = [WideChar('A')..WideChar('Z')];
This will compile and work the same.
It does seem a bit silly to use widechar if your code ignores unicode.
As written, only the English capital letters are recognized; you do not take different locales into account.
In this case it would be better to use code like
if (AWideChar >= 'A') and (AWideChar <= 'Z') ....
That will work no matter how many chars fall in between.
Obviously you can encapsulate this in a function to save on typing.
If you insist on having large sets, see this answer: https://stackoverflow.com/a/2281327/650492

Is it a bug when I pass zero to the function luaO_ceillog2?

I'm reading the Lua source code, version 5.3, and I found that the function int luaO_ceillog2 (unsigned int x) in the file lobject.c doesn't treat 0 as a special case. When 0 is passed to this function, it returns 32. Is this a bug? I am confused.
luaO_ceillog2 is a function that's only used internally. Its name suggests that it calculates ceil (the smallest integer that's not less than the value) of log2 of the argument.
Mathematically, log_b(x) is only defined for positive x, so 0 is not a valid argument for this function; I don't think this counts as a bug.
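(As for why it returns 32 for 0: the C implementation decrements its unsigned argument before counting, so 0 wraps around to 0xFFFFFFFF, whose ceil-log2 is 32.) For what it's worth, the same computation is easy to sketch in Erlang; ceillog2/1 below is an illustration, not the actual Lua code:
% ceil(log2(X)) for a positive integer X: decrement, then count
% how many right shifts it takes to reach zero.
ceillog2(X) when is_integer(X), X > 0 -> ceillog2(X - 1, 0).

ceillog2(0, L) -> L;
ceillog2(X, L) -> ceillog2(X bsr 1, L + 1).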

integer division

By definition, integer division returns the quotient.
Why does 4613.9145 div 100. give an error ("bad argument")?
For div the arguments need to be integers; / accepts arbitrary numbers as arguments, in particular floats. So for your example, the following would work:
1> 4613.9145 / 100.
46.139145
To contrast the difference, try:
2> 10 / 10.
1.0
3> 10 div 10.
1
Documentation: http://www.erlang.org/doc/reference_manual/expressions.html
Update: Integer division, sometimes denoted \, can be defined as:
a \ b = floor(a / b)
So you'll need a floor function, which isn't in the standard lib.
% intdiv.erl
-module(intdiv).
-export([floor/1, idiv/2]).

floor(X) when X < 0 ->
    T = trunc(X),
    case X - T == 0 of
        true -> T;
        false -> T - 1
    end;
floor(X) ->
    trunc(X).

idiv(A, B) ->
    floor(A / B).
Usage:
$ erl
...
Eshell V5.7.5 (abort with ^G)
> c(intdiv).
{ok,intdiv}
> intdiv:idiv(4613.9145, 100).
46
Integer division in Erlang, div, is defined to take two integers as input and return an integer. The link you gave in an earlier comment, http://mathworld.wolfram.com/IntegerDivision.html, only uses integers in its examples, so it is not really useful in this discussion. Using trunc and round will allow you to use any arguments you wish.
I don't know quite what you mean by "definition." Language designers are free to define operators however they wish. In Erlang, they have defined div to accept only integer arguments.
If it is the design decisions of Erlang's creators that you are interested in knowing, you could email them. Also, if you are curious enough to sift through the (remarkably short) grammar, you can find it here. Best of luck!
Not sure what you're looking for, @Bertaud. Regardless of how it's defined elsewhere, Erlang's div only works on integers. You can convert the arguments to integers before calling div:
trunc(4613.9145) div 100.
or you can use / instead of div and convert the quotient to an integer afterward:
trunc(4613.9145 / 100).
And trunc may or may not be what you want: you may want round, or floor or ceiling (which are not defined in Erlang's standard library, but aren't hard to define yourself, as miku did with floor above). That's part of the reason Erlang doesn't assume something and do the conversion for you. But in any case, if you want an integer quotient from two non-integers in Erlang, you have to have some sort of explicit conversion step somewhere.
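For example, the choice matters for negative quotients (reusing miku's intdiv module from above):
1> trunc(-4613.9145 / 100).
-46
2> intdiv:idiv(-4613.9145, 100).
-47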
