Is it a bug when I pass zero to the function luaO_ceillog2? - lua

I'm reading the Lua source code, version 5.3, and I found that the function
int luaO_ceillog2 (unsigned int x) in lobject.c doesn't treat 0 as a special case. When 0 is passed to this function, it returns 32. Is this a bug? I'm confused.

luaO_ceillog2 is a function that's only used internally. Its name implies that it calculates the ceiling (the smallest integer that is not less than the value) of log2 of the argument.
Mathematically, log_b(x) is only defined for positive x, so 0 is not a valid argument for this function. I don't think this counts as a bug.
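For illustration, here is a simplified C sketch of the idea (not the actual Lua 5.3 source, which uses a small lookup table, and assuming a 32-bit unsigned int): it counts how many right shifts are needed to reduce x - 1 to zero, which is ceil(log2(x)) for x > 0, and it also shows where the 32 comes from when x is 0.

#include <stdio.h>

/* Simplified sketch of ceil(log2(x)) for a 32-bit unsigned int.
   It mirrors the behaviour of luaO_ceillog2, not its exact code. */
static int ceillog2_sketch(unsigned int x) {
    int l = 0;
    x--;                /* for x == 0 this wraps around to 0xFFFFFFFF, */
    while (x != 0) {    /* so the loop runs 32 times and 32 is returned */
        l++;
        x >>= 1;
    }
    return l;
}

int main(void) {
    printf("%d %d %d %d\n",
           ceillog2_sketch(1),   /* 0 */
           ceillog2_sketch(8),   /* 3 */
           ceillog2_sketch(9),   /* 4 */
           ceillog2_sketch(0));  /* 32, the case from the question */
    return 0;
}

The same unsigned wrap-around is what produces 32 in the real function, so the result for 0 is a side effect of calling it outside its intended domain rather than a deliberate return value.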

Related

Bad argument to 'random'

This is my code
while true do
script.Parent.Position = Vector3.new((math.random(-41.994,15.471)),0.5,(math.random(129.514,69.442)))
script.Parent.Color = Color3.new(math.random(0,255), math.random(0,255), math.random(0,255))
wait(1)
end
The programming language I am using is Lua
When I try to use this code I am presented with this error:
"15:50:47.926 - Workspace.rock outer walls.Model.Rocks.Part0.Script:2: bad argument #2 to 'random' (interval is empty)"
The purpose of the code is to randomly teleport the part the script is in around, but not too far away, and at the same y position.
Can somebody please give me some form of explanation
Ps. A while ago I made a rude post on this website because I was confused at how to do a lot of things and now I understand some stuff better so I would like to apologize for my idiocy ~Zeeen
In Lua, math.random can be called 3 ways:
with no arguments
with 1 integer argument
with 2 integer arguments
It does not accept values like -41.994 or 15.471. The "interval is empty" part of the error also tells you that in math.random(129.514, 69.442) the first argument is greater than the second, so there is no number in that range for it to return.
If you change your values to integers and put the smaller one first, for example math.random(-41, 15) and math.random(69, 129), you shouldn't see the error any more.
Lua 5.3 reference manual: http://www.lua.org/manual/5.3/manual.html#pdf-math.random
math.random ([m [, n]])
When called without arguments, returns a pseudo-random float with uniform distribution in the range [0,1). When called with two integers m and n, math.random returns a pseudo-random integer with uniform distribution in the range [m, n]. (The value n-m cannot be negative and must fit in a Lua integer.) The call math.random(n) is equivalent to math.random(1,n).
This function is an interface to the underlying pseudo-random generator function provided by C.
As Nifim's answer correctly points out, there are three ways to call math.random in Lua.
With no arguments, it returns a real number in the range 0.0 to 1.0.
With one or two integer arguments, it returns an integer.
None of these directly gives you what you want, which I presume is a random real number in a specified range.
To do that, you'll need to call math.random with no arguments and then adjust the result.
For example, if you wanted a random number between 5.0 and 10.0, you could use
math.random() * 5.0 + 5.0
Consider writing your own wrapper function that takes two floating-point arguments and calls math.random.
function random_real(x, y)
return x + math.random() * (y-x)
end

Getting the size in bytes of an arbitrary integer

Given an integer, say 98749287, is there some built-in/library function, in either Erlang or Elixir, for getting its size in bytes?
To clarify, the minimum number of bytes used to represent the number in binary.
It seems simple, and I have written a function using the "division by base" method and then counting bits, but after some hours of searching the docs I haven't found anything for what would seem a useful thing to have.
If you have an unsigned integer, you can use the following snippet:
byte_size(binary:encode_unsigned(Integer))
Example:
1> byte_size(binary:encode_unsigned(3)).
1
2> byte_size(binary:encode_unsigned(256)).
2
3> byte_size(binary:encode_unsigned(98749287)).
4
Try this expression:
Value = (... your input ...),
NumBytes = (byte_size(integer_to_binary(Value, 2)) + 7) div 8.
Here integer_to_binary(Value, 2) produces the base-2 digits as a binary string, so its byte_size is the number of bits; adding 7 and dividing by 8 rounds that up to whole bytes.
Reference: http://www.erlang.org/doc/man/erlang.html#integer_to_binary-2

Computing UILabel height & UIFont height (for number of lines) using ceil() or roundf()?

I have these values that I've logged:
label.frame.size.height :18.000000, label.font.lineHeight: 17.895000
If I use roundf(), like:
roundf(label.frame.size.height / label.font.lineHeight) // answer: 1
while with ceil():
ceil(label.frame.size.height / label.font.lineHeight) // answer: 2
but when computed manually, the raw quotient is 1.00586756.
I wonder which of these two is better and more reliable in general. Why does everybody use ceil() to determine the number of lines of a UILabel?
When counting lines, any text that does not fit on a line, however little, has to be carried to the next line, so even that .005 is significant: that part of the text needs a line of its own. So it is better to use ceil() rather than roundf(). With roundf(), the fractional part only counts once it is greater than or equal to one half.
ceil()
The C library function ceil(x) returns the smallest integer value greater than or equal to x.
I still don't understand why most people use ceil() when computing the number of lines, since roundf() is more accurate.
When computing the number of lines, it looks to me like roundf() is indeed more accurate, but since it is a number of lines, the decimal values are not significant.
Computing what is shown in the image:
54 / 17.895000 = 3.01760268
and numberOfLines = 3.
If we use roundf() the answer would be 3 as well, while with ceil() it is already 4.
Therefore using floor() or simply converting the result to int will do the job:
int result = (int)floor(answer);
//or
int result = (int)answer;
As for my question, I think roundf() will do the work for me when computing the number of lines in general.
I'm making a class that will compute the number of lines based on these values, and it will be used by the whole app.
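For reference, a small standalone C check of the two values discussed in this thread (the 18-point label with a 17.895-point line height, and the 54-point example), showing how roundf(), ceilf(), and floorf() compare on them:

#include <math.h>
#include <stdio.h>

int main(void) {
    float lineHeight = 17.895f;          /* label.font.lineHeight from the question */

    float ratio1 = 18.0f / lineHeight;   /* ~1.0059 */
    printf("18.0: roundf=%d ceilf=%d floorf=%d\n",
           (int)roundf(ratio1), (int)ceilf(ratio1), (int)floorf(ratio1));  /* 1 2 1 */

    float ratio2 = 54.0f / lineHeight;   /* ~3.0176 */
    printf("54.0: roundf=%d ceilf=%d floorf=%d\n",
           (int)roundf(ratio2), (int)ceilf(ratio2), (int)floorf(ratio2));  /* 3 4 3 */
    return 0;
}

Which one is appropriate depends on what the height measures: if it is the height the text actually needs, ceil() makes sure any overflowing fraction gets its own line; if it is the height available in the frame (as in both examples above, where the frame is fractionally taller than a whole number of lines), ceil() counts one line too many.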

Objective C ceil returns wrong value

NSLog(#"CEIL %f",ceil(2/3));
should return 1. However, it shows:
CEIL 0.000000
Why, and how do I fix this problem? I use ceil([myNSArray count]/3) and it returns 0 when the array count is 2.
The same rules as C apply: 2 and 3 are ints, so 2/3 is an integer divide. Integer division truncates so 2/3 produces the integer 0. That integer 0 will then be cast to a double precision float for the call to ceil, but ceil(0) is 0.
Changing the code to:
NSLog(#"CEIL %f",ceil(2.0/3.0));
Will display the result you're expecting. Adding the decimal point causes the constants to be recognised as double precision floating point numbers (and 2.0f is how you'd type a single precision floating point number).
Maudicus' solution works because (float)2/3 casts the integer 2 to a float and C's promotion rules mean that it'll promote the denominator to floating point in order to divide a floating point number by an integer, giving a floating point result.
So, your current statement ceil([myNSArray count]/3) should be changed to either:
([myNSArray count] + 2)/3 // no floating point involved
Or:
ceil((float)[myNSArray count]/3) // arguably more explicit
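The first option uses a general integer trick: for a non-negative n and a positive divisor d, (n + d - 1) / d equals ceil(n / d) with no floating point involved (here d is 3, hence the + 2). A generic sketch, with a hypothetical helper name:

/* Round-up (ceiling) division for non-negative n and positive d:
   (n + d - 1) / d == ceil(n / d) without any floating point. */
unsigned int div_round_up(unsigned int n, unsigned int d) {
    return (n + d - 1) / d;
}
/* div_round_up(2, 3) == 1, div_round_up(3, 3) == 1, div_round_up(4, 3) == 2 */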
2/3 evaluates to 0 unless you cast it to a float.
So you have to be careful about your values being truncated to ints before you want them to be.
float decValue = (float) 2/3;
NSLog(#"CEIL %f",ceil(decValue));
==>
CEIL 1.000000
For your array example:
float decValue = (float) [myNSArray count]/3;
NSLog(@"CEIL %f",ceil(decValue));
It probably evaluates 2 and 3 as integers (as they are, obviously), evaluates the result (which is 0), and then converts it to float or double (which is also 0.00000). The easiest way to fix it is to type either 2.0f/3, 2/3.0f, or 2.0f/3.0f, (or without "f" if you wish, whatever you like more ;) ).
Hope it helps

How to define -1 as a uint64 in a match clause?

let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed 64-bit value, -1 would be stored (in two's complement) as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like any other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent positive values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the 1st bit is not used as part of the number itself, only the other 63 are. That bit is used to say whether the number is negative. If the number is unsigned, then all bits including the 1st bit are used as part of the number and the number is always non-negative (ie: is positive or 0).
Well, you could assign it -1, and on most architectures the two's-complement bit pattern would be stored; the signed/unsigned distinction is really only there for type checking. There is no negative sign in hardware.
I have no idea if the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t x = -1;                 /* wraps around to all bits set */
    printf("0x%" PRIx64 "\n", x);    /* prints 0xffffffffffffffff */
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
Don't have the F# tools installed at the moment so I cannot verify this.
If you want to go with a signed int:
-1: int64
but you can't match a negative number to a uint, as others have stated.
