Seed random in 64-bit - iOS

With the iPhone 5S update I want my app to be able to support the new 64-bit processor.
However, using 64-bit may cause truncation if a larger data type is cast into a smaller one, as in the case of casting a long into an int. Most of the time this can easily be fixed by just using the bigger data type, but in the case of random number generators, which are sometimes seeded using the "time(NULL)" function, I cannot do that.
The current code is simple:
srandom(time(NULL));
But in Xcode 5 with 64-bit it causes the following error: Implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'. This is because "time(NULL)" returns a long integer and "srandom" requires an unsigned int. Therefore there are two options:
Convert the long integer to an unsigned int
Replace "time(NULL)" with another function which does the same job but returns an unsigned int.
Which one would you recommend and what function should I use to do it?
NOTE: I use random() instead of arc4random() because I also need to be able to seed the random number generator in order to get a repeatable outcome.

time() typically returns the number of seconds since the epoch (not counting leap seconds), which means if you use it more than once in a second (or two people run the program at the same time) then it will return the same value, resulting in a repeated sequence even when you don't want it. I recommend against using time(NULL) as a seed, even in the absence of a warning (or error with -Werror) caused by the truncation.
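To make the problem concrete, here is a minimal sketch of mine (not part of the original answer), assuming a POSIX system such as macOS/iOS where srandom()/random() are available: seeding twice with the same time value reproduces the same sequence, which is exactly what happens when the program starts twice within the same second.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void)
{
    // Same value for any two runs within the same second.
    unsigned int seed = (unsigned int)time(NULL);
    srandom(seed);
    printf("first run:  %ld %ld %ld\n", random(), random(), random());
    srandom(seed); // reseed with the same value
    printf("second run: %ld %ld %ld\n", random(), random(), random());
    // Both lines print identical numbers.
    return 0;
}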
You could use arc4random() to get a random seed instead of a seed based on time. It also happens to return an unsigned 32-bit value which will fix the error you're seeing.
srandom(arc4random());
You might consider moving to Objective-C++ so that you can use the standard C++ <random> library, which is much more powerful and flexible than these other libraries, and which also enables simpler and more direct expression of many ideas.
C++ <random> documentation
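As a rough illustration of that suggestion (a sketch of mine, not from the answer), a seedable, repeatable generator with <random> in an Objective-C++ (.mm) or C++ file might look like this:
#include <cstdint>
#include <cstdio>
#include <random>
int main()
{
    const std::uint32_t seed = 12345u;              // any fixed seed reproduces the run
    std::mt19937 engine(seed);                      // Mersenne Twister engine
    std::uniform_int_distribution<int> dist(0, 99); // numbers in [0, 99]
    for (int i = 0; i < 5; ++i)
        std::printf("%d\n", dist(engine));          // same 5 numbers for the same seed
    return 0;
}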

On iOS, just use arc4random(3) and don't worry about seeding.

Outputting values from CAMPARY

I'm trying to use the CAMPARY library (CudA Multiple Precision ARithmetic librarY). I've downloaded the code and included it in my project. Since it supports both cpu and gpu, I'm starting with cpu to understand how it works and make sure it does what I need. But the intent is to use this with CUDA.
I'm able to instantiate an instance and assign a value, but I can't figure out how to get things back out. Consider:
#include <time.h>
#include "c:\\vss\\CAMPARY\\Doubles\\src_cpu\\multi_prec.h"
int main()
{
const char *value = "123456789012345678901234567";
multi_prec<2> a(value);
a.prettyPrint();
a.prettyPrintBin();
a.prettyPrintBin_UnevalSum();
char *cc = a.prettyPrintBF();
printf("\n%s\n", cc);
free(cc);
}
Compiles, links, runs (VS 2017). But the output is pretty unhelpful:
Prec = 2
Data[0] = 1.234568e+26
Data[1] = 7.486371e+08
Prec = 2
Data[0] = 0x1.987bf7c563caap+86;
Data[1] = 0x1.64fa5c3800000p+29;
0x1.987bf7c563caap+86 + 0x1.64fa5c3800000p+29;
1.234568e+26 7.486371e+08
Printing each of the doubles like this might be easy to do, but it doesn't tell you much about the value of the 128-bit number being stored. Performing highly accurate computations is of limited value if there's no way to output the results.
In addition to just printing out the value, eventually I also need to convert these numbers to ints (I'm willing to try it all in floats if there's a way to print, but I fear that both accuracy and speed will suffer). Unlike MPIR (which doesn't support CUDA), CAMPARY doesn't have any associated multi-precision int type, just floats. I can probably cobble together what I need (mostly just add/subtract/compare), but only if I can get the integer portion of CAMPARY's values back out, which I don't see a way to do.
CAMPARY doesn't seem to have any docs, so it's conceivable these capabilities are there, and I've simply overlooked them. And I'd rather ask on the CAMPARY discussion forum/mail list, but there doesn't seem to be one. That's why I'm asking here.
To sum up:
Is there any way to output the 128-bit (multi_prec<2>) values from CAMPARY?
Is there any way to extract the integer portion from a CAMPARY multi_prec? Perhaps one of the (many) math functions in the library that I don't understand computes this?
There are really only 2 possible answers to this question:
There's another (better) multi-precision library that works on CUDA that does what you need.
Here's how to modify this library to do what you need.
The only people who could give the first answer are CUDA programmers. Unfortunately, if there were such a library, I feel confident talonmies would have known about it and mentioned it.
As for #2, why would anyone update this library if they weren't a CUDA programmer? There are other, much better multi-precision libraries out there. The ONLY benefit CAMPARY offers is that it supports CUDA. Which means the only people with any real motivation to work with or modify the library are CUDA programmers.
And, as the CUDA programmer with the most vested interest in solving this, I did figure out a solution (albeit an ugly one). I'm posting it here in the hopes that the information will be of value to future CAMPARY programmers. There's not much information out there for this library, so this is a start.
The first thing you need to understand is how CAMPARY stores its data. And, while not complex, it isn't what I expected. Coming from MPIR, I assumed that CAMPARY stored its data pretty much the same way: a fixed size exponent followed by an arbitrary number of bits for the mantissa.
But nope, CAMPARY went a different way. Looking at the code, we see:
private:
double data[prec];
Now, I assumed that this was just an arbitrary way of reserving the number of bits they needed. But no, they really do use prec doubles. Like so:
multi_prec<8> a("2633716138033644471646729489243748530829179225072491799768019505671233074369063908765111461703117249");
// Looking at a in the VS debugger:
[0] 2.6337161380336443e+99 const double
[1] 1.8496577979210756e+83 const double
[2] 1.2618399223120249e+67 const double
[3] -3.5978270144026257e+48 const double
[4] -1.1764513205926450e+32 const double
[5] -2479038053160511.0 const double
[6] 0.00000000000000000 const double
[7] 0.00000000000000000 const double
So, what they are doing is storing the max amount of precision possible in the first double, then the remainder is used to compute the next double and so on until they encompass the entire value, or run out of precision (dropping the least significant bits). Note that some of these are negative, which means the sum of the preceding values is a bit bigger than the actual value and they are correcting it downward.
With this in mind, we return to the question of how to print it.
In theory, you could just add all these together to get the right answer. But kinda by definition, we already know that C doesn't have a datatype to hold a value this size. But other libraries do (say MPIR). Now, MPIR doesn't work on CUDA, but it doesn't need to. You don't want to have your CUDA code printing out data. That's something you should be doing from the host anyway. So do the computations with the full power of CUDA, cudaMemcpy the results back, then use MPIR to print them out:
// Host-side code: needs <stdio.h>, MPIR's header, and CAMPARY's multi_prec.h.
#include <stdio.h>
#include "mpir.h"
#include "multi_prec.h"

#define MPREC 8
void ShowP(const multi_prec<MPREC> value)
{
multi_prec<MPREC> temp(value), temp2;
// from mpir at mpir.org
mpf_t mp, mp2;
mpf_init2(mp, value.getPrec() * 64); // Make sure we reserve enough room
mpf_init(mp2); // Only needs to hold one double.
const double *ptr = value.getData();
mpf_set_d(mp, ptr[0]);
for (int x = 1; x < value.getPrec(); x++)
{
// MPIR doesn't have a mpf_add_d, so we need to load the value into
// an mpf_t.
mpf_set_d(mp2, ptr[x]);
mpf_add(mp, mp, mp2);
}
// Using base 10, write the full precision (0) of mp, to stdout.
mpf_out_str(stdout, 10, 0, mp);
mpf_clears(mp, mp2, NULL);
}
Used with the number stored in the multi_prec above, this outputs the exact same value. Yay.
It's not a particularly elegant solution. Having to add a second library just to print a value from the first is clearly sub-optimal. And this conversion can't be all that speedy either. But printing is typically done (much) less frequently than computing. If you do an hour's worth of computing and a handful of prints, the performance doesn't much matter. And it beats the heck out of not being able to print at all.
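For the second part of the question (getting the integer portion back out), the same MPIR detour can be extended. This is my own sketch, not something CAMPARY provides: once the unevaluated sum has been accumulated into an mpf_t as in ShowP, mpz_set_f drops the fractional part and leaves an integer.
// Sketch: extract the integer part of a multi_prec via MPIR.
// 'result' must already have been initialized with mpz_init.
void GetIntPart(const multi_prec<MPREC> value, mpz_t result)
{
    mpf_t mp, mp2;
    mpf_init2(mp, value.getPrec() * 64);
    mpf_init(mp2);
    const double *ptr = value.getData();
    mpf_set_d(mp, ptr[0]);
    for (int x = 1; x < value.getPrec(); x++)
    {
        mpf_set_d(mp2, ptr[x]);
        mpf_add(mp, mp, mp2);
    }
    mpz_set_f(result, mp);     // truncates the fractional part
    mpf_clears(mp, mp2, NULL);
}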
CAMPARY has a lot of shortcomings (undocumented, unsupported, unmaintained). But for people who need multi-precision numbers on CUDA (especially if you need sqrt), it's the best option I've found.

Passing BIGINT between Erlang VM and the NIFs

Is there an efficient way to pass a BIGINT (an integer exceeding 64 bits on x86_64/amd64 architectures) between the Erlang VM and a NIF? So far I haven't found a supporting function in the enif module. Maybe converting BIGINTs to binaries will help, but there might be another good way.
This post from 2011 says there wasn't any support for big integers in the NIF API at the time. I couldn't find any such function in Erlang/OTP 21's documentation, so the statement is likely true as of today as well.
Here's how you could pass a big integer as an array of bytes:
From Erlang, instead of passing the integer directly, pass two values: the sign of the integer and the binary obtained by calling binary:encode_unsigned/1 on the integer.
Integer = ...,
my_nif_function(Integer < 0, binary:encode_unsigned(Integer)).
In the NIF function, you can get access to the bytes of the second argument using enif_inspect_binary:
ErlNifBinary bin;
enif_inspect_binary(env, bin_term, &bin); // make sure to check the return value of this function in the real code
bin.data now points to bin.size bytes, representing the bytes of the integer in Big Endian order (if you want Little Endian, pass little as the second argument to binary:encode_unsigned/2 above).
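Not part of the original answer, but to sketch the NIF side end to end: assuming a bignum library such as GMP/MPIR is linked into the NIF, the big-endian bytes can be imported with mpz_import and the sign reapplied.
#include <string.h>
#include <erl_nif.h>
#include <gmp.h>
// Rebuilds the integer from the {Negative, Binary} pair sent by the Erlang
// code above. 'out' must already have been initialized with mpz_init.
static int decode_bigint(ErlNifEnv *env, ERL_NIF_TERM neg_term,
                         ERL_NIF_TERM bin_term, mpz_t out)
{
    ErlNifBinary bin;
    char neg[8];
    if (!enif_inspect_binary(env, bin_term, &bin))
        return 0;
    if (!enif_get_atom(env, neg_term, neg, sizeof neg, ERL_NIF_LATIN1))
        return 0;
    // order = 1: most significant byte first; size = 1: one-byte words;
    // endian = 1: big-endian within each word; nails = 0: use all 8 bits.
    mpz_import(out, bin.size, 1, 1, 1, 0, bin.data);
    if (strcmp(neg, "true") == 0)   // first argument was Integer < 0
        mpz_neg(out, out);
    return 1;
}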


Better to use long or long long in 64 bit

In LP64, the size of a long and the size of a long long are the same (Apple Docs, Unix Docs).
Is there any difference then, when limiting yourself to the understanding that you're running on an LP64 system (as XCode appears to when compiling for 64 bit), between a long and a long long? Is there any performance reason to use a long instead of a long long if your goal is a 64 bit integral?
Here's why I ask. In Objective-C on Xcode, NSString's format methods (like printf) and NSNumber both use data types like int, long, long long, and their unsigned variants when converting numbers and text, rather than specifically sized types like int16_t, int32_t, and int64_t. This makes it difficult to program things that require a certain minimum size (i.e. networking or currency applications) or to store specifically sized data in an NSNumber without typecasting.
Is it safe, limiting to any Intel Mac OS or iOS device, to use int for int32_t and long long for int64_t when interacting with things like NSString's format functions and NSNumber?
Is it safe, limiting to any Intel Mac OS or iOS device, to use int for int32_t and long long for int64_t when interacting with things like NSString's format functions and NSNumber?
According to the ILP32 & LP64 conventions yes, but you should really document that you are relying on these sizes.
One way to do that is to use a clever macro that originated (as I understand) in the Linux kernel:
#define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
This macro will generate a compile time error if its condition argument is true, as in that case it attempts to determine the size of a negative-sized array. You can use it in the following simple function:
static __attribute__((unused)) void _compile_time_use_only_()
{
BUILD_BUG_ON( (sizeof(int) != 4) );
BUILD_BUG_ON( (sizeof(long long) != 8) );
}
Add that to your code and if you attempt to compile on any system where int is not 32-bits or long long is not 64-bits then you'll get a compile time error. There is essentially zero-cost at runtime (just a few bytes for the unused function).
Make sure you comment the function stating what it does!
You can of course assert the size of other types the same way.
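If your toolchain supports C11 or C++11 (an assumption, but true of recent Xcode versions), the same checks can be written with the built-in static assertion instead of the negative-array trick:
// C11 spells it _Static_assert; <assert.h> provides the static_assert macro,
// and in C++11 static_assert is a keyword.
#include <assert.h>
static_assert(sizeof(int) == 4,       "this code assumes a 32-bit int");
static_assert(sizeof(long long) == 8, "this code assumes a 64-bit long long");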
HTH
Use either NSInteger or int64_t. NSInteger is the platform's native integer type (at least 32 bits, and matching the sizes used for arrays etc.); int64_t is exactly 64 bits. This will also make the move to Swift easier.

Benefits of using NSInteger over int?

I am trying to comprehend how development is affected when developing for both 32-bit and 64-bit architectures. From what I have researched thus far, I understand an int is always 4 bytes regardless of the architecture of the device running the app. But an NSInteger is 4 bytes on a 32-bit device and 8 bytes on a 64-bit device. I get the impression NSInteger is "safer" and recommended but I'm not sure what the reasoning is for that.
My question is, if you know the possible value you're using is never going to be large (maybe you're using it to index into an array of 200 items or store the count of objects in an array), why define it as an NSInteger? That's just going to take up 8 bytes when you won't use it all. Is it better to define it as an int in those cases? If so, in what case would you want to use an NSInteger (as opposed to int or long etc)? Obviously if you needed to utilize larger numbers, you could with the 64-bit architecture. But if you needed it to also work on 32-bit devices, would you not use long long because it's 8 bytes on 32-bit devices as well? I don't see why one would use NSInteger, at least when creating an app that runs on both architectures.
Also I cannot think of a method which takes in or returns a primitive type - int, and instead utilizes NSInteger, and am wondering if there is more to it than just the size of the values. For example, (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section. I'd like to understand why this is the case. Assuming it's possible to have a table with 2,147,483,647 rows, what would occur on a 32-bit device when you add one more - does it wrap around to a -2,147,483,647? And on a 64-bit device it would be 2,147,483,648. (Why return a signed value? I'd think it should be unsigned since you can't have a negative number of rows.)
Ultimately, I'd like to obtain a better understanding of actual use of these number data types, perhaps some code examples would be great!
I personally think that 64-bit is actually the reason for the existence of NSInteger and NSUInteger; before 10.5, they did not exist. The two are simply defined as long in 64-bit and as int in 32-bit.
NSInteger/NSUInteger are defined as *dynamic typedef*s to one of these types, like this:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the 'bit-native' size.
I suggest you read this link thoroughly.
CocoaDev has some more info.
For the proper format specifier to use for each of these types, see the String Programming Guide's section on Platform Dependencies.
I remember this from attending an iOS developer conference: you have to pay attention to data types as of iOS 7. For example, if you use NSInteger on a 64-bit device and save the value to iCloud, and then sync to an older device (say, a 2nd-generation iPad), your app will not behave the same, because that device treats NSInteger as 4 bytes rather than 8, and your calculations will be wrong.
But so far I use NSInteger, because mostly my apps don't use iCloud or don't sync, and it avoids compiler warnings.
Apple uses int for a loop control variable (which is only used to control the loop iterations) because the int data type is fine there, both in size and in the values it can hold for your loop. No need for a platform-dependent data type here. For a loop control variable even a 16-bit int will do most of the time.
Apple uses NSInteger for a function return value or a function argument because in this case the data type's size matters: with a function you are communicating/passing data to other programs or to other pieces of code.
Apple uses NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
The only thing I would use NSInteger for is passing values to and from an API that specifies it. Other than that it has no advantage over an int or a long. At least with an int or a long you know what format specifiers to use in a printf or similar statement.
As a continuation of Irfan's answer:
sizeof(NSInteger)
equals the processor's word size. It is much simpler and faster for the processor to operate on whole words.
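To make that concrete, here is a small sketch of mine (my_integer is a hypothetical stand-in, not NSInteger itself) that mirrors the typedefs quoted earlier and prints the sizes you would see under LP64 versus ILP32:
#include <stdio.h>
#if defined(__LP64__)
typedef long my_integer;    // what NSInteger maps to on 64-bit
#else
typedef int my_integer;     // what NSInteger maps to on 32-bit
#endif
int main(void)
{
    printf("sizeof(my_integer) = %zu\n", sizeof(my_integer));
    printf("sizeof(int)        = %zu\n", sizeof(int));
    printf("sizeof(long)       = %zu\n", sizeof(long));
    return 0;
}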
