What does the function 'vDSP_vfltu16' (in vDSP) actually do?

It is a function in vDSP on iOS. The reference says this function
Converts an array of unsigned 16-bit integers to single-precision floating-point values.
But what is actually created? For example, I have a series of 16-bit integers storing phonetic samples. What do I actually get when I call this function?

Nothing is created. You pass an array of N unsigned 16-bit short ints in the __vDSP_A parameter and an array of N floats in the __vDSP_C parameter, and the routine converts the unsigned short int values to floats. E.g. if __vDSP_A[0] = 42, then __vDSP_C[0] will be set to 42.0f. The stride parameters __vDSP_I and __vDSP_K give the spacing between the elements read and written (1 means a contiguous buffer), and __vDSP_N is the number of elements to convert.
void vDSP_vfltu16 (
    unsigned short *__vDSP_A,
    vDSP_Stride     __vDSP_I,
    float          *__vDSP_C,
    vDSP_Stride     __vDSP_K,
    vDSP_Length     __vDSP_N
);
There is reasonable documentation on developer.apple.com: https://developer.apple.com/library/mac/#documentation/Accelerate/Reference/vDSPRef/Reference/reference.html
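For instance, converting a short buffer of samples looks like this (a minimal sketch; the sample values are made up, and the strides of 1 mean "process every element"):
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    // Four unsigned 16-bit samples, e.g. from an audio capture.
    unsigned short samples[4] = { 0, 42, 1000, 65535 };
    float converted[4];

    // Stride 1 for input and output, 4 elements in total.
    vDSP_vfltu16(samples, 1, converted, 1, 4);

    for (int i = 0; i < 4; i++)
        printf("%.1f\n", converted[i]); // prints 0.0, 42.0, 1000.0, 65535.0
    return 0;
}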

Related

Writing Uint16List via IOSink.add, what's the result?

Trying to write audio samples to a file.
I have a list of 16-bit ints:
Uint16List _samples = new Uint16List(0);
I add elements to this list as samples come in.
Then I can write to an IOSink like so:
IOSink _ios = ...
List<int> _toWrite = [];
_toWrite.addAll(_samples);
_ios.add(_toWrite);
or
_ios.add(_samples);
just works, with no issues with types, despite the signature of add taking List<int> and not Uint16List.
From what I've read, in Dart the 'int' type is 64-bit.
Are both writes above identical? Do they produce packed 16-bit ints in this file?
A Uint16List is-a List<int>. It's a list of integers which truncates writes to 16-bits, and always reads out 16-bit integers, but it is a list of integers.
If you copy those integers to a plain growable List<int>, it will contain the same integer values.
So, doing ios.add(_samples) will do the same as ios.add(_toWrite), and most likely neither does what you want.
The IOSink's add method expects a list of bytes. So, it will take a list of integers and assume that they are bytes. That means that it will only use the low 8 bits of each integer, which will likely sound awful if you try to play that back as a 16-bit audio sample.
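You can see the low-8-bits effect without writing a file at all; Uint8List.fromList truncates to bytes much like the byte sink described above (a small sketch with made-up sample values):
import 'dart:typed_data';

void main() {
  var samples = Uint16List.fromList([0x1234, 0xABCD]);

  // Copying into a plain List<int> keeps the full 16-bit values...
  var copy = List<int>.of(samples);
  print(copy); // [4660, 43981]

  // ...but treating those integers as bytes keeps only the low 8 bits:
  var asBytes = Uint8List.fromList(copy);
  print(asBytes); // [52, 205], i.e. [0x34, 0xCD] -- the high bytes are lost
}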
If you want to store all 16 bits, you need to figure out how to store each 16-bit value in two bytes. The easy choice is to just assume that the platform byte order is fine, and do ios.add(_samples.buffer.asUint8List(_samples.offsetInBytes, _samples.lengthInBytes)). This will make a view of the 16-bit data as twice as many bytes, then write those bytes.
The endianness of those bytes (is the high byte first or last) depends on the platform, so if you want to be safe, you can convert the bytes to a fixed byte order first:
if (Endian.host == Endian.little) {
  ios.add(
      _samples.buffer.asUint8List(_samples.offsetInBytes, _samples.lengthInBytes));
} else {
  var byteData = ByteData(_samples.length * 2);
  for (int i = 0; i < _samples.length; i++) {
    byteData.setUint16(i * 2, _samples[i], Endian.little);
  }
  var littleEndianData = byteData.buffer.asUint8List(0, _samples.length * 2);
  ios.add(littleEndianData);
}
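Going the other way, reading such a file back into samples uses the same view trick in reverse (a sketch using dart:io synchronously, assuming the data was written little-endian as above):
import 'dart:io';
import 'dart:typed_data';

Uint16List readSamples(String path) {
  var bytes = File(path).readAsBytesSync();
  var byteData = ByteData.sublistView(bytes);
  var samples = Uint16List(bytes.length ~/ 2);
  for (int i = 0; i < samples.length; i++) {
    // Decode each pair of bytes as one little-endian 16-bit sample.
    samples[i] = byteData.getUint16(i * 2, Endian.little);
  }
  return samples;
}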

Is there a constant for max/min int/double value in dart?

Is there a constant in Dart that tells us the max/min int/double value?
Something like double.infinity but instead double.maxValue?
For double there are:
double.maxFinite (1.7976931348623157e+308)
double.minPositive (5e-324)
In Dart 1 there was no such number for int: the size of integers was limited only by available memory.
In Dart 2 int is limited to 64 bits, but it doesn't look like there are constants for its bounds yet.
For dart2js different rules apply:
When compiling to JavaScript, integers are restricted to 53 significant bits because all JavaScript numbers are double-precision floating point values.
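A quick check of what is available (Dart 2 on the VM; dart2js would reject the 64-bit literal because it is not exactly representable as a double):
void main() {
  print(double.maxFinite);   // 1.7976931348623157e+308
  print(double.minPositive); // 5e-324

  // No int.maxValue constant, but on the VM the 64-bit bound can be written out:
  const int intMax = 0x7fffffffffffffff;
  print(intMax); // 9223372036854775807
}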
I found this in the dart_numerics package.
Here is the int64 max value:
const int intMaxValue = 9223372036854775807;
For Dart on the web it is 2^53 - 1:
const int intMaxValue = 9007199254740991;
"The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent integers between -(2^53 - 1) and 2^53 - 1." see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
No, Dart does not have a built-in constant for the max value of an int, but here is how to get the max value anyway.
Because ints are signed in Dart, they have a range (inclusive) of [-2^31, 2^31-1] if 32-bit and [-2^63, 2^63 - 1] if 64-bit. The first bit in an int is called the 'sign-bit'. If the sign-bit is 1, the int is negative; if 0, the int is non-negative. In the max int, all the bits are 1 except the sign bit, which is 0. We can most easily achieve this by writing the int in hexadecimal notation (integers preceded with '0x' are hexadecimal):
int max = 0x7fffffff; // 32-bit
int max = 0x7fffffffffffffff; // 64-bit
In hexadecimal (a.k.a. hex), each hex digit specifies a group of 4 bits: there are 16 hex digits (0-f) and 2 binary digits (0-1), and 2^4 = 16. It is a compile error to specify more bits than the bitness; if fewer bits than the bitness are specified, the hexadecimal integer is padded with 0s until the number of bits matches the bitness. So, to indicate that all the bits are 1 except for the sign-bit, we need bitness / 4 hex characters (e.g. 16 for a 64-bit architecture). The first hex character represents the binary integer '0111' (7), which is 0x7, and all the other hex characters represent the binary integer '1111' (15), or 0xf.
Alternatively, you could use bit-shifting; the expression below builds the same 0111...1 bit pattern without ever shifting a 1 into the sign bit.
int bitness = ... // presumably 64
int max = (((1 << (bitness - 2)) - 1) << 1) + 1;
Use double.maxFinite and convert it to an int (note this cannot be const, since toInt() is not a constant expression):
final maxValue = double.maxFinite.toInt();
final minValue = -double.maxFinite.toInt();

Convert first two bytes of Lua string (in big-endian format) to unsigned short number

I want to have a Lua function that takes a string argument. The string has N+2 bytes of data. The first two bytes hold the length in big-endian format, and the remaining N bytes contain the data.
Say the data is "abcd", so the string is 0x00 0x04 a b c d.
In Lua, this string is an input argument to me.
How can I calculate the length in an optimal way?
So far I have tried the code below:
function calculate_length(s)
  local len = string.len(s)
  if len >= 2 then
    local first_byte = s:byte(1)
    local second_byte = s:byte(2)
    -- len = ((first_byte & 0xFF) << 8) | (second_byte & 0xFF)
    len = second_byte
  else
    len = 0
  end
  return len
end
See the commented-out line (that's how I would have done it in C).
How do I achieve that line in Lua?
The number of data bytes in your string s is #s-2 (assuming even a string with no data has a length of two bytes, each with a value of 0). If you really need to use those header bytes, you could compute:
len = first_byte * 256 + second_byte
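Putting that together (a sketch that works on any Lua version, since it only uses arithmetic):
function calculate_length(s)
  if #s < 2 then return 0 end
  -- big-endian: the first byte is the high-order byte
  return s:byte(1) * 256 + s:byte(2)
end

print(calculate_length("\0\4abcd")) --> 4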
When it comes to strings in Lua, a byte is a byte as this excerpt about strings from the Reference Manual makes clear:
The type string represents immutable sequences of bytes. Lua is 8-bit clean: strings can contain any 8-bit value, including embedded zeros ('\0'). Lua is also encoding-agnostic; it makes no assumptions about the contents of a string.
This is important if using the string.* library:
The string library assumes one-byte character encodings.
If the internal representation in Lua of your number is important, the following excerpt from the Lua Reference Manual may be of interest:
The type number uses two internal representations, or two subtypes, one called integer and the other called float. Lua has explicit rules about when each representation is used, but it also converts between them automatically as needed.... Therefore, the programmer may choose to mostly ignore the difference between integers and floats or to assume complete control over the representation of each number. Standard Lua uses 64-bit integers and double-precision (64-bit) floats, but you can also compile Lua so that it uses 32-bit integers and/or single-precision (32-bit) floats.
In other words, the 2-byte "unsigned short" C data type does not exist in Lua. In standard Lua, integers are stored using the "long long" type (8-byte signed).
Lastly, as lhf pointed out in the comments, bitwise operations were added to Lua in version 5.3, and if lhf is the lhf, he should know ;-)
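So on 5.3 or later the commented C line translates almost verbatim, and string.unpack can even read the big-endian 16-bit field directly (a sketch assuming Lua 5.3+):
-- native bitwise operators (Lua 5.3+)
function calculate_length(s)
  if #s < 2 then return 0 end
  return (s:byte(1) << 8) | s:byte(2)
end

-- string.unpack: ">I2" means a big-endian 2-byte unsigned integer
function calculate_length_unpack(s)
  return (string.unpack(">I2", s))
end

print(calculate_length("\0\4abcd"))        --> 4
print(calculate_length_unpack("\0\4abcd")) --> 4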

Handling 64-bit integers on a 32-bit iPhone

I would like to handle 64-bit unsigned integers on an iPhone 4s (which of course has a 32-bit processor).
When trying to work with 64 bit unsigned integers, e.g. Twitter IDs, I have the following problem:
// Array holding the 64 bit integer IDs of the Tweets in a timeline:
NSArray *Ids = [timelineData valueForKeyPath:@"id"];
// I would like to get the minimum of these IDs:
NSUInteger minId = (NSUInteger) Ids.lastObject;
The array Ids contains the following numbers (= Tweet Ids):
491621469123018752,
491621468917477377,
491621465544851456,
491621445655867393
However, minId returns the incorrect value of 399999248 (instead of 491621445655867393)
How can I find the minimum or the last object in an Array of 64 bit integers on an iPhone 4s?
You need to use a type that is always 64 bits instead of NSUInteger. You can use uint64_t or unsigned long long. You also need to get the integer value out of the NSNumber (arrays can't store C types). To do this you need to call:
uint64_t minID = [Ids.lastObject longLongValue];
Edit: changed to use uint64_t in the example code, as it has been correctly pointed out that this shows the intent better.
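Note that lastObject only gives you the last element; to get the actual minimum you still have to scan the array yourself. A minimal sketch, assuming Ids holds NSNumber objects as in the question:
uint64_t minId = UINT64_MAX;
for (NSNumber *tweetId in Ids) {
    uint64_t value = tweetId.unsignedLongLongValue;
    if (value < minId) {
        minId = value;
    }
}
// minId is now 491621445655867393 for the IDs above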

MAX / MIN function in Objective-C that avoids casting issues

I had code in my app that looks like the following. I got some feedback about a bug when, to my horror, I put a debugger on it and found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method in iOS for performing MIN/MAX where the compiler would have warned about mismatched types? It seems like this SHOULD have been a compile-time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
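You can see the wraparound in isolation (the exact value assumes a 32-bit unsigned int, as on 32-bit iOS):
unsigned int length = 5;   // like [@"short" length] on a 32-bit platform
NSLog(@"%u", length - 10); // prints 4294967291, i.e. 2^32 - 5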
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] is 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(#"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then you'd get the same (incorrect) result on a 32-bit platform (iOS), but in a 64-bit Mac app it would work just fine! (Explanation: long is a 64-bit type there, so it can represent all 32-bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will get a warning that looks like the following:
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This lets you place the casts in the right places, if necessary, and you shouldn't need to rewrite the MAX or MIN macros.
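If you want something stricter than the macro, one option is a small helper with explicit signed parameters; with -Wsign-conversion enabled, an unsigned argument at the call site gets flagged as well. A sketch (MaxSigned is a made-up name, not a built-in iOS API):
static inline NSInteger MaxSigned(NSInteger a, NSInteger b) {
    return a > b ? a : b;
}

// Usage: the conversion is now explicit and local to the call site.
NSString *test = @"short";
NSInteger calExpected = MaxSigned((NSInteger)test.length - 10, 0); // 0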
