Converting Bytes to signed short and int value to bytes - iOS

I have the int values of two bytes, for example 254 = 0xFE and 112 = 0x70.
I need to convert them to a signed short; in this case the signed short value should be -400.
Then, after changing that value, I have an integer, for example -410, that I need to convert back to two bytes.
How can I achieve that on iOS?

If the bytes are in the native architecture endianness, then it's as simple as
uint8_t *p = someAddress;   // points at the two bytes
short value = *(short *)p;  // read them as a native-endian signed short
value = -410;               // modify the value
*(short *)p = value;        // write it back as two bytes
However, if the bytes are in a foreign endianness, you have to swap the bytes of the integer, which is slower.
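As a rough sketch of one of many possible approaches (an illustration, not the original answer's example; it assumes the two bytes arrive in big-endian order and uses plain shifts, so it works regardless of the host's endianness):

#include <stdint.h>

uint8_t hi = 0xFE, lo = 0x70;                         // the two incoming bytes
int16_t value = (int16_t)(((uint16_t)hi << 8) | lo);  // -400

value = -410;                                         // change the value
hi = (uint8_t)(((uint16_t)value >> 8) & 0xFF);        // split back into two bytes
lo = (uint8_t)((uint16_t)value & 0xFF);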

Related

Fast UInt to Float conversion in Swift

I am doing some real-time image analysis on a live video stream. I am using vImage to calculate histograms and vDSP for some further processing. I have Objective-C code that has been working well over the years. I am now about to convert it to Swift, and while it works, it is too slow. I have found that the main problem is converting the vImage histogram, which is UInt (vImagePixelCount), to the Float that vDSP can handle. In Objective-C I am using vDSP to do the conversion:
err = vImageHistogramCalculation_Planar8(&vBuffY,histogramY, 0);
vDSP_vfltu32((const unsigned int*)histogramY,2,histFloatY,1,256);
However, the vImage histogram is UInt, not UInt32, so I can't use vDSP_vfltu32 in Swift. Instead I am using
let err = vImageHistogramCalculation_Planar8(&vBuffY, &histogramY, 0)
let histFloatY = histogramY.compactMap{ Float($0) }
The problem is that this code is more than 100 times slower than the objective-C version. Are there any alternatives that are faster?
vImageHistogramCalculation_Planar8() writes the histogram into a buffer with 256 elements of type vImagePixelCount, which is a type alias for unsigned long in C, and that is a 64-bit integer on 64-bit platforms.
Your Objective-C code “cheats” by casting the unsigned long pointer to an unsigned int pointer in the call to vDSP_vfltu32() and setting the stride to 2. So what happens here is that the lower 32 bits of each unsigned long are converted to a float. That works as long as the counts do not exceed the value 2^32 - 1.
You can do exactly the same in Swift, except that the type casting is done by “rebinding” the memory:
let err = vImageHistogramCalculation_Planar8(&vBuffY, &histogramY, 0)
histogramY.withUnsafeBytes {
    let uint32ptr = $0.bindMemory(to: UInt32.self)
    vDSP_vfltu32(uint32ptr.baseAddress!, 2, &histFloatY, 1, 256)
}

Formatting an integer value

I'm obtaining an int value from a UITextField: [self.dbRef.text intValue];
I then want to format that value so that a decimal point precedes the number, i.e. if [self.dbRef.text intValue] returns 4, I need that value to be 0.04.
So far I have tried various ways including
float Y = ([self.dbRef.text intValue]/100);
slice.value = Y;
NSLog(#"float Y value = %f",Y);
returns zero
NSString* formatedTotalApplianceString = [NSString stringWithFormat:@"0.%@", self.dbRef.text];
NSLog(@"formated string = %@", formatedTotalApplianceString);
int totalAppliances = [formatedTotalApplianceString intValue];
NSLog(@"Resulting int value = %d", [formatedTotalApplianceString intValue]);
slice.value = totalAppliances;
NSLog(@"total appliances int value = %d", totalAppliances);
returns zero
You're doing an integer division, so the 0 value is correct in that context, as integers cannot represent fractions (unless you're doing fixed-point arithmetic, but that's a different can of worms). You need to do a floating-point division, for example:
float Y = ([self.dbRef.text floatValue]/100.0f);
Either the [self.dbRef.text floatValue] or the 100.0f alone will turn this into a float division, because if the other operand were an int it would automatically be converted to float. But the "best" way is to have both operands of the same type.
Change
float Y = [self.dbRef.text intValue]/100;
to
float Y = ((float)[self.dbRef.text intValue])/100;
in your first variant.
Dividing an int by an int gives an int result even if you then assign it to a float; 4/100 = 0 in that case.
The problem with [self.dbRef.text intValue]/100 is that it's an integer division. It drops the fraction. One way to work around it is to divide by 100.0:
[self.dbRef.text intValue]/100.0
However, this is not the most efficient way of doing it if all you need is adding a zero in front of a fraction: you could avoid float altogether by padding your printed int to two positions with leading zeros:
// If text is 4, the code below prints 0.04
NSLog(#"0.%02d", [self.dbRef.text intValue]);
The first code returns zero because you are performing an integer division, which produces an integer result. You should cast the value to a float.
The second code also returns zero because you're asking for the intValue of a string representing a fractional value, so everything from the decimal point on is discarded.
NSString also has a floatValue method; use it to get a floating-point value. Once divided by 100 you will still have a floating-point value (in a division, if either the dividend or the divisor is a float and the other is an integer, the integer gets promoted to float):
float Y = ([self.dbRef.text floatValue]/100);
slice.value = Y;

Stored UIImage pixel data into C array, unable to determine array's element count

I initialized the array like so
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, bounds);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
However, when I tried checking the count through an NSLog, I always get 4 (4/1, specifically).
int count = sizeof(rawData)/sizeof(rawData[0]);
NSLog(#"%d", count);
Yet when I NSLog the value of individual elements, it returns non-zero values.
ex.
CGFloat f1 = rawData[15];
CGFloat f2 = rawData[n]; // where n is image width*height*4
//I wasn't expecting this to work since the last element should be n-1
Finally, I tried
int n = lipBorder.size.width *lipBorder.size.height*4*2; //lipBorder holds the image's dimensions, I tried multiplying by 2 because there are 2 pixels for every CGPoint in retina
CGFloat f = rawData[n];
This would return different values each time for the same image, (ex. 0.000, 115.000, 38.000).
How do I determine the count / how are the values being stored into the array?
rawData is a pointer to unsigned char, as such its size is 32 bits (4 bytes)[1]. rawData[0] is an unsigned char, as such its size is 8 bits (1 byte). Hence, 4/1.
You've probably seen this done with arrays before, where it does work as you would expect:
unsigned char temp[10] = {0};
NSLog(#"%d", sizeof(temp)/sizeof(temp[0])); // Prints 10
Note, however, that you are dealing with a pointer to unsigned char, not an array of unsigned char - the semantics are different, hence why this doesn't work in your case.
If you want the size of your buffer, you'll be much better off simply using height * width * 4, since that's what you passed to malloc anyway. If you really must, you could divide that by sizeof(char) or sizeof(rawData[0]) to get the number of elements, but since they're chars you'll get the same number anyway.
Now, rawData is just a chunk of memory somewhere. There's other memory before and after it. So, if you attempt to do something like rawData[height * width * 4], what you're actually doing is attempting to access the next byte of memory after the chunk allocated for rawData. This is undefined behaviour, and can result in random garbage values being returned[2] (as you've observed), some "unassigned memory" marker value being returned, or a segmentation fault occurring.
[1]: iOS is a 32-bit platform
[2]: probably whatever value was put into that memory location last time it was legitimately used.
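If it helps to see that spelled out, here is a minimal sketch of carrying the element count alongside the pointer instead of trying to recover it with sizeof (it reuses the width and height variables from the question's code; it is not part of either answer):

NSUInteger bytesPerPixel = 4;
size_t bufferLength = height * width * bytesPerPixel;  // the same expression passed to malloc
unsigned char *rawData = malloc(bufferLength);

// bufferLength is the element count; rawData[bufferLength - 1] is the last valid index.
for (size_t i = 0; i < bufferLength; i++) {
    rawData[i] = 0;
}
free(rawData);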
The pointer returned by malloc is a void * pointer, i.e. just an address in memory. sizeof(rawData) measures the size of that pointer (4 bytes here), not the size of the buffer it points to, which is why your count always comes out as 4.
You also said that you tried
int n = lipBorder.size.width *lipBorder.size.height*4*2; //lipBorder holds the image's dimensions, I tried multiplying by 2 because there are 2 pixels for every CGPoint in retina
CGFloat f = rawData[n];
and were receiving different values each time. This behavior is to be expected: n lies past the end of your buffer, so you are reading memory that does not belong to it, and you get whatever happens to be stored at that out-of-bounds location at the time. C in no way prevents you from accessing any memory within your program's address space; only if you accessed memory that is off limits to your process would you receive a segmentation fault.
You can therefore read element n + 1, n + 2, or n + whatever; it only means that you are accessing memory that is past the end of your array.
Incrementing the pointer rawData moves the memory address by one byte; incrementing an int pointer moves the memory address by 4 bytes (sizeof(int)).
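To make that stride difference concrete, a small standalone sketch (not from the original answer):

#include <stdio.h>

int main(void) {
    unsigned char bytes[8];
    int words[8];

    unsigned char *bp = bytes;
    int *ip = words;

    // Pointer arithmetic moves in units of the pointed-to type.
    printf("%td\n", (char *)(bp + 1) - (char *)bp); // 1 byte
    printf("%td\n", (char *)(ip + 1) - (char *)ip); // sizeof(int), typically 4 bytes
    return 0;
}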

Any shortcut to generate 16 bytes of random data as an initialisation vector for the AES-128 CBC method?

Is there any easy way to generate this kind of random data or string, like an existing function?
You can use SecRandomCopyBytes from the Security framework.
This function reads from /dev/random to obtain an array of cryptographically secure random bytes.
uint8_t vector[16];
SecRandomCopyBytes(kSecRandomDefault, 16, vector);
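SecRandomCopyBytes also returns a status code (0, i.e. errSecSuccess, on success), so it is worth checking; a minimal sketch:

#include <stdint.h>
#include <Security/Security.h>

uint8_t vector[16];
if (SecRandomCopyBytes(kSecRandomDefault, sizeof(vector), vector) != errSecSuccess) {
    // Random bytes could not be generated; don't proceed with the encryption.
}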
Four successive calls to arc4random? So something like:
uint32_t initialisationVector[4];
for (int c = 0; c < 4; c++)
    initialisationVector[c] = arc4random();
// 16 bytes of random values now sit in initialisationVector
Or, as per Martin R's comment, just do it in one call:
uint8_t initialisationVector[16];
arc4random_buf(initialisationVector, 16);

Clipping when converting signed 16-bit PCM samples to unsigned 8-bit PCM samples

I have signed mono 16-bit PCM audio samples stored in a SInt16 buffer and I am trying to convert them to unsigned mono 8-bit PCM samples stored in a UInt8 buffer. I've written the following basically working code:
for (int i = 0; i < numSamples; i++) {
    SInt8 tempSigned8Bit = signed16BitBuffer[i]/127; // In 2 passes
    unsigned8BitBuffer[i] = tempSigned8Bit + 127;    // for clarity
}
However, I can hear clipping at the maximum amplitudes in the resulting audio, or at least that is my impression of where the distortion is occurring. Is this an artifact of the re-quantization or do I need to include some sort of clamping as described in this question about a similar conversion but without any signedness conversion:
Convert 16 bit pcm to 8 bit
Bitwise optimizations are unnecessary, but I certainly wouldn't say no to them.
This will fail for large values because you need to divide by 256, not 127: a full-scale sample gives 32767/127 = 258, which overflows the SInt8 range and wraps around, and that is the clipping you hear. Also the offset needs to be 128, not 127.
for (int i = 0; i < numSamples; i++) {
    SInt8 tempSigned8Bit = signed16BitBuffer[i] / 256;
    unsigned8BitBuffer[i] = tempSigned8Bit + 128;
}
The conversion for +/- full scale and zero looks like this:
Signed 16-bit    Divide       Add
sample           by 256       128

 32767     ->      127   ->   255    ; Full scale +
     0     ->        0   ->   128    ; 0
-32768     ->     -128   ->     0    ; Full scale -
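If you want to convince yourself of those endpoints, here is a small standalone check of the same arithmetic (using int8_t in place of SInt8 so it compiles outside an iOS project; it is an illustration, not part of the original answer):

#include <assert.h>
#include <stdint.h>

int main(void) {
    // Full scale +, zero, and full scale - from the table above.
    assert((int8_t)( 32767 / 256) + 128 == 255);
    assert((int8_t)(     0 / 256) + 128 == 128);
    assert((int8_t)(-32768 / 256) + 128 ==   0);
    return 0;
}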

Resources