I inherited an image filter app, and I'm trying to update it. Apple required me to change the architecture to support 64-bit. On 64-bit phones, the images have vertical black bars (see below); 32-bit phones work as expected.
It seems like this is an issue with the old code assuming a 32-bit system, but how can I fix it?
I've narrowed it down to the following code that applies an image curve:
NSUInteger* currentPixel = _rawBytes;
NSUInteger* lastPixel = (NSUInteger*)((unsigned char*)_rawBytes + _bufferSize);
while(currentPixel < lastPixel)
{
SET_RED_COMPONENT_RGBA(currentPixel, _reds[RED_COMPONENT_RGBA(currentPixel)]);
SET_GREEN_COMPONENT_RGBA(currentPixel, _greens[GREEN_COMPONENT_RGBA(currentPixel)]);
SET_BLUE_COMPONENT_RGBA(currentPixel, _blues[BLUE_COMPONENT_RGBA(currentPixel)]);
++currentPixel;
}
Here are the macro definitions:
#define ALPHA_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 24)
#define BLUE_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 16)
#define GREEN_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 8)
#define RED_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 0)
#define SET_ALPHA_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0x00FFFFFF) | ((unsigned long)value << 24)
#define SET_BLUE_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0xFF00FFFF) | ((unsigned long)value << 16)
#define SET_GREEN_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0xFFFF00FF) | ((unsigned long)value << 8)
#define SET_RED_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0xFFFFFF00) | ((unsigned long)value << 0)
#define BLUE_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 24)
#define GREEN_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 16)
#define RED_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 8)
#define ALPHA_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 0)
#define SET_BLUE_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0x00FFFFFF) | ((unsigned long)value << 24)
#define SET_GREEN_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0xFF00FFFF) | ((unsigned long)value << 16)
#define SET_RED_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0xFFFF00FF) | ((unsigned long)value << 8)
#define SET_ALPHA_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0xFFFFFF00) | ((unsigned long)value << 0)
How should I change the above to work on either a 32- or 64-bit device? Do I need to include more code?
NSUInteger changes size between 32- and 64-bit devices: it is 4 bytes on 32-bit and 8 bytes on 64-bit. The code assumes it is walking 4-byte RGBA pixels (one byte per channel), so on a 64-bit device each read covers two pixels; the macros' 32-bit masks zero out the second pixel's bytes, and the 8-byte pointer increment then skips it entirely, which is exactly the pattern of black columns you are seeing.
Just be explicit about the size:
uint32_t *currentPixel = (uint32_t *)_rawBytes;
uint32_t *lastPixel = (uint32_t *)((unsigned char *)_rawBytes + _bufferSize);
and the calculation should work correctly on both types of device.
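Putting it together, only the pointer type changes; the loop body and macros stay the same. A minimal sketch, assuming _rawBytes points at tightly packed 4-byte RGBA pixels and _bufferSize is the buffer length in bytes (uint32_t comes from <stdint.h>):
uint32_t *currentPixel = (uint32_t *)_rawBytes;
uint32_t *lastPixel = (uint32_t *)((unsigned char *)_rawBytes + _bufferSize);
while (currentPixel < lastPixel)
{
    // Each iteration now reads and writes exactly one 4-byte pixel on both 32- and 64-bit devices.
    SET_RED_COMPONENT_RGBA(currentPixel, _reds[RED_COMPONENT_RGBA(currentPixel)]);
    SET_GREEN_COMPONENT_RGBA(currentPixel, _greens[GREEN_COMPONENT_RGBA(currentPixel)]);
    SET_BLUE_COMPONENT_RGBA(currentPixel, _blues[BLUE_COMPONENT_RGBA(currentPixel)]);
    ++currentPixel;
}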
I'm writing an app for iOS using the TI SensorTag2. Right now I'm stuck on converting the data I read over Bluetooth into Float values for the app.
The following code is from TI's released source code for an Objective-C app.
-(NSString *) calcValue:(NSData *) value {
char vals[value.length];
[value getBytes:vals length:value.length];
Point3D gyroPoint;
gyroPoint.x = ((float)((int16_t)((vals[0] & 0xff) | (((int16_t)vals[1] << 8) & 0xff00)))/ (float) 32768) * 255 * 1;
gyroPoint.y = ((float)((int16_t)((vals[2] & 0xff) | (((int16_t)vals[3] << 8) & 0xff00)))/ (float) 32768) * 255 * 1;
gyroPoint.z = ((float)((int16_t)((vals[4] & 0xff) | (((int16_t)vals[5] << 8) & 0xff00)))/ (float) 32768) * 255 * 1;
self.gyro = gyroPoint;
Point3D accPoint;
accPoint.x = (((float)((int16_t)((vals[6] & 0xff) | (((int16_t)vals[7] << 8) & 0xff00)))/ (float) 32768) * 8) * 1;
accPoint.y = (((float)((int16_t)((vals[8] & 0xff) | (((int16_t)vals[9] << 8) & 0xff00))) / (float) 32768) * 8) * 1;
accPoint.z = (((float)((int16_t)((vals[10] & 0xff) | (((int16_t)vals[11] << 8) & 0xff00)))/ (float) 32768) * 8) * 1;
self.acc = accPoint;
Point3D magPoint;
magPoint.x = (((float)((int16_t)((vals[12] & 0xff) | (((int16_t)vals[13] << 8) & 0xff00))) / (float) 32768) * 4912);
magPoint.y = (((float)((int16_t)((vals[14] & 0xff) | (((int16_t)vals[15] << 8) & 0xff00))) / (float) 32768) * 4912);
magPoint.z = (((float)((int16_t)((vals[16] & 0xff) | (((int16_t)vals[17] << 8) & 0xff00))) / (float) 32768) * 4912);
self.mag = magPoint;
return [NSString stringWithFormat:@"ACC : X: %+6.1f, Y: %+6.1f, Z: %+6.1f\nMAG : X: %+6.1f, Y: %+6.1f, Z: %+6.1f\nGYR : X: %+6.1f, Y: %+6.1f, Z: %+6.1f",self.acc.x,self.acc.y,self.acc.z,self.mag.x,self.mag.y,self.mag.z,self.gyro.x,self.gyro.y,self.gyro.z];
}
When I try to convert this code to Swift, I get the error "Integer literal '65280' overflows when stored into 'Int16'" on the following line:
let xF: Float = ((Float((Int16(bytes[6]) & 0xff) | ((Int16(bytes[7]) << 8) & 0xff00)) / Float(32768)) * 8) * 1
As I understand it, this combines the two Int8 values into a single Int16, so it should work; I just can't find where I made the error. The "& 0xff00" part is what gets flagged, and as I understand it, it is there so that only the upper 8 bits can contain 1s while the rest are 0s.
I had this working with the following code taken from the Android app for the SensorTag2, but that code crashes the app from time to time when I also read data from the gyroscope, which is why I wanted to use the iOS code instead:
let x = (Int16(bytes[7]) << 8) + Int16(bytes[6])
let xF = Float(x) / (32768.0 / 8.0)
Maybe somebody here can point me in the right direction to solve my problem.
One bad thing in this line:
let xF: Float = ((Float((Int16(bytes[6]) & 0xff) | ((Int16(bytes[7]) << 8) & 0xff00)) / Float(32768)) * 8) * 1
is this: ((Int16(bytes[7]) << 8) & 0xff00).
An Int16 can represent numbers in the range -32768...32767, and 0xff00 is 65280; as the error message says, that is too large for an Int16.
(Remember Swift does no implicit conversions for numeric types.)
If you make bytes unsigned:
let bytes = UnsafePointer<UInt8>(data.bytes)
you no longer need the & 0xff masking.
But the sensor values are signed 16-bit, so you still need to reinterpret the combined UInt16 as an Int16 (via init(bitPattern:)) before converting to Float:
let xF: Float = ((Float(Int16(bitPattern: UInt16(bytes[6]) | (UInt16(bytes[7]) << 8))) / Float(32768)) * 8) * 1
(Otherwise, negative sensor readings would make the conversion trap and crash your app.)
I couldn't use the UInt solution, because I get the data as raw bytes that are encoded as signed 16-bit integers.
In Objective-C they are read into a char array, which I had been converting to Int8.
But I found another solution: I now read the bytes straight into an Int16 array, so that two consecutive bytes are treated as a single Int16, which is what the Objective-C code was doing by hand.
var bytes = [Int16](count: data.length / sizeof(Int16), repeatedValue: 0)
data.getBytes(&bytes, length: data.length)
That way I don't need the bit shifting and bit operations.
Now I have an array of 9 Int16 values instead of 18 Int8 values that I would have had to combine into Int16 myself.
A test against the TI SensorTag app also produced the same numbers.
I have a raw binary image file in which every pixel consists of 12 bits of data (grayscale). For example, the first four pixels (six bytes) in the raw file are:
0x0 0xC0
0x1 0x05
0x2 0x5C
0x3 0xC0
0x4 0x05
0x5 0x5C
This corresponds to 4 pixel values with the value 0x5C0 (little endian).
Unfortunately, using the following command:
convert -size 384x184 -depth 12 gray:frame_0.raw out.tiff
interprets the pixel values incorrectly (big endian), resulting in the pixel values 0xC00 0x55C 0xC00 0x55C.
I tried the options -endian LSB and -endian MSB, but unfortunately they only change the output byte order, not the input byte order.
How do I get convert to open the raw image as 12-bit little endian data?
I had a quick try at this. I have no test data, but it should be fairly close, and any errors should be easy to spot with your images:
// pad12to16.c
// Mark Setchell
// Pad 12-bit data to 16-bit
//
// Compile with:
// gcc pad12to16.c -o pad12to16
//
// Run with:
// ./pad12to16 < 12-bit.dat > 16-bit.dat
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>
#include <sys/types.h>
#define BYTESPERREAD 6
#define PIXPERWRITE 4
int main(){
unsigned char buf[BYTESPERREAD];
unsigned short pixel[PIXPERWRITE];
// Read 6 bytes at a time and decode to 4 off 16-bit pixels
while(read(0,buf,BYTESPERREAD)==BYTESPERREAD){
pixel[0] = buf[0] | ((buf[1] & 0xf) << 8);
pixel[1] = (buf[2] << 4) | ((buf[1] & 0xf0) >> 4);
pixel[2] = buf[3] | ((buf[4] & 0xf) << 8);
pixel[3] = (buf[5] << 4) | ((buf[4] & 0xf0) >> 4);
write(1,pixel,PIXPERWRITE*2);
}
return 0;
}
So you would run this (I think):
./pad12to16 < 12-bit.dat | convert -size 384x184 -depth 16 gray:- result.tif
Mark's answer is correct: you'll need some external tool to sort out the data stream. There is usually some sort of padding when working with 12-bit depth, and in the example blob provided, each pair of pixels shares a common byte. The task of splitting the shared byte and shifting what-to-where is fairly easy. This answer complements Mark's answer, and argues that ImageMagick's C API might as well be used.
// my12bit_convert.c
#include <stdio.h>
#include <stdlib.h>
#include <magick/MagickCore.h>
#include <wand/MagickWand.h>
static ExceptionType severity;
#define LEADING_HALF(x) ((x >> 4) & 0xF)
#define FOLLOWING_HALF(x) (x & 0xF)
#define TO_DOUBLE(x) ((double)(x) / (double)0xFFF)
#define IS_OK(x,y) if(x == MagickFalse) { fprintf(stderr, "%s\n", MagickGetException(y, &severity)); }
int main(int argc, const char * argv[]) {
// Prototype vars
int
i,
tmp_pixels[2];
double * pixel_buffer;
size_t
w = 0,
h = 0,
total = 0,
iterator = 0;
ssize_t
x = 0,
y = 0;
const char
* path = NULL,
* output = NULL;
unsigned char read_pixel_chunk[3];
FILE * fh;
MagickWand * wand;
PixelWand * pwand;
MagickBooleanType ok;
// Iterate over arguments and collect size, input, & output.
for ( i = 1; i < argc; i++ ) {
if (argv[i][0] == '-') {
if (LocaleCompare("size", &argv[i][1]) == 0) {
i++;
if (i == argc) {
fprintf(stderr, "Missing `WxH' argument for `-size'.");
return EXIT_FAILURE;
}
GetGeometry(argv[i], &x, &y, &w, &h);
}
} else if (path == NULL){
path = argv[i];
} else {
output = argv[i];
}
}
// Validate to some degree
if ( path == NULL ) {
fprintf(stderr, "Missing input path\n");
return EXIT_FAILURE;
}
if ( output == NULL ) {
fprintf(stderr, "Missing output path\n");
return EXIT_FAILURE;
}
total = w * h;
if (total == 0) {
fprintf(stderr, "Unable to determine size of %s. (use `-size WxH')\n", path);
return EXIT_FAILURE;
}
// Allocated memory and start the party!
pixel_buffer = malloc(sizeof(double) * total);
MagickWandGenesis();
// Read input file, and sort 12-bit pixels.
fh = fopen(path, "rb");
if (fh == NULL) {
fprintf(stderr, "Unable to read `%s'\n", path);
return 1;
}
while(!feof(fh)) {
total = fread(read_pixel_chunk, 3, 1, fh);
if (total) {
// 0xC0 0x05
// ^------' ==> 0x05C0
tmp_pixels[0] = FOLLOWING_HALF(read_pixel_chunk[1]) << 8 | read_pixel_chunk[0];
// 0x05 0x5C
// '------^ ==> 0x05C0
tmp_pixels[1] = read_pixel_chunk[2] << 4 | LEADING_HALF(read_pixel_chunk[1]);
// 0x5C0 / 0xFFF ==> 0.359463
pixel_buffer[iterator++] = TO_DOUBLE(tmp_pixels[0]);
pixel_buffer[iterator++] = TO_DOUBLE(tmp_pixels[1]);
}
}
fclose(fh);
// Create image
wand = NewMagickWand();
pwand = NewPixelWand();
ok = PixelSetColor(pwand, "white");
IS_OK(ok, wand);
// Create new Image
ok = MagickNewImage(wand, w, h, pwand);
IS_OK(ok, wand);
// Import pixels as gray, or intensity, values.
ok = MagickImportImagePixels(wand, x, y, w, h, "I", DoublePixel, pixel_buffer);
IS_OK(ok, wand);
// Save output
ok = MagickWriteImage(wand, output);
IS_OK(ok, wand);
// Clean house
DestroyPixelWand(pwand);
DestroyMagickWand(wand);
MagickWandTerminus();
if (pixel_buffer) {
free(pixel_buffer);
}
return 0;
}
Which can be compiled with
LLVM_CFLAGS=`MagickWand-config --cflags`
LLVM_LDFLAGS=`MagickWand-config --ldflags`
clang $LLVM_CFLAGS $LLVM_LDFLAGS -o my12bit_convert my12bit_convert.c
And usage
./my12bit_convert -size 384x184 frame_0.raw out.tiff
Say I have a NSString
NSString *myIpAddress = @"192.168.1.1";
I want to convert this to an integer, increment it, and then convert it back to an NSString.
Does iOS have an easy way to do this other than using bit masks, shifting, and sprintf?
Something like this is what I do in my app:
NSArray *ipExplode = [string componentsSeparatedByString:@"."];
int seg1 = [ipExplode[0] intValue];
int seg2 = [ipExplode[1] intValue];
int seg3 = [ipExplode[2] intValue];
int seg4 = [ipExplode[3] intValue];
uint32_t newIP = 0;
newIP |= ((uint32_t)(seg1 & 0xFF)) << 24;
newIP |= ((uint32_t)(seg2 & 0xFF)) << 16;
newIP |= ((uint32_t)(seg3 & 0xFF)) << 8;
newIP |= ((uint32_t)(seg4 & 0xFF)) << 0;
newIP++;
NSString *newIPStr = [NSString stringWithFormat:@"%u.%u.%u.%u",
((newIP >> 24) & 0xFF),
((newIP >> 16) & 0xFF),
((newIP >> 8) & 0xFF),
((newIP >> 0) & 0xFF)];
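If pulling in the BSD socket headers is acceptable, a shorter route is to let inet_pton/inet_ntop (from <arpa/inet.h>, available on iOS) do the parsing and formatting. A sketch, assuming the input is a valid dotted-quad IPv4 address; bridging to and from NSString via UTF8String / stringWithUTF8String: is left out:
#include <arpa/inet.h>

const char *ip = "192.168.1.1";          // e.g. [myIpAddress UTF8String]
struct in_addr addr;
if (inet_pton(AF_INET, ip, &addr) == 1) {
    uint32_t host = ntohl(addr.s_addr);  // network byte order -> host order
    host += 1;                           // increment the address
    addr.s_addr = htonl(host);           // back to network byte order
    char buf[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &addr, buf, sizeof(buf));
    // buf now holds "192.168.1.2"
}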
I use vImageConvert_RGB888toPlanar8 and vImageConvert_Planar8toRGB888 from Accelerate.framework to convert RGB24 to BGR24, but when the data to transform is large, such as 3M or 4M, this takes about 10 ms. Does anyone know a fast enough approach? My code looks like this:
- (void)transformRGBToBGR:(const UInt8 *)pict{
rgb.data = (void *)pict;
vImage_Error error = vImageConvert_RGB888toPlanar8(&rgb,&red,&green,&blue,kvImageNoFlags);
if (error != kvImageNoError) {
NSLog(@"vImageConvert_RGB888toPlanar8 error");
}
error = vImageConvert_Planar8toRGB888(&blue,&green,&red,&bgr,kvImageNoFlags);
if (error != kvImageNoError) {
NSLog(@"vImageConvert_Planar8toRGB888 error");
}
free((void *)pict);
}
With a RGB888toPlanar8 call you scatter the data and then gather it once again, which is very, very bad for performance. If the 33% memory overhead is affordable, try using the RGBA format and permuting the B/R bytes in place.
If you want to avoid that 33% overhead, I might suggest the following: iterate over all the pixels, but read a multiple of 4 bytes at a time (since lcm(3,4) = 12, that is 3 dwords per 4 pixels).
uint8_t* src_image;
uint8_t* dst_image;
uint32_t* src = (uint32_t*)src_image;
uint32_t* dst = (uint32_t*)dst_image;
uint32_t v1, v2, v3;
uint32_t nv1, nv2, nv3;
for(int i = 0 ; i < num_pixels / 4 ; i++) // 12 bytes = 4 pixels per iteration
{
// read 12 bytes
v1 = *src++;
v2 = *src++;
v3 = *src++;
// shuffle bits in the pixels
// [R1 G1 B1 R2 | G2 B2 R3 G3 | B3 R4 G4 B4]
nv1 = // [B1 G1 R1 B2]
((v1 >> 16) & 0xFF) | (v1 & 0xFF00) | ((v1 & 0xFF) << 16) | ((v2 & 0xFF00) << 16);
nv2 = // [G2 R2 B3 G3]
(v2 & 0xFF) | ((v1 >> 16) & 0xFF00) | ((v3 & 0xFF) << 16) | (v2 & 0xFF000000);
nv3 = // [R3 B4 G4 R4]
((v2 >> 16) & 0xFF) | ((v3 >> 16) & 0xFF00) | (v3 & 0xFF0000) | ((v3 & 0xFF00) << 16);
// write 12 bytes
*dst++ = nv1;
*dst++ = nv2;
*dst++ = nv3;
}
Even better can be done with NEON intrinsics.
See this link from ARM's website to see how the 24-bit swapping is done.
The BGR-to-RGB can be done in-place like this:
void neon_asm_convert_BGR_TO_RGB(uint8_t* img, int numPixels24)
{
// numPixels24 = number of 24-byte (8-pixel) chunks to process
__asm__ volatile(
"0: \n"
"# load 3 64-bit regs with interleave: \n"
"vld3.8 {d0,d1,d2}, [%0] \n"
"# swap d0 and d2 - R and B\n"
"vswp d0, d2 \n"
"# store 3 64-bit regs: \n"
"vst3.8 {d0,d1,d2}, [%0]! \n"
"subs %1, %1, #1 \n"
"bne 0b \n"
: "+r"(img), "+r"(numPixels24)
:
: "d0", "d1", "d2", "cc", "memory"
);
}
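For completeness, roughly the same in-place swap can be written with NEON intrinsics instead of inline assembly. This is only a sketch: it assumes <arm_neon.h> is available and that the pixel count is a multiple of 16 (the function name and loop structure are mine, not from the original answer):
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

void neon_convert_BGR_to_RGB(uint8_t *img, size_t numPixels)
{
    for (size_t i = 0; i < numPixels; i += 16) {
        uint8x16x3_t px = vld3q_u8(img + i * 3);  // de-interleave 16 pixels into B, G, R planes
        uint8x16_t tmp = px.val[0];               // swap the B and R planes
        px.val[0] = px.val[2];
        px.val[2] = tmp;
        vst3q_u8(img + i * 3, px);                // re-interleave and store back in place
    }
}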
Just swap the channels - BGRA to RGBA
- (void)convertBGRAFrame:(const CLPBasicVideoFrame &)bgraFrame toRGBA:(CLPBasicVideoFrame &)rgbaFrame
{
vImage_Buffer bgraImageBuffer = {
.width = bgraFrame.width,
.height = bgraFrame.height,
.rowBytes = bgraFrame.bytesPerRow,
.data = bgraFrame.rawPixelData
};
vImage_Buffer rgbaImageBuffer = {
.width = rgbaFrame.width,
.height = rgbaFrame.height,
.rowBytes = rgbaFrame.bytesPerRow,
.data = rgbaFrame.rawPixelData
};
const uint8_t byteSwapMap[4] = { 2, 1, 0, 3 };
vImage_Error error;
error = vImagePermuteChannels_ARGB8888(&bgraImageBuffer, &rgbaImageBuffer, byteSwapMap, kvImageNoFlags);
if (error != kvImageNoError) {
NSLog(@"%s, vImage error %zd", __PRETTY_FUNCTION__, error);
}
}
I am facing a bit of a challenge trying to convert an aligned uint8[8] array to a double.
Converting uint8[4] to a long with bit operations was easy, but I understand that a double can get messy because of the sign bit?
In Java I simply use ByteBuffer.wrap(bytes).getDouble(), but I assume it's not that easy in C.
I tried to implement the code below, but the last statement gives the errors "Expression is not assignable" and "Shift count >= width of type":
long tempHigh = 0;
long tempLow = 0;
double sum = 0;
tempHigh |= buffer[0] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[1] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[2] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[3] & 0xFF;
tempLow |= buffer[4] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[5] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[6] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[7] & 0xFF;
sum |= ((tempHigh & 0xFFFF) <<= 32) + (tempLow & 0xFFFF);
How can this be done correctly, or how do I fix the mistakes I have made? Thanks in advance.
double is a floating-point type; it doesn't support bitwise operations such as |.
You could do something like:
double sum;
memcpy(&sum, buffer, sizeof(sum));
But be aware of endianness issues.
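If the 8 bytes arrive in the opposite byte order from the host (for example big-endian data on a little-endian iOS device), reverse them before the memcpy. A sketch under that assumption; the function name is mine:
#include <stdint.h>
#include <string.h>

double double_from_be_bytes(const uint8_t buffer[8])
{
    uint8_t tmp[8];
    for (int i = 0; i < 8; i++)
        tmp[i] = buffer[7 - i];        // reverse the byte order
    double sum;
    memcpy(&sum, tmp, sizeof(sum));    // reinterpret the 8 bytes as a double
    return sum;
}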
The portable way to do it is to read out the sign, exponent, and mantissa values into integer variables with bitwise arithmetic, then call ldexp to apply the exponent.
OK, here's some code. Beware it might have mismatched parentheses or off-by-one errors.
unsigned char x[8]; // your input; code assumes the bytes are little endian
long long mantissa = ((((((long long)(x[6]%16)*256 + x[5])*256 + x[4])*256 + x[3])*256 + x[2])*256 + x[1])*256 + x[0];
int exp = x[7]%128*16 + x[6]/16 - 1023;
int sign = 1 - x[7]/128*2;
double y = sign*ldexp(0x1p52 + mantissa, exp - 52); // ldexp needs <math.h>
How about a union? Write to the integer halves as you have been doing, and the double comes out automagically. Something like this:
union
{
double sum;
struct
{
uint32_t tempHigh; // fixed-width 32-bit halves (from <stdint.h>) so the struct overlays the 8-byte double exactly
uint32_t tempLow;
}v;
}u;
u.v.tempHigh = 0;
u.v.tempLow = 0;
u.v.tempHigh |= buffer[0] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[1] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[2] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[3] & 0xFF;
u.v.tempLow |= buffer[4] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[5] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[6] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[7] & 0xFF;
printf("%f", u.sum);