Convert UIImage to 8 bits - iOS

I wish to convert a UIImage to 8 bits. I have attempted to do this, but I am not sure if I have done it right, because when I later try to use the image-processing library Leptonica it tells me the image is not 8 bits. Can anyone tell me whether I am doing this correctly, or show the code for how to do it?
Thanks!
CODE
// Copy the raw pixel bytes backing the UIImage's CGImage
CGImageRef myCGImage = image.CGImage;
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(myCGImage));
const UInt8 *imageData = CFDataGetBytePtr(data);

The following code will work for images without an alpha channel:
CGImageRef c = [[UIImage imageNamed:@"100_3077"] CGImage];
size_t bitsPerPixel = CGImageGetBitsPerPixel(c);
size_t bitsPerComponent = CGImageGetBitsPerComponent(c);
size_t width = CGImageGetWidth(c);
size_t height = CGImageGetHeight(c);
size_t bytesPerRow = CGImageGetBytesPerRow(c);
CGImageAlphaInfo a = CGImageGetAlphaInfo(c);
NSAssert(bitsPerPixel == 32 && bitsPerComponent == 8 && a == kCGImageAlphaNoneSkipLast, @"unsupported image type supplied");

// Target: 8-bit grayscale, one byte per pixel, no alpha
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef targetImage = CGBitmapContextCreate(NULL, width, height, 8, 1 * width, graySpace, kCGImageAlphaNone);
CGColorSpaceRelease(graySpace);

// Keep the copied pixel data alive while we read from it
NSData *sourcePixels = (__bridge_transfer NSData *)CGDataProviderCopyData(CGImageGetDataProvider(c));
const UInt8 *sourceData = (const UInt8 *)[sourcePixels bytes];
UInt8 *targetData = CGBitmapContextGetData(targetImage);

for (size_t y = 0; y < height; y++)
{
    for (size_t x = 0; x < width; x++)
    {
        // 4 bytes per source pixel (RGBX)
        const UInt8 *sourcePixel = &sourceData[y * bytesPerRow + x * 4];
        UInt8 r = sourcePixel[0];
        UInt8 g = sourcePixel[1];
        UInt8 b = sourcePixel[2];
        targetData[y * width + x] = (r + g + b) / 3;
    }
}

CGImageRef newImageRef = CGBitmapContextCreateImage(targetImage);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGContextRelease(targetImage);
CGImageRelease(newImageRef);
With this code I converted an RGB image to a grayscale image.
Hope this helps!
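
Alternatively, you can let Core Graphics do the RGB-to-gray conversion itself by drawing the source CGImage into an 8-bit grayscale bitmap context. A minimal sketch of that variant (my own addition, not tied to the exact kCGImageAlphaNoneSkipLast layout asserted above):

CGImageRef source = [[UIImage imageNamed:@"100_3077"] CGImage];
size_t width = CGImageGetWidth(source);
size_t height = CGImageGetHeight(source);

// 8 bits per pixel, one gray component per pixel, no alpha
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayContext = CGBitmapContextCreate(NULL, width, height, 8, width, graySpace, kCGImageAlphaNone);
CGContextDrawImage(grayContext, CGRectMake(0, 0, width, height), source);

CGImageRef grayImageRef = CGBitmapContextCreateImage(grayContext);
UIImage *grayImage = [UIImage imageWithCGImage:grayImageRef];

CGImageRelease(grayImageRef);
CGContextRelease(grayContext);
CGColorSpaceRelease(graySpace);

The resulting bitmap is one byte per pixel, which is the kind of 8-bit-per-pixel layout the question is after.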

Related

OpenCV: detect corners of a pattern hidden in an image

I have to create a mobile application able to detect a hidden (standard) pattern in an image.
The purpose is to detect its corners and extract some information from the image (like a link).
I'm focusing on iOS for the moment, but I don't know how to implement the pattern and recognize it with OpenCV.
So the first question is: how can I add hidden information to an image?
I found this library that implements steganography to hide some information in an image. Is this the right way?
The next step is to detect the image and its corners with the phone's camera. My idea is to create a standard pattern (like points or lines) to add on top of a .png image and use template matching during capture to detect the area where the pattern is present. But reading online I have seen that this technique is not the best fit for this problem.
I have successfully implemented the HSV conversion for color tracking by following this tutorial, but I don't know how to proceed to the next step.
So the second question is: how can I recognize a standard pattern and detect its corners in a frame captured with the camera?
This is the code that I use to convert the sample buffer to a UIImage:
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *yBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    uint8_t *cbCrBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
    int bytesPerPixel = 4;
    uint8_t *rgbBuffer = (uint8_t *)malloc(width * height * bytesPerPixel);
    for (int y = 0; y < height; y++) {
        uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
        for (int x = 0; x < width; x++) {
            int16_t y = yBufferLine[x];
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;
            uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];
            int16_t r = (int16_t)roundf(y + cr * 1.4);
            int16_t g = (int16_t)roundf(y + cb * -0.343 + cr * -0.711);
            int16_t b = (int16_t)roundf(y + cb * 1.765);
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(quartzImage);
    free(rgbBuffer);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}
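
The clamp() used above isn't defined in the question; a typical definition for this kind of YUV-to-RGB conversion (an assumption on my part) is:

// Clamp a signed intermediate value into the 0...255 byte range
#define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))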
And this is for the HSV conversion:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    @autoreleasepool {
        if (self.isProcessingFrame) {
            return;
        }
        self.isProcessingFrame = YES;
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        cv::Mat matFrame = [self cvMatFromUIImage:image];
        cv::cvtColor(matFrame, matFrame, CV_BGR2HSV);
        cv::inRange(matFrame, cv::Scalar(0, 100, 100, 0), cv::Scalar(10, 255, 255, 0), matFrame);
        image = [self UIImageFromCVMat:matFrame];
        // Convert to base64
        NSData *imageData = UIImagePNGRepresentation(image);
        NSString *encodedString = [imageData base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
        self.isProcessingFrame = NO;
    }
}
I hope someone can help. Thanks!
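
For reference, a rough sketch of the template-matching idea mentioned in the question, using OpenCV in an Objective-C++ (.mm) file. The matrix names, the method name and the 0.8 threshold are placeholders, and plain matchTemplate is not scale- or rotation-invariant, which matches the concern raised above:

- (void)findPatternCornersInFrame:(cv::Mat)frameGray pattern:(cv::Mat)patternGray {
    cv::Mat result;
    cv::matchTemplate(frameGray, patternGray, result, CV_TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // Only accept reasonably strong matches; the threshold is a guess
    if (maxVal > 0.8) {
        cv::Point topLeft = maxLoc;
        cv::Point bottomRight(maxLoc.x + patternGray.cols, maxLoc.y + patternGray.rows);
        // The four corners of the matched region are:
        // (topLeft.x, topLeft.y), (bottomRight.x, topLeft.y),
        // (bottomRight.x, bottomRight.y), (topLeft.x, bottomRight.y)
        NSLog(@"pattern found at (%d, %d)", topLeft.x, topLeft.y);
    }
}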

Why do I see a slightly zoomed version of the image on the iPhone camera compared to what is received at the backend server?

I am capturing an image from the iOS camera on an iPhone 7 and sending the captured camera image to the backend for processing.
When I save the image at the backend, I see that the backend image contains a lot more content than what was visible on the iOS screen when focusing on the object.
The server image is a zoomed-out version with a little extra content on the horizontal and vertical axes on both sides. I verified that I am not doing any explicit zooming or anything like that in the Objective-C code.
The question is: what is causing this difference between what I see on the screen and what gets received at the backend?
The code that I use to capture the image is:
- (UIImage *)imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef)sampleBuffer {
    @autoreleasepool {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        uint8_t *yBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        uint8_t *cbCrBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
        int bytesPerPixel = 4;
        uint8_t *rgbBuffer = (uint8_t *)malloc(width * height * bytesPerPixel);
        for (int y = 0; y < height; y++)
        {
            uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
            uint8_t *yBufferLine = &yBuffer[y * yPitch];
            uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
            for (int x = 0; x < width; x++)
            {
                int16_t y = yBufferLine[x];
                int16_t cb = cbCrBufferLine[x & ~1] - 128;
                int16_t cr = cbCrBufferLine[x | 1] - 128;
                uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];
                int16_t r = (int16_t)roundf(y + cr * 1.4);
                int16_t g = (int16_t)roundf(y + cb * -0.343 + cr * -0.711);
                int16_t b = (int16_t)roundf(y + cb * 1.765);
                rgbOutput[0] = 0xFF;
                rgbOutput[1] = clamp(b);
                rgbOutput[2] = clamp(g);
                rgbOutput[3] = clamp(r);
            }
        }
        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage scale:0.5f orientation:UIImageOrientationUp];
        NSData *imgData = UIImageJPEGRepresentation(image, 0.8);
        NSLog(@"blabla %lu", (unsigned long)[imgData length]);
        // Release the Quartz image
        free(rgbBuffer);
        CGImageRelease(quartzImage);
        return (image);
    }
}
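
One thing worth checking (I can't confirm it is the cause here): the on-screen preview layer normally uses an aspect-fill video gravity, which crops the sensor frame to the screen's aspect ratio, while the sample buffer you send to the backend contains the full frame. If that is the difference, you can crop the captured image to the region the preview actually shows. A rough sketch, assuming a self.previewLayer (AVCaptureVideoPreviewLayer) property that is not shown in the question:

// Map the preview layer's visible bounds into the capture output's normalized
// coordinate space, then crop the captured CGImage to that region.
// (Orientation handling may need adjusting for your setup.)
CGRect visibleRect = [self.previewLayer metadataOutputRectOfInterestForRect:self.previewLayer.bounds];
size_t imgWidth = CGImageGetWidth(quartzImage);
size_t imgHeight = CGImageGetHeight(quartzImage);
CGRect cropRect = CGRectMake(visibleRect.origin.x * imgWidth,
                             visibleRect.origin.y * imgHeight,
                             visibleRect.size.width * imgWidth,
                             visibleRect.size.height * imgHeight);
CGImageRef croppedRef = CGImageCreateWithImageInRect(quartzImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef scale:0.5f orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);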

glReadPixels returns incorrect image for iPhone 6, but works ok for iPad and iPhone 5

I'm using the following code for reading an image from an OpenGL ES scene:
- (UIImage *)drawableToCGImage
{
    CGRect myRect = self.bounds;
    NSInteger myDataLength = myRect.size.width * myRect.size.height * 4;
    glFinish();
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    int width = myRect.size.width;
    int height = myRect.size.height;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);
    for (int y1 = 0; y1 < height; y1++) {
        for (int x1 = 0; x1 < width * 4; x1++) {
            buffer[(height - 1 - y1) * width * 4 + x1] = buffer2[y1 * 4 * width + x1];
        }
    }
    free(buffer2);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * myRect.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(myRect.size.width, myRect.size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
It works perfectly on iPad and older iPhone models, but I noticed that on iPhone 6 (both device and simulator) the result looks like monochrome glitches.
What could it be?
Also, here is my code for CAEAGLLayer properties:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                @YES, kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
Could somebody shed some light on this crazy magic, please?
Thanks to @MaticOblak, I've figured out the problem.
The buffer was filled incorrectly because the float values of the rect size were not correctly rounded (yes, only for the iPhone 6 dimensions). Integer values should be used instead.
UPD: my issue was fixed with the following code:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];
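
Applied to the method above, that means sizing the buffers from the integer viewport values rather than from the fractional view bounds. A sketch of the adjusted start of drawableToCGImage (the rest of the method stays the same, with width and height replacing every use of myRect.size.width / myRect.size.height):

// Query the integer render size instead of rounding self.bounds floats
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = viewport[3];
NSInteger myDataLength = width * height * 4;

glFinish();
glPixelStorei(GL_PACK_ALIGNMENT, 4);
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer2);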

Create PNG UIImage from OpenGL drawing

I'm exporting my OpenGL drawings to a UIImage using the following method:
- (UIImage *)saveOpenGLDrawnToUIImage:(NSInteger)aWidth height:(NSInteger)aHeight {
    NSInteger myDataLength = aWidth * aHeight * 4;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, aWidth, aHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < aHeight; y++)
    {
        for (int x = 0; x < aWidth * 4; x++)
        {
            buffer2[(aHeight - 1 - y) * aWidth * 4 + x] = buffer[y * 4 * aWidth + x];
        }
    }
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * aWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(aWidth, aHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}
But the background is always black instead of transparent.
I tried using CGImageRef imageRef = CGImageCreateWithPNGDataProvider(provider, NULL, NO, kCGRenderingIntentDefault); but it always generates nil.
How can I get this to process with a transparent background?
This is a question about how to create an image from raw RGBA data. I think you are missing a flag in the bitmap info to say that you want an alpha channel...
Try Creating UIImage from raw RGBA data
The first comment there describes that you should use kCGBitmapByteOrder32Big | kCGImageAlphaLast.
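
A sketch of how that change would look in the method above, using the bitmap info from that comment (I haven't verified it against this particular OpenGL setup):

// Declare a non-premultiplied alpha channel in the last byte of each RGBA pixel
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaLast;
CGImageRef imageRef = CGImageCreate(aWidth, aHeight, bitsPerComponent, bitsPerPixel,
                                    bytesPerRow, colorSpaceRef, bitmapInfo,
                                    provider, NULL, NO, renderingIntent);
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
// PNG preserves alpha, so the background stays transparent in the exported data
NSData *pngData = UIImagePNGRepresentation(myImage);

Note that the pixels read back will only have alpha below 255 if the OpenGL framebuffer is actually cleared with a transparent color (for example glClearColor(0, 0, 0, 0)).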

RTCI420Frame object to an image or texture

I'm working on a WebRTC app for iOS. My goal is to record video from the WebRTC objects.
I have the RTCVideoRenderer delegate, which provides me this method:
-(void)renderFrame:(RTCI420Frame *)frame{
}
My question is: how can I convert the RTCI420Frame object into something useful for showing an image or saving it to disk?
RTCI420Frames use the YUV420 format. You can easily convert them to RGB using OpenCV, then convert the result to a UIImage. Make sure you #import <RTCI420Frame.h>.
- (void)processFrame:(RTCI420Frame *)frame {
    cv::Mat mYUV((int)frame.height + (int)frame.chromaHeight, (int)frame.width, CV_8UC1, (void *)frame.yPlane);
    cv::Mat mRGB((int)frame.height, (int)frame.width, CV_8UC1);
    cvtColor(mYUV, mRGB, CV_YUV2RGB_I420);
    UIImage *image = [self UIImageFromCVMat:mRGB];
}
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
You may want to do this on a separate thread, especially if you are doing any video processing. Also, remember to use the .mm file extension so you can use C++.
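
For the threading suggestion, a minimal sketch using a dedicated serial GCD queue (the queue label is just a placeholder):

- (void)renderFrame:(RTCI420Frame *)frame {
    // A serial queue processes frames one at a time, off the thread that delivers them
    static dispatch_queue_t frameQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        frameQueue = dispatch_queue_create("com.example.frameProcessing", DISPATCH_QUEUE_SERIAL);
    });
    dispatch_async(frameQueue, ^{
        [self processFrame:frame];
    });
}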
If you don't want to use OpenCV, it is possible to do it manually. The following code kind of works, but the colors are messed up and it crashes after a few seconds.
int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;
for (int i = 0; i < width * height; i++) {
    int rgbOffset = i * 4;
    uint8_t y = yPlane[i];
    uint8_t u = uPlane[i / 4];
    uint8_t v = vPlane[i / 4];
    uint8_t r = y + 1.402 * (v - 128);
    uint8_t g = y - 0.344 * (u - 128) - 0.714 * (v - 128);
    uint8_t b = y + 1.772 * (u - 128);
    data[rgbOffset] = r;
    data[rgbOffset + 1] = g;
    data[rgbOffset + 2] = b;
    data[rgbOffset + 3] = UINT8_MAX;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
free(data);
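
My best guess at what goes wrong above (not verified against the actual WebRTC source): the U and V planes are subsampled 2x2, so indexing them with i/4 pairs the wrong chroma samples with most luma pixels; the intermediate values are never clamped before being stored in uint8_t; and the CGContext and CGImage created for every frame are never released, which would explain the crash after a few seconds. A corrected sketch under the assumption that the planes are tightly packed (if RTCI420Frame exposes per-plane strides, use those instead):

int width = (int)frame.width;
int height = (int)frame.height;
uint8_t *data = (uint8_t *)malloc(width * height * 4);
const uint8_t *yPlane = frame.yPlane;
const uint8_t *uPlane = frame.uPlane;
const uint8_t *vPlane = frame.vPlane;

for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int yIndex = row * width + col;
        // 4:2:0 chroma: one U/V sample per 2x2 block of luma pixels
        int uvIndex = (row / 2) * (width / 2) + (col / 2);
        float y = yPlane[yIndex];
        float u = uPlane[uvIndex] - 128.0f;
        float v = vPlane[uvIndex] - 128.0f;
        int r = (int)roundf(y + 1.402f * v);
        int g = (int)roundf(y - 0.344f * u - 0.714f * v);
        int b = (int)roundf(y + 1.772f * u);
        uint8_t *rgba = &data[yIndex * 4];
        rgba[0] = (uint8_t)MAX(0, MIN(255, r));
        rgba[1] = (uint8_t)MAX(0, MIN(255, g));
        rgba[2] = (uint8_t)MAX(0, MIN(255, b));
        rgba[3] = UINT8_MAX;
    }
}

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(gtx);
UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];

// Release everything created for this frame; leaking it every frame is a likely cause of the crash
CGImageRelease(cgImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);
free(data);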
