I have a 2D array of RGB values (or any such data container) that I need to write to a UIView that is currently displayed to the user. An example: while using the capture output from the camera, I run some algorithms to identify objects and then highlight them using custom-defined RGB pixels.
What is the best way to do this, given that the whole thing runs in real time, at, say, 10 frames per second?
Use the method below to create a UIImage from your 2D array. You can then display this image using a UIImageView.
-(UIImage *)imageFromArray:(void *)array width:(unsigned int)width height:(unsigned int)height {
    /*
     Assuming the pixel color values are 8-bit unsigned.
     You need to build a buffer in BGRA order (blue, green, red, alpha).
     You can do that with a for-loop that sets the values at each index.
     I have not included the for-loop in this example because it depends on how
     the values are stored in your input 2D array.
     You can set the alpha value to 255.
     */
    unsigned char pixelData[width * height * 4];
    // This is where the for-loop would be
    void *baseAddress = pixelData;
    size_t bytesPerRow = width * 4;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    return image;
}
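As an illustration only, here is a hypothetical version of that for-loop, to be placed where the "This is where the for-loop would be" comment sits. It assumes the input array is a flat buffer of 8-bit R,G,B triples in row-major order; adjust the indexing to match how your 2D array is actually stored:

// Hypothetical fill loop: treats `array` as width * height * 3 bytes of R,G,B
// and writes them out in the BGRA order the bitmap context expects.
unsigned char *rgbInput = (unsigned char *)array;
for (unsigned int i = 0; i < width * height; i++) {
    pixelData[i * 4 + 0] = rgbInput[i * 3 + 2]; // blue
    pixelData[i * 4 + 1] = rgbInput[i * 3 + 1]; // green
    pixelData[i * 4 + 2] = rgbInput[i * 3 + 0]; // red
    pixelData[i * 4 + 3] = 255;                 // alpha
}

Since this runs several times per second, build the UIImage off the main thread and hand only the final imageView.image assignment to the main queue.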
I have CMSampleBufferRefs which I decode using VTDecompressionSessionDecodeFrame; this results in a CVImageBufferRef once decoding of a frame has completed. So my question is:
What would be the most efficient way to display these CVImageBufferRefs in a UIView?
I have succeeded in converting the CVImageBufferRef to a CGImageRef and displaying it by setting the CGImageRef as a CALayer's contents, but that requires the DecompressionSession to be configured with @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
Here is example code showing how I've converted the CVImageBufferRef to a CGImageRef (note: the CVPixelBuffer data has to be in 32BGRA format for this to work):
CVPixelBufferLockBaseAddress(cvImageBuffer, 0);

// get image properties
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cvImageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cvImageBuffer);
size_t width = CVPixelBufferGetWidth(cvImageBuffer);
size_t height = CVPixelBufferGetHeight(cvImageBuffer);

// create a CGImageRef from the CVImageBufferRef
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);

// release context and colorspace, then unlock the buffer
CGContextRelease(cgContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(cvImageBuffer, 0);

// now the CGImageRef can be displayed either by setting it as a CALayer's contents
// or by creating a [UIImage imageWithCGImage:cgImage] that can be shown in a UIImageView ...
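As an illustration of the CALayer route mentioned in the comment above, a minimal sketch (videoLayer is an assumed name for a layer already in your view hierarchy, and the assignment must happen on the main thread):

// Display the CGImage by setting it as the layer's contents.
dispatch_async(dispatch_get_main_queue(), ^{
    videoLayer.contents = (__bridge id)cgImage;
    CGImageRelease(cgImage); // the layer retains what it needs
});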
The WWDC 2014 session 513 (https://developer.apple.com/videos/wwdc/2014/#513) hints that the YUV -> RGB colorspace conversion (done on the CPU?) can be avoided if YUV-capable OpenGL ES magic is used. I wonder what that might be and how it could be accomplished.
Apple's iOS sample code GLCameraRipple shows an example of displaying a YUV CVPixelBufferRef captured from the camera, using OpenGL ES 2.0 with separate textures for the Y and UV components and a fragment shader program that does the YUV to RGB colorspace conversion on the GPU. Is all of that really required, or is there a more straightforward way to do it?
NOTE: In my use case I'm unable to use AVSampleBufferDisplayLayer, due to the way the input to the decompression session becomes available.
Update: The original answer below does not work because kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey is unavailable for iOS.
UIView is backed by a CALayer whose contents property supports multiple types of images. As detailed in my answer to a similar question for macOS, it is possible to use CALayer to render a CVPixelBuffer’s backing IOSurface. (Caveat: I have only tested this on macOS.)
If you're getting your CVImageBufferRef from a CMSampleBufferRef that you receive in captureOutput:didOutputSampleBuffer:fromConnection:, you don't need to make that conversion; you can get the image data directly from the CMSampleBufferRef. Here's the code:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
UIImage *frameImage = [UIImage imageWithData:imageData];
The API description doesn't say whether 32BGRA is supported or not; it produces the imageData, along with any metadata, in JPEG format without applying additional compression. If your goal is to display the image on screen or use it with a UIImageView, this is the quick way.
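For context, a minimal sketch of the documented way to obtain such a JPEG sample buffer, via AVCaptureStillImageOutput's capture call (stillImageOutput and previewImageView are assumed names from your capture setup, and AVCaptureStillImageOutput was later deprecated in favour of AVCapturePhotoOutput):

// Ask the still-image output for a JPEG sample buffer, then convert and display it.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer == NULL) {
        return;
    }
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *frameImage = [UIImage imageWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
        previewImageView.image = frameImage; // display on the main thread
    });
}];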
How do we fix the code below so that the image produced from our incoming sampleBuffer is colored properly?
We are attempting to convert an incoming sampleBuffer image to a UIImage, but the result is the inverted, off-color image shown below.
You can see our attempt to use the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange option in the code, but the results were the same.
The incoming image has all the right colors, as evidenced by the fact that if we render it into a GLKView, all the colors are there.
Could this be a YUV420 conversion issue?
Here is the conversion code we are using:
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}
Here is the setup code we use for the incoming CVPixelBuffer:
// Now create the CVPixelBuffer to which we will render
CVReturn retVal = CVPixelBufferCreate(kCFAllocatorDefault,
                                      self.screenWidth,
                                      self.screenHeight,
                                      kCVPixelFormatType_32BGRA,
                                      attrs,
                                      &_outputRenderTarget);
Any suggestions for what to try in order to restore and display the proper colors?
There is no problem with the code shown, apart from the fact that you need to swap back your commented/uncommented lines.
I would take a step back and look at what is writing the data into your incoming buffer. For example, I use an AVAssetReaderVideoCompositionOutput object to generate the sample buffers. The video composition output is initialised with video settings that match the ones you use for creating your incoming CVPixelBuffer, plus the CGImage and bitmap-context compatibility keys.
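As a rough sketch of what that configuration can look like (where exactly you plug it in depends on your pipeline; the variable names here are assumptions):

// Hypothetical settings dictionary: 32BGRA output plus the Core Graphics
// compatibility keys, so the resulting pixel buffers can be wrapped in a
// CGBitmapContext without a format mismatch.
NSDictionary *settings = @{
    (id)kCVPixelBufferPixelFormatTypeKey              : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferCGImageCompatibilityKey         : @YES,
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES
};
// Used e.g. as the videoSettings of an AVAssetReaderVideoCompositionOutput,
// or as the attrs dictionary passed to CVPixelBufferCreate above:
// [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks
//                                                                         videoSettings:settings];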
It turns out we were inverting all the colors with 1 - r, 1 - g, 1 - b.
Once we removed the inversion, the colors appeared normal again.
I have a layer where I want the user to draw a 'mask' for cutting out images. It is semi-opaque so that they can see what they are selecting beneath it.
How can I process this so that the drawing data has an alpha of 1.0, but retain the alpha channel (for masking)?
TL;DR: I'd like the black area to be a solid, single colour.
Here is the desired before and after (the white background should be transparent in both).
In pseudocode, something like this:
for (pixel in image) {
    if (pixel.alpha != 0.0) {
        fill solid black
    }
}
The following should do what you're after. The majority of the code is from How to set the opacity/alpha of a UIImage? I only added a test of the alpha value before converting the colour of the pixel to black.
// Create a pixel buffer in an easy-to-use format
CGImageRef imageRef = [[UIImage imageNamed:@"testImage"] CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 *m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

// alter the alpha when the alpha of the source != 0
int length = height * width * 4;
for (int i = 0; i < length; i += 4) {
    if (m_PixelBuf[i+3] != 0) {
        m_PixelBuf[i+3] = 255;
    }
}

// create a new image
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
                                         bitsPerComponent, bytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
free(m_PixelBuf);

UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
CGImageRelease(newImgRef);
finalImage will now contain an image where all pixels that don't have an alpha of 0.0 have alpha of 1.
The underlying model for this app should not be images. This is not a question of "how do I create one rendition of the image from the other."
Instead, the underlying object model should be an array of paths. Then, when you want to create the image with translucent paths vs opaque paths, it's just a question of how you render this array of paths. Once you tackle it that way, the problem is not a complex image manipulation question but a simple rendering question.
By the way, I really like this array-of-paths model, because then it becomes quite trivial to do things like "gee, let me provide an undo function, letting the user remove one stroke at a time." It opens you up to all sorts of nice functional enhancements.
In terms of the specifics of how to render these paths, it can be implemented in a variety of ways. You could use a custom drawRect: implementation in a UIView subclass that renders the paths with the appropriate alpha. Or you could do it with CAShapeLayer objects. Or you could do some hybrid (creating new image snapshots as you finish adding each path, saving you from having to re-render all of the paths each time). There are tons of ways of tackling this; see the sketch below for one of them.
But the key insight is to employ an underlying model of an array of paths; then the rendering of your two types of images becomes a fairly trivial exercise.
The first image is a rendering of a bunch of paths as CAShapeLayer objects with an alpha of 0.5. The second is the same rendering, but with an alpha of 1.0. Again, it doesn't matter whether you use shape layers or low-level Core Graphics calls; the underlying idea is the same: either render your paths with translucency or not.
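Purely as an illustration (not from the original answer), here is a minimal sketch of the CAShapeLayer approach, assuming the model is an NSArray of UIBezierPath objects representing black strokes; the method and variable names are made up:

// Render an array of UIBezierPaths into a container layer, either translucent
// (for the on-screen editing view) or opaque (for the final mask).
- (CALayer *)layerForPaths:(NSArray<UIBezierPath *> *)paths opaque:(BOOL)opaque {
    CALayer *container = [CALayer layer];
    for (UIBezierPath *path in paths) {
        CAShapeLayer *shape = [CAShapeLayer layer];
        shape.path        = path.CGPath;
        shape.strokeColor = [UIColor blackColor].CGColor;
        shape.fillColor   = nil;                 // stroke-only drawing (default fill is black)
        shape.lineWidth   = 8.0;
        shape.opacity     = opaque ? 1.0 : 0.5;  // the only difference between the two renderings
        [container addSublayer:shape];
    }
    return container;
}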
I am working on an app that makes use of the OpenCV library.
The problem I am having happens only with certain images (usually ones taken with the phone's camera), and I have pinpointed it as a conversion problem: when I convert a (problematic) image to a cv::Mat object and then back, it ends up rotated 90 degrees.
Here is the call that causes the problem:
cv::Mat tmpMat = [sentImage CVMat];
UIImage * tmpImage = [[UIImage alloc] initWithCVMat:tmpMat];
[imageHolder setImage: tmpImage];
And here are the functions that do the conversion from image to matrix and vice-versa.
-(cv::Mat)CVMat
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
CGFloat cols = self.size.width;
CGFloat rows = self.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
- (id)initWithCVMat:(const cv::Mat&)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1)
{
colorSpace = CGColorSpaceCreateDeviceGray();
}
else
{
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(cvMat.cols, // Width
cvMat.rows, // Height
8, // Bits per component
8 * cvMat.elemSize(), // Bits per pixel
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
provider, // CGDataProviderRef
NULL, // Decode
false, // Should interpolate
kCGRenderingIntentDefault); // Intent
self = [self initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return self;
}
Now I am using the "Aspect Fill" content mode in my imageHolder (a UIImageView) and tried changing it, without success. I also checked whether a matrix was being transposed during the conversion and tried changing that, also without success; it would not be logical anyway, since not every picture gets rotated.
I do not understand why it works with some pictures but not with others (none of the photos taken with the phone's camera work).
If anyone can shed some light on the matter, I would appreciate it.
Images from the camera that are taken in different orientations (portrait / landscape) are saved at the same resolution (same number of rows and columns) by the iPhone camera. The difference is that the JPEG contains a flag (to be precise, the Exif.Image.Orientation flag) telling the displaying software how the image needs to be rotated to be displayed correctly.
My guess is that OpenCV loses that information (which is stored in the UIImage's imageOrientation property) during the conversion, so when the image is converted back to a UIImage this piece of information is reset to the default (UIImageOrientationUp), explaining why certain images appear rotated.
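If that is indeed the cause, a common workaround is to bake the orientation into the pixel data before handing the image to OpenCV, for example by redrawing the UIImage (a minimal sketch; the method name is chosen here, not part of the original answer):

// Redraw the image so its pixel data matches UIImageOrientationUp,
// making the imageOrientation flag irrelevant for the cv::Mat conversion.
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) {
        return image; // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return upright;
}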
I was having the same issue. This solves the problem of the image rotating when converting from UIImage to cv::Mat. Add the method at the bottom and call it after you dismiss the picker controller. It is the 'second answer' located here: Rotating a CGImage
Also, there are two methods in OpenCV's highgui/ios.h for converting UIImage to cv::Mat and vice versa that you can just include. Then add the rotation method and you are good to go.
I'm using this well-known sample code from Apple to convert still images from the camera buffer into UIImages.
- (UIImage *)getUIImageFromBuffer:(CMSampleBufferRef)imageSampleBuffer {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    if (imageBuffer == NULL) {
        NSLog(@"No buffer");
    }

    // Lock the base address of the pixel buffer
    if ((CVPixelBufferLockBaseAddress(imageBuffer, 0)) == kCVReturnSuccess) {
        NSLog(@"Buffer locked successfully");
    }

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    NSLog(@"bytes per row %zu", bytesPerRow);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSLog(@"width %zu", width);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"height %zu", height);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:SCALE_IMAGE_RATIO orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
The problem is that the image you obtain is usually rotated 90°. Using the method +imageWithCGImage:scale:orientation: I'm able to rotate it, but before settling on that method I was trying to rotate and scale the image by applying a CTM to the context before passing the result to a UIImage. The problem was that the CTM transformation didn't affect the image.
I'm asking myself why... Is it because I'm locking the buffer, or because the context is created with the image already inside it, so that the changes affect only further modifications?
Thank you
The answer is that the CTM affects only further drawing into the context; it has nothing to do with the buffer locking.
As you can read in this answer, transformations on a context are applied at the time something is drawn into it: Vertical flip of CGContext
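To make that concrete, a minimal sketch building on the method above (it reuses width, height and quartzImage, so it would have to run before the CGImageRelease(quartzImage) call): set the transform on a fresh context first and only then draw the image, so the CTM actually affects the drawn pixels. Rotating the decoded frame by 90° could look roughly like this:

// Sketch: apply the transform first, then draw, so the CTM affects the result.
// The new context has its width and height swapped to hold the rotated frame.
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef rotated = CGBitmapContextCreate(NULL, height, width, 8, 0, cs,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextTranslateCTM(rotated, height, 0);
CGContextRotateCTM(rotated, M_PI_2);
CGContextDrawImage(rotated, CGRectMake(0, 0, width, height), quartzImage); // transform applied here
CGImageRef rotatedImage = CGBitmapContextCreateImage(rotated);
CGContextRelease(rotated);
CGColorSpaceRelease(cs);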