I'm using the ZXing framework to generate a barcode image from a string. It works on the simulator, but when I run it on a device I get a bad access error.
NSError *error = nil;
ZXMultiFormatWriter *writer = [ZXMultiFormatWriter writer];
ZXBitMatrix *result = [writer encode:self.discountProgramInfo.customerID
                              format:self.barcodeFormat
                               width:self.barcodeImageView.frame.size.width
                              height:self.barcodeImageView.frame.size.height
                               error:&error];
if (result) {
    CGImageRef image = [[ZXImage imageWithMatrix:result] cgimage];
    self.barcodeImageView.image = [UIImage imageWithCGImage:image];
    // This CGImageRef can be placed in a UIImage, NSImage, or written to a file.
} else {
    NSString *errorMessage = [error localizedDescription];
    DLog(@"%@", errorMessage);
}
It fails on this line: self.barcodeImageView.image = [UIImage imageWithCGImage:image]; and the CGImageRef is not generated properly.
Any ideas what could be causing the simulator/device mismatch?
I am using the following code to extract the depth map (following Apple's own example):
- (nullable AVDepthData *)depthDataFromImageData:(nonnull NSData *)imageData orientation:(CGImagePropertyOrientation)orientation {
    AVDepthData *depthData = nil;
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (imageSource) {
        NSDictionary *auxDataDictionary = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity));
        if (auxDataDictionary) {
            depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
        }
        CFRelease(imageSource);
    }
    return depthData;
}
And I call this from:
[[PHAssetResourceManager defaultManager] requestDataForAssetResource:[PHAssetResource assetResourcesForAsset:asset].firstObject
                                                             options:nil
                                                 dataReceivedHandler:^(NSData * _Nonnull data) {
    AVDepthData *depthData = [self depthDataFromImageData:data orientation:[self CGImagePropertyOrientationForUIImageOrientation:pickedUiImageOrientation]];
    CIImage *image = [CIImage imageWithDepthData:depthData];
    UIImage *uiImage = [UIImage imageWithCIImage:image];
    UIGraphicsBeginImageContext(uiImage.size);
    [uiImage drawInRect:CGRectMake(0, 0, uiImage.size.width, uiImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *pngData = UIImagePNGRepresentation(newImage);
    UIImage *pngImage = [UIImage imageWithData:pngData]; // rewrap
    UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
} completionHandler:^(NSError * _Nullable error) {
}];
Here is the result: it's a low-quality (and rotated, but let's put orientation aside for now) image:
Then I transferred the original HEIC file, opened it in Photoshop, went to Channels, and selected the depth map as below:
Here is the result:
It's a higher-resolution, correctly oriented depth map. Why is the code (actually Apple's own code at https://developer.apple.com/documentation/avfoundation/avdepthdata/2881221-depthdatafromdictionaryrepresent?language=objc) producing a lower-quality result?
I've found the issue; it was hiding in plain sight. The +[AVDepthData depthDataFromDictionaryRepresentation:error:] method returns disparity data. I converted it to depth using the following code:
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}
(I haven't tried it, but 16-bit depth, kCVPixelFormatType_DepthFloat16, should also work.)
After converting disparity to depth, the image is exactly the same as in Photoshop. I should have noticed earlier: I was calling CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity) (note the "Disparity" at the end), while Photoshop was clearly saying "depth map", so it was converting disparity to depth on the fly (or simply reading it as depth; I honestly don't know the physical encoding, and maybe iOS converted depth to disparity when I copied the aux data in the first place).
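As a side check (a minimal sketch of my own, not part of Apple's example), you can inspect depthDataType to see whether AVDepthData actually handed you disparity before converting; disparity and depth use distinct pixel format constants:

OSType type = depthData.depthDataType;
BOOL isDisparity = (type == kCVPixelFormatType_DisparityFloat16 ||
                    type == kCVPixelFormatType_DisparityFloat32);
if (isDisparity) {
    // Convert to 32-bit depth so downstream code sees depth values, as above.
    depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
}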
Side note: I also solved the orientation issue by creating the image source directly from the [PHAsset requestContentEditingInputWithOptions:completionHandler:] method and passing contentEditingInput.fullSizeImageURL into CGImageSourceCreateWithURL. That took care of the orientation. A rough sketch of that approach follows.
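This is a sketch of the idea rather than the exact code from my project; the Photos framework calls are standard, but asset is assumed to be the PHAsset from the surrounding code:

PHContentEditingInputRequestOptions *options = [PHContentEditingInputRequestOptions new];
[asset requestContentEditingInputWithOptions:options
                           completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
    // fullSizeImageURL points at the original file, so the full-resolution
    // auxiliary data and the EXIF orientation are both available to CGImageSource.
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)contentEditingInput.fullSizeImageURL, NULL);
    if (source) {
        // ... copy kCGImageAuxiliaryDataTypeDisparity as in the method above ...
        CFRelease(source);
    }
}];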
I'm generating a CIImage using a few chained filters and trying to write the generated image to the user's photo album for debugging purposes. The callback I supply to UIImageWriteToSavedPhotosAlbum() always returns a nil error, so I assume nothing is going wrong, but the image never shows up.
I've used this function in the past to dump OpenGL buffers to the photo album for debugging, but I realize this isn't the same case. Should I be doing something differently?
- (void)cropAndSaveImage:(CIImage *)inputImage fromFeature:(CIFaceFeature *)feature
{
    // First crop out the face.
    [_cropFilter setValue:inputImage forKey:@"inputImage"];
    [_cropFilter setValue:[CIVector vectorWithCGRect:feature.bounds] forKey:@"inputRectangle"];
    CIImage *croppedImage = _cropFilter.outputImage;

    __block CIImage *outImage = croppedImage;

    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *outUIImage = [UIImage imageWithCIImage:outImage];
        UIImageWriteToSavedPhotosAlbum(outUIImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
    });
}
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    NSLog(@"debug LBP output face. error: %@", error);
}
I've verified that the boundaries are never 0.
The callback output is always
debug LBP output face. error: (null)
I figured this out on my own and deleted the question, but then I thought maybe someone would get some use out of it. I mention this because I came across an older answer suggesting the original implementation worked, when in actuality I had to do the following to make it work properly.
__block CIImage *outImage = _lbpFilter.outputImage;

dispatch_async(dispatch_get_main_queue(), ^{
    CGImageRef imgRef = [[self class] renderCIImage:outImage];
    UIImage *uiImage = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef); // createCGImage:fromRect: returns a +1 CGImage; release it once the UIImage owns it
    UIImageWriteToSavedPhotosAlbum(uiImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
});
+ (CGImageRef)renderCIImage:(CIImage *)img
{
    if (!m_ctx) {
        NSDictionary *options = @{kCIContextOutputColorSpace: [NSNull null], kCIContextWorkingColorSpace: [NSNull null]};
        m_ctx = [CIContext contextWithOptions:options];
    }
    return [m_ctx createCGImage:img fromRect:img.extent];
}
Rendering filter.outputImage to a CGImage with CIContext's createCGImage:fromRect:, and then wrapping that CGImage in a UIImage, saves the image successfully. A UIImage created with imageWithCIImage: has no underlying bitmap, which is why UIImageWriteToSavedPhotosAlbum was silently producing nothing.
I'm getting a UIImage from a CMSampleBufferRef video buffer every N video frames like:
- (void)imageFromVideoBuffer:(void (^)(UIImage *image))completion {
    CMSampleBufferRef sampleBuffer = _myLastSampleBuffer;
    if (sampleBuffer != nil) {
        CFRetain(sampleBuffer);
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
        _lastAppendedVideoBuffer.sampleBuffer = nil;
        if (_context == nil) {
            _context = [CIContext contextWithOptions:nil];
        }
        CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CGImageRef cgImage = [_context createCGImage:ciImage fromRect:
                              CGRectMake(0, 0, CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer))];
        __block UIImage *image = [UIImage imageWithCGImage:cgImage];

        CGImageRelease(cgImage);
        CFRelease(sampleBuffer);

        if (completion) completion(image);
        return;
    }
    if (completion) completion(nil);
}
Xcode and Instruments detect a memory leak, but I'm not able to get rid of it.
I'm releasing the CGImageRef and CMSampleBufferRef as usual:
CGImageRelease(cgImage);
CFRelease(sampleBuffer);
[UPDATE]
Here is the AVCapture output callback where I get the sampleBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (captureOutput == _videoOutput) {
        _lastVideoBuffer.sampleBuffer = sampleBuffer;
        id<CIImageRenderer> imageRenderer = _CIImageRenderer;

        dispatch_async(dispatch_get_main_queue(), ^{
            @autoreleasepool {
                CIImage *ciImage = nil;
                ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
                if (_context == nil) {
                    _context = [CIContext contextWithOptions:nil];
                }
                CGImageRef processedCGImage = [_context createCGImage:ciImage
                                                              fromRect:[ciImage extent]];
                //UIImage *image = [UIImage imageWithCGImage:processedCGImage];
                CGImageRelease(processedCGImage);
                NSLog(@"Captured image %@", ciImage);
            }
        });
    }
}
The call that leaks is createCGImage:fromRect::
CGImageRef processedCGImage = [_context createCGImage:ciImage
                                              fromRect:[ciImage extent]];
even with an @autoreleasepool, a CGImageRelease on the returned CGImage, and the CIContext held as an instance property.
This seems to be the same issue addressed here: Can't save CIImage to file on iOS without memory leaks
[UPDATE]
The leak seems to be due to a bug. The issue is well described in
Memory leak on CIContext createCGImage at iOS 9?
A sample project shows how to reproduce this leak: http://www.osamu.co.jp/DataArea/VideoCameraTest.zip
The last comments assure that
It looks like they fixed this in 9.1b3. If anyone needs a workaround
that works on iOS 9.0.x, I was able to get it working with this:
Here is my test code (Objective-C in this case):
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    if (error) return;

    __block NSString *filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"ipdf_pic_%i.jpeg", (int)[NSDate date].timeIntervalSince1970]];

    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

    dispatch_async(dispatch_get_main_queue(), ^
    {
        @autoreleasepool
        {
            CIImage *enhancedImage = [CIImage imageWithData:imageData];
            if (!enhancedImage) return;

            static CIContext *ctx = nil;
            if (!ctx) ctx = [CIContext contextWithOptions:nil];

            CGImageRef imageRef = [ctx createCGImage:enhancedImage fromRect:enhancedImage.extent format:kCIFormatBGRA8 colorSpace:nil];
            UIImage *image = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationRight];
            [[NSFileManager defaultManager] createFileAtPath:filePath contents:UIImageJPEGRepresentation(image, 0.8) attributes:nil];
            CGImageRelease(imageRef);
        }
    });
}];
and the workaround for iOS 9.0 should be:
extension CIContext {
    func createCGImage_(image: CIImage, fromRect: CGRect) -> CGImage {
        let width = Int(fromRect.width)
        let height = Int(fromRect.height)

        let rawData = UnsafeMutablePointer<UInt8>.alloc(width * height * 4)
        render(image, toBitmap: rawData, rowBytes: width * 4, bounds: fromRect, format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
        let dataProvider = CGDataProviderCreateWithData(nil, rawData, height * width * 4) { info, data, size in UnsafeMutablePointer<UInt8>(data).dealloc(size) }
        return CGImageCreate(width, height, 8, 32, width * 4, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue), dataProvider, nil, false, .RenderingIntentDefault)!
    }
}
We were experiencing a similar issue in an app we created, where we process each frame for feature keypoints with OpenCV and send off a frame every couple of seconds. After running for a while we would end up with quite a few memory pressure messages.
We managed to rectify this by running our processing code in its own autorelease pool like so (jpegDataFromSampleBufferAndCrop does something similar to what you are doing, with added cropping):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        if ([self.lastFrameSentAt timeIntervalSinceNow] < -kContinuousRateInSeconds) {
            NSData *imageData = [self jpegDataFromSampleBufferAndCrop:sampleBuffer];

            if (imageData) {
                [self processImageData:imageData];
            }

            self.lastFrameSentAt = [NSDate date];
            imageData = nil;
        }
    }
}
I can confirm that this memory leak still exists on iOS 9.2. (I've also posted on the Apple Developer Forum.)
I get the same memory leak on iOS 9.2. I've tested dropping the EAGLContext by using MetalKit and MTLDevice, and I've tested different CIContext methods such as drawImage, createCGImage, and render, but nothing seems to work.
It is very clear that this is a bug as of iOS 9. Try it out yourself by downloading the example app from Apple (see below), run the same project on a device with iOS 8.4 and then on a device with iOS 9.2, and watch the memory gauge in Xcode.
Download https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013109
Add this at APLEAGLView.h:20:
@property (strong, nonatomic) CIContext *ciContext;
Replace APLEAGLView.m:118 with this:
[EAGLContext setCurrentContext:_context];
_ciContext = [CIContext contextWithEAGLContext:_context];
And finally replace APLEAGLView.m:341-343 with this:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

@autoreleasepool
{
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, sourceImage, nil];
    CIImage *filteredImage = filter.outputImage;
    [_ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];
}

glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
When I use initWithCGImage: with a certain scale and then UIImageJPEGRepresentation to get data from that image, it seems the system doesn't keep my scale setting. Any idea why?
Following is my code and the log I get :
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
UIImageOrientation orientation = [self orientationForAsset:asset];

// Scale the image
UIImage *scaledImage = [[UIImage alloc] initWithCGImage:iref scale:2. orientation:orientation];
NSLog(@"Scaled image size %@", NSStringFromCGSize(scaledImage.size));

// Get data from the image
NSData *scaledImageData = UIImageJPEGRepresentation(scaledImage, 0.8);

// Check the image size of the data
UIImage *buildedImage = [UIImage imageWithData:scaledImageData];
NSLog(@"Data image size of %@", NSStringFromCGSize(buildedImage.size));
This gives the log:
"Scaled image size {1944, 1296}"
"Data image size of {3888, 2592}"
That's really strange because the two images are supposed to be exactly the same.
You should use the +[UIImage imageWithData:scale:] method. The scale is a property of the UIImage wrapper, not of the JPEG data: UIImageJPEGRepresentation encodes the full pixel dimensions, and imageWithData: rebuilds the image at scale 1.0, so you have to pass the scale back in yourself.
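A minimal sketch of the fix, reusing the question's own variable names:

// Re-apply the original scale when rebuilding the image from the JPEG data;
// the data itself only stores pixel dimensions.
NSData *scaledImageData = UIImageJPEGRepresentation(scaledImage, 0.8);
UIImage *buildedImage = [UIImage imageWithData:scaledImageData scale:scaledImage.scale];
NSLog(@"Data image size of %@", NSStringFromCGSize(buildedImage.size)); // now logs {1944, 1296}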
I'm working on an application in which I load images from the photo library.
I'm using the following code to bind the image to an image view:
- (void)loadImage:(UIImageView *)imgView FileName:(NSString *)fileName
{
    typedef void (^ALAssetsLibraryAssetForURLResultBlock)(ALAsset *asset);
    typedef void (^ALAssetsLibraryAccessFailureBlock)(NSError *error);

    ALAssetsLibraryAssetForURLResultBlock resultblock = ^(ALAsset *myasset)
    {
        ALAssetRepresentation *rep = [myasset defaultRepresentation];
        CGImageRef iref = [rep fullResolutionImage];
        UIImage *lImage;
        if (iref)
        {
            lImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
        }
        else
        {
            lImage = [UIImage imageNamed:@"Nofile.png"];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            [imgView setImage:lImage];
        });
    };

    ALAssetsLibraryAccessFailureBlock failureblock = ^(NSError *myerror)
    {
        UIImage *images = [UIImage imageNamed:@"Nofile.png"];
        dispatch_async(dispatch_get_main_queue(), ^{
            [imgView setImage:images];
        });
    };

    NSURL *asseturl = [NSURL URLWithString:fileName];
    ALAssetsLibrary *asset = [[ALAssetsLibrary alloc] init];
    [asset assetForURL:asseturl
           resultBlock:resultblock
          failureBlock:failureblock];
}
But when I run it, an error appears and the application sometimes crashes.
The error printed on the console is:
** * ERROR: FigCreateCGImageFromJPEG returned -12910. 423114 bytes. We will fall back to software decode.
Received memory warning.
My photo library contains high-resolution images, and their size is between 10 and 30 MB.
Finally I fixed the issue.
I think the issue is with fetching the full resolution image.
Instead of :
CGImageRef iref = [rep fullResolutionImage];
I used:
CGImageRef iref = [myasset aspectRatioThumbnail];
And everything worked fine. No error in console, no crash, but quality/resolution of the image is reduced.
I have a similar error:
* ERROR: FigCreateCGImageFromJPEG returned -12909. 0 bytes. We will fall back to software decode.
The app crashed on this call:
CGImageRef originalImage = [representation fullResolutionImage];
I fixed it by replacing it with:
CGImageRef originalImage = [representation fullScreenImage];
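For context, here is a minimal sketch of that replacement inside the question's resultblock (not code from my own app): fullScreenImage returns a screen-sized CGImage that is already cropped and rotated upright, so no orientation parameter is needed when wrapping it.

ALAssetRepresentation *rep = [myasset defaultRepresentation];
CGImageRef iref = [rep fullScreenImage]; // screen-sized, already rotated for display
UIImage *lImage = iref ? [UIImage imageWithCGImage:iref]
                       : [UIImage imageNamed:@"Nofile.png"];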
[UIImage imageWithCGImage:]
imageWithCGImage: is a convenience constructor (the returned object is autoreleased), and it seems to run out of memory with large images.
What about using alloc/init instead?
lImage = [[[UIImage alloc] initWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]] autorelease];