I have successfully implemented the Multipeer Connectivity framework in my app and can easily pass images and strings to other devices. My problem is when I try to pass an NSArray converted to NSData. When the Multipeer didReceiveData func is called, I always get the following crash:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSKeyedUnarchiver initForReadingWithData:]: incomprehensible archive
So here's how I send the data:
let myNSData: NSData = NSKeyedArchiver.archivedDataWithRootObject(arrayOfNumbers)
var error: NSError?
self.session.sendData(myNSData, toPeers: self.session.connectedPeers,
    withMode: MCSessionSendDataMode.Reliable, error: &error)
if error != nil {
    print("Error sending data: \(error?.localizedDescription)")
}
This is how I have tried to receive the data:
func session(session: MCSession!, didReceiveData data: NSData!,
    fromPeer peerID: MCPeerID!) {
    // Called when a peer sends an NSData to us
    // This needs to run on the main queue
    dispatch_async(dispatch_get_main_queue()) {
        // can't convert the data back to an NSArray without a crash
        let receivedArray = NSKeyedUnarchiver.unarchiveObjectWithData(data) as! NSArray
    }
}
It's true that you haven't provided all the relevant code needed to pinpoint the problem; however, I can say this: (1) your approach with respect to using an array makes no sense for this, and I've provided the image-broadcasting code I use below; and (2) try encoding the array using NSValue, and then archiving the NSValue object.
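For reference, the "incomprehensible archive" exception generally means the NSData handed to NSKeyedUnarchiver is not a keyed archive at all, so it is worth confirming that the bytes you unarchive are exactly the bytes you archived (and not, say, image data sent over the same session). A minimal round-trip sketch in Objective-C, with illustrative variable names:

NSArray *arrayOfNumbers = @[ @1, @2, @3 ];

// Archive on the sending side...
NSData *archived = [NSKeyedArchiver archivedDataWithRootObject:arrayOfNumbers];

// ...and unarchive on the receiving side. `received` stands in for the NSData
// delivered to didReceiveData; if it contains anything other than the archived
// bytes, the unarchiver throws "incomprehensible archive".
NSData *received = archived;
NSArray *numbers = [NSKeyedUnarchiver unarchiveObjectWithData:received];
NSLog(@"Unarchived: %@", numbers);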
Here's the best way to do it:
On the iOS device sending the image data:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];
CGImageRelease(newImage);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
if (image) {
NSData *data = UIImageJPEGRepresentation(image, 0.7);
NSError *err;
[((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
}
}
On the iOS device receiving the image data:
typedef struct {
size_t length;
void *data;
} ImageCacheDataStruct;
- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
dispatch_async(self.imageCacheDataQueue, ^{
dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
const void *dataBuffer = [data bytes];
size_t dataLength = [data length];
ImageCacheDataStruct *imageCacheDataStruct = calloc(1, sizeof(ImageCacheDataStruct));
imageCacheDataStruct->data = (void*)dataBuffer;
imageCacheDataStruct->length = dataLength;
__block const void * kMyKey;
dispatch_queue_set_specific(self.imageDisplayQueue, &kMyKey, (void *)imageCacheDataStruct, NULL);
dispatch_sync(self.imageDisplayQueue, ^{
ImageCacheDataStruct *imageCacheDataStruct = dispatch_queue_get_specific(self.imageDisplayQueue, &kMyKey);
const void *dataBytes = imageCacheDataStruct->data;
size_t length = imageCacheDataStruct->length;
NSData *imageData = [NSData dataWithBytes:dataBytes length:length];
UIImage *image = [UIImage imageWithData:imageData];
if (image) {
dispatch_async(dispatch_get_main_queue(), ^{
[((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
dispatch_semaphore_signal(self.semaphore);
});
}
});
});
}
The reason for the semaphores and the separate GCD queues is simple: you want the frames to display at equal time intervals. Otherwise, the video will seem to stall at times and then speed up well past normal to catch up. My scheme ensures that each frame plays one after another at the same pace, regardless of network bandwidth bottlenecks.
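The same pacing idea can be written more compactly. The following is a stripped-down sketch (not the exact code above); it assumes a serial self.displayQueue, a self.frameSemaphore created with dispatch_semaphore_create(1), and a self.previewLayer to draw into:

- (void)session:(MCSession *)session didReceiveData:(NSData *)data fromPeer:(MCPeerID *)peerID
{
    dispatch_async(self.displayQueue, ^{
        // Block until the previous frame has actually been committed to the screen.
        dispatch_semaphore_wait(self.frameSemaphore, DISPATCH_TIME_FOREVER);
        UIImage *image = [UIImage imageWithData:data];
        if (!image) {
            // Bad frame: let the next one through instead of deadlocking.
            dispatch_semaphore_signal(self.frameSemaphore);
            return;
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            self.previewLayer.contents = (__bridge id)image.CGImage;
            // Only let the next frame through once this one is on screen.
            dispatch_semaphore_signal(self.frameSemaphore);
        });
    });
}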
Related
I am trying to transmit real time video buffers on one iPhone to another iPhone (called client iPhone) for preview display, and also to accept commands from the client iPhone. I am thinking of a standard way to achieve this. The closest thing I found is AVCaptureMultipeerVideoDataOutput on Github.
However that still uses Multipeer connectivity framework and I think it still requires some setup on both iPhones. The thing I want is there should be ideally no setup required on both iPhones, as long as Wifi (or if possible, bluetooth) is enabled on both iPhones, the peers should recognize each other within the app and prompt user about device discovery. What are the standard ways to achieve this and any links to sample code?
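For reference, the closest thing to "no setup" that Multipeer Connectivity offers is its built-in advertiser and browser, which handle discovery and the user prompt inside the app. A hypothetical minimal sketch (the @"my-service" type and the property names are placeholders, and the advertiser and browser delegate methods still need to be implemented):

#import <MultipeerConnectivity/MultipeerConnectivity.h>

// One peer advertises itself over infrastructure Wi-Fi, peer-to-peer Wi-Fi and Bluetooth...
- (void)startAdvertising
{
    self.peerID = [[MCPeerID alloc] initWithDisplayName:[UIDevice currentDevice].name];
    self.session = [[MCSession alloc] initWithPeer:self.peerID];
    self.session.delegate = self;
    self.advertiser = [[MCNearbyServiceAdvertiser alloc] initWithPeer:self.peerID
                                                        discoveryInfo:nil
                                                          serviceType:@"my-service"];
    self.advertiser.delegate = self;   // accept invitations in the delegate callback
    [self.advertiser startAdvertisingPeer];
}

// ...and the other presents Apple's stock browser UI, which prompts the user
// with the peers it discovers.
- (void)startBrowsing
{
    self.peerID = [[MCPeerID alloc] initWithDisplayName:[UIDevice currentDevice].name];
    self.session = [[MCSession alloc] initWithPeer:self.peerID];
    self.session.delegate = self;
    MCBrowserViewController *browser =
        [[MCBrowserViewController alloc] initWithServiceType:@"my-service" session:self.session];
    browser.delegate = self;
    [self presentViewController:browser animated:YES completion:nil];
}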
EDIT: I got it working through Multipeer Connectivity after writing the code from scratch. As of now, I am sending the pixel buffers to the peer device by downscaling and compressing the data as JPEG. On the remote device, I have a UIImage set up to display the data every frame time. However, I think UIKit may not be the best way to display the data, even though the images are small. How do I display this data using OpenGL ES? Is direct decoding of JPEG possible in OpenGL ES?
Comments:
As of now, I am sending the pixel buffers to the peer device by downscaling and compressing the data as JPEG. On the remote device, I have a UIImage set up to display the data every frame time. However, I think UIKit may not be the best way to display the data, even though the images are small.
Turns out, this is the best way to transmit an image via the Multipeer Connectivity framework. I have tried all the alternatives:
1. I've compressed frames using VideoToolbox. Too slow.
2. I've compressed frames using the Compression framework. Too slow, but better.
Let me provide some code for #2:
On the iOS device transmitting image data:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
__block uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
dispatch_async(self.compressionQueue, ^{
uint8_t *compressed = malloc(sizeof(uint8_t) * 1228808);
size_t compressedSize = compression_encode_buffer(compressed, 1228808, baseAddress, 1228808, NULL, COMPRESSION_ZLIB);
NSData *data = [NSData dataWithBytes:compressed length:compressedSize];
free(compressed); // dataWithBytes: copies the bytes, so the scratch buffer can be freed
NSLog(@"Sending size: %lu", [data length]);
dispatch_async(dispatch_get_main_queue(), ^{
__autoreleasing NSError *err;
[((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
});
});
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
On the iOS device displaying image data:
typedef struct {
size_t length;
void *data;
} ImageCacheDataStruct;
- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
NSLog(@"Receiving size: %lu", [data length]);
uint8_t *original = malloc(sizeof(uint8_t) * 1228808);
size_t originalSize = compression_decode_buffer(original, 1228808, [data bytes], [data length], NULL, COMPRESSION_ZLIB);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(original, 640, 480, 8, 2560, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CGImageRelease(newImage);
if (image) {
dispatch_async(dispatch_get_main_queue(), ^{
[((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
});
}
}
Although this code produces original-quality images on the receiving end, you'll find this far too slow for real-time playback.
Here's the best way to do it:
On the iOS device sending the image data:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];
CGImageRelease(newImage);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
if (image) {
NSData *data = UIImageJPEGRepresentation(image, 0.7);
NSError *err;
[((ViewController *)self.parentViewController).session sendData:data toPeers:((ViewController *)self.parentViewController).session.connectedPeers withMode:MCSessionSendDataReliable error:&err];
}
}
On the iOS device receiving the image data:
- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
dispatch_async(self.imageCacheDataQueue, ^{
dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
const void *dataBuffer = [data bytes];
size_t dataLength = [data length];
ImageCacheDataStruct *imageCacheDataStruct = calloc(1, sizeof(ImageCacheDataStruct));
imageCacheDataStruct->data = (void*)dataBuffer;
imageCacheDataStruct->length = dataLength;
__block const void * kMyKey;
dispatch_queue_set_specific(self.imageDisplayQueue, &kMyKey, (void *)imageCacheDataStruct, NULL);
dispatch_sync(self.imageDisplayQueue, ^{
ImageCacheDataStruct *imageCacheDataStruct = dispatch_queue_get_specific(self.imageDisplayQueue, &kMyKey);
const void *dataBytes = imageCacheDataStruct->data;
size_t length = imageCacheDataStruct->length;
NSData *imageData = [NSData dataWithBytes:dataBytes length:length];
UIImage *image = [UIImage imageWithData:imageData];
if (image) {
dispatch_async(dispatch_get_main_queue(), ^{
[((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
dispatch_semaphore_signal(self.semaphore);
});
}
});
});
}
The reason for the semaphores and the separate GCD queues is simple: you want the frames to display at equal time intervals. Otherwise, the video will seem to stall at times and then speed up well past normal to catch up. My scheme ensures that each frame plays one after another at the same pace, regardless of network bandwidth bottlenecks.
I'm getting a UIImage from a CMSampleBufferRef video buffer every N video frames like:
- (void)imageFromVideoBuffer:(void(^)(UIImage* image))completion {
CMSampleBufferRef sampleBuffer = _myLastSampleBuffer;
if (sampleBuffer != nil) {
CFRetain(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
_lastAppendedVideoBuffer.sampleBuffer = nil;
if (_context == nil) {
_context = [CIContext contextWithOptions:nil];
}
CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CGImageRef cgImage = [_context createCGImage:ciImage fromRect:
CGRectMake(0, 0, CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer))];
__block UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(sampleBuffer);
if(completion) completion(image);
return;
}
if(completion) completion(nil);
}
Xcode and Instruments detect a memory leak, but I'm not able to get rid of it.
I'm releasing the CGImageRef and CMSampleBufferRef as usual:
CGImageRelease(cgImage);
CFRelease(sampleBuffer);
[UPDATE]
Here is the AVCapture output callback where I get the sampleBuffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
if (captureOutput == _videoOutput) {
_lastVideoBuffer.sampleBuffer = sampleBuffer;
id<CIImageRenderer> imageRenderer = _CIImageRenderer;
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
CIImage *ciImage = nil;
ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
if(_context==nil) {
_context = [CIContext contextWithOptions:nil];
}
CGImageRef processedCGImage = [_context createCGImage:ciImage
fromRect:[ciImage extent]];
//UIImage *image=[UIImage imageWithCGImage:processedCGImage];
CGImageRelease(processedCGImage);
NSLog(@"Captured image %@", ciImage);
}
});
}
}
The code that leaks is the createCGImage:fromRect: call:
CGImageRef processedCGImage = [_context createCGImage:ciImage
fromRect:[ciImage extent]];
even with an @autoreleasepool, a CGImageRelease of the returned CGImage reference, and the CIContext kept as an instance property.
This seems to be the same issue addressed here: Can't save CIImage to file on iOS without memory leaks
[UPDATE]
The leak seems to be due to a bug. The issue is well described in
Memory leak on CIContext createCGImage at iOS 9?
A sample project shows how to reproduce this leak: http://www.osamu.co.jp/DataArea/VideoCameraTest.zip
The latest comments there confirm that
It looks like they fixed this in 9.1b3. If anyone needs a workaround
that works on iOS 9.0.x, I was able to get it working with this:
Here is the relevant test code (Objective-C in this case):
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
if (error) return;
__block NSString *filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"ipdf_pic_%i.jpeg",(int)[NSDate date].timeIntervalSince1970]];
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^
{
@autoreleasepool
{
CIImage *enhancedImage = [CIImage imageWithData:imageData];
if (!enhancedImage) return;
static CIContext *ctx = nil; if (!ctx) ctx = [CIContext contextWithOptions:nil];
CGImageRef imageRef = [ctx createCGImage:enhancedImage fromRect:enhancedImage.extent format:kCIFormatBGRA8 colorSpace:nil];
UIImage *image = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationRight];
[[NSFileManager defaultManager] createFileAtPath:filePath contents:UIImageJPEGRepresentation(image, 0.8) attributes:nil];
CGImageRelease(imageRef);
}
});
}];
and the workaround for iOS 9.0 should be:
extension CIContext {
func createCGImage_(image:CIImage, fromRect:CGRect) -> CGImage {
let width = Int(fromRect.width)
let height = Int(fromRect.height)
let rawData = UnsafeMutablePointer<UInt8>.alloc(width * height * 4)
render(image, toBitmap: rawData, rowBytes: width * 4, bounds: fromRect, format: kCIFormatRGBA8, colorSpace: CGColorSpaceCreateDeviceRGB())
let dataProvider = CGDataProviderCreateWithData(nil, rawData, height * width * 4) {info, data, size in UnsafeMutablePointer<UInt8>(data).dealloc(size)}
return CGImageCreate(width, height, 8, 32, width * 4, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue), dataProvider, nil, false, .RenderingIntentDefault)!
}
}
We were experiencing a similar issue in an app we created, where we are processing each frame for feature keypoints with OpenCV, and sending off a frame every couple of seconds. After a while of running we would end up with quite a few memory pressure messages.
We managed to rectify this by running our processing code in its own autorelease pool, like so (jpegDataFromSampleBufferAndCrop does something similar to what you are doing, with added cropping):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {
if ([self.lastFrameSentAt timeIntervalSinceNow] < -kContinuousRateInSeconds) {
NSData *imageData = [self jpegDataFromSampleBufferAndCrop:sampleBuffer];
if (imageData) {
[self processImageData:imageData];
}
self.lastFrameSentAt = [NSDate date];
imageData = nil;
}
}
}
I can confirm that this memory leak still exists on iOS 9.2. (I've also posted on the Apple Developer Forum.)
I get the same memory leak on iOS 9.2. I've tried dropping the EAGLContext and using MetalKit and MTLDevice instead. I've tested different CIContext methods such as drawImage, createCGImage and render, but nothing seems to work.
It is very clear that this is a bug as of iOS 9. Try it out yourself by downloading the example app from Apple (see below), then run the same project on a device with iOS 8.4 and on a device with iOS 9.2, and pay attention to the memory gauge in Xcode.
Download https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013109
Add this to APLEAGLView.h:20:
#property (strong, nonatomic) CIContext* ciContext;
Replace APLEAGLView.m:118 with this
[EAGLContext setCurrentContext:_context];
_ciContext = [CIContext contextWithEAGLContext:_context];
And finally, replace APLEAGLView.m:341-343 with this
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
@autoreleasepool
{
CIImage* sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIFilter* filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, sourceImage, nil];
CIImage* filteredImage = filter.outputImage;
[_ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];
}
glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
I'm still learning about AVFoundation, so I'm unsure how best to approach the problem of needing to capture a high-quality still image while providing a low-quality preview video stream.
I've got an app that needs to take high quality images (AVCaptureSessionPresetPhoto), but process the preview video stream using OpenCV - for which a much lower resolution is acceptable. Simply using the base OpenCV Video Camera class is no good, as setting the defaultAVCaptureSessionPreset to AVCaptureSessionPresetPhoto results in the full resolution frame being passed to processImage - which is very slow indeed.
How can I have a high-quality connection to the device that I can use for capturing the still image, and a low-quality connection that can be processed and displayed? A description of how I need to set up sessions/connections would be very helpful. Is there an open-source example of such an app?
I did something similar - I grabbed the pixels in the delegate method, made a CGImageRef of them, then dispatched that to the normal-priority queue, where it was modified. Since AVFoundation must be using a CADisplayLink for the callback method, it has the highest priority. In my particular case I was not grabbing all the pixels, so it worked on an iPhone 4 at 30 fps. Depending on which devices you want to support, you have trade-offs in pixel count, fps, etc.
Another idea is to grab a power-of-2 subset of pixels - for instance, every 4th pixel in each row and every 4th row. Again, I did something similar in my app at 20-30 fps. You can then operate further on this smaller image in dispatched blocks.
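As a rough illustration of that subsampling idea (this is not the answerer's code; it assumes the video data output is configured for kCVPixelFormatType_32BGRA and that it runs inside captureOutput:didOutputSampleBuffer:fromConnection:):

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t smallWidth  = CVPixelBufferGetWidth(imageBuffer) / 4;
size_t smallHeight = CVPixelBufferGetHeight(imageBuffer) / 4;

// Copy every 4th BGRA pixel of every 4th row into a 1/16th-size buffer.
uint8_t *small = malloc(smallWidth * smallHeight * 4);
for (size_t y = 0; y < smallHeight; y++) {
    uint8_t *srcRow = src + (y * 4) * srcBytesPerRow;
    uint8_t *dstRow = small + y * smallWidth * 4;
    for (size_t x = 0; x < smallWidth; x++) {
        memcpy(dstRow + x * 4, srcRow + (x * 4) * 4, 4);  // one BGRA pixel
    }
}
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

// Hand `small` off to a processing queue, and free() it when that work is done.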
If this seems daunting, offer a bounty for working code.
CODE:
// Image is oriented with bottle neck to the left and the bottle bottom on the right
- (void)captureOutput:(AVCaptureVideoDataOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
#if 1
AVCaptureDevice *camera = [(AVCaptureDeviceInput *)[captureSession.inputs lastObject] device];
if(camera.adjustingWhiteBalance || camera.adjustingExposure) NSLog(@"GOTCHA: %d %d", camera.adjustingWhiteBalance, camera.adjustingExposure);
printf("foo\n");
#endif
if(saveState != saveOne && saveState != saveAll) return;
#autoreleasepool {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//NSLog(@"PE: value=%lld timeScale=%d flags=%x", prStamp.value, prStamp.timescale, prStamp.flags);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
NSRange captureRange;
if(saveState == saveOne) {
#if 0 // B G R A MODE !
NSLog(@"PIXEL_TYPE: 0x%lx", CVPixelBufferGetPixelFormatType(imageBuffer));
uint8_t *newPtr = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
NSLog(@"ONE VAL %x %x %x %x", newPtr[0], newPtr[1], newPtr[2], newPtr[3]);
}
exit(0);
#endif
[edgeFinder setupImageBuffer:imageBuffer];
BOOL success = [edgeFinder delineate:1];
if(!success) {
dispatch_async(dispatch_get_main_queue(), ^{ edgeFinder = nil; [delegate error]; });
saveState = saveNone;
} else
bottleRange = edgeFinder.sides;
xRange.location = edgeFinder.shoulder;
xRange.length = edgeFinder.bottom - xRange.location;
NSLog(@"bottleRange 1: %@ neck=%d bottom=%d", NSStringFromRange(bottleRange), edgeFinder.shoulder, edgeFinder.bottom );
//searchRows = [edgeFinder expandRange:bottleRange];
rowsPerSwath = lrintf((bottleRange.length*NUM_DEGREES_TO_GRAB)*(float)M_PI/360.0f);
NSLog(@"rowsPerSwath = %d", rowsPerSwath);
saveState = saveIdling;
captureRange = NSMakeRange(0, [WLIPBase numRows]);
dispatch_async(dispatch_get_main_queue(), ^
{
[delegate focusDone];
edgeFinder = nil;
captureOutput.alwaysDiscardsLateVideoFrames = YES;
});
} else {
NSInteger rows = rowsPerSwath;
NSInteger newOffset = bottleRange.length - rows;
if(newOffset & 1) {
--newOffset;
++rows;
}
captureRange = NSMakeRange(bottleRange.location + newOffset/2, rows);
}
//NSLog(#"captureRange=%u %u", captureRange.location, captureRange.length);
/*Get information about the image*/
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
// Note Apple sample code cheats big time - the phone is big endian so this reverses the "apparent" order of bytes
CGContextRef newContext = CGBitmapContextCreate(NULL, width, captureRange.length, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // Video in ARGB format
assert(newContext);
uint8_t *newPtr = (uint8_t *)CGBitmapContextGetData(newContext);
size_t offset = captureRange.location * bytesPerRow;
memcpy(newPtr, baseAddress + offset, captureRange.length * bytesPerRow);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
OSAtomicIncrement32(&totalImages);
int32_t curDepth = OSAtomicIncrement32(&queueDepth);
if(curDepth > maxDepth) maxDepth = curDepth;
#define kImageContext @"kImageContext"
#define kState @"kState"
#define kPresTime @"kPresTime"
CMTime prStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // when it was taken?
//CMTime deStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // now?
NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
[NSValue valueWithBytes:&saveState objCType:@encode(saveImages)], kState,
[NSValue valueWithNonretainedObject:(__bridge id)newContext], kImageContext,
[NSValue valueWithBytes:&prStamp objCType:@encode(CMTime)], kPresTime,
nil ];
dispatch_async(imageQueue, ^
{
// could be on any thread now
OSAtomicDecrement32(&queueDepth);
if(!isCancelled) {
saveImages state; [(NSValue *)[dict objectForKey:kState] getValue:&state];
CGContextRef context; [(NSValue *)[dict objectForKey:kImageContext] getValue:&context];
CMTime stamp; [(NSValue *)[dict objectForKey:kPresTime] getValue:&stamp];
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImageOrientation orient = state == saveOne ? UIImageOrientationLeft : UIImageOrientationUp;
UIImage *image = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:orient]; // imageWithCGImage: UIImageOrientationUp UIImageOrientationLeft
CGImageRelease(newImageRef);
NSData *data = UIImagePNGRepresentation(image);
// NSLog(@"STATE:[%d]: value=%lld timeScale=%d flags=%x", state, stamp.value, stamp.timescale, stamp.flags);
{
NSString *name = [NSString stringWithFormat:@"%d.png", num];
NSString *path = [[wlAppDelegate snippetsDirectory] stringByAppendingPathComponent:name];
BOOL ret = [data writeToFile:path atomically:NO];
//NSLog(@"WROTE %d err=%d w/time %f path:%@", num, ret, (double)stamp.value/(double)stamp.timescale, path);
if(!ret) {
++errors;
} else {
dispatch_async(dispatch_get_main_queue(), ^
{
if(num) [delegate progress:(CGFloat)num/(CGFloat)(MORE_THAN_ONE_REV * SNAPS_PER_SEC) file:path];
} );
}
++num;
}
} else NSLog(#"CANCELLED");
} );
}
}
With AVCaptureSessionPresetPhoto, the session uses a small video preview (about 1000x700 on an iPhone 6) and a high-resolution photo (about 3000x2000).
So I use a modified CvPhotoCamera class to process the small preview and take full-size photos. I posted that code here: https://stackoverflow.com/a/31478505/1994445
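For comparison, a plain-AVFoundation version of the same split (a video data output that receives the preview-sized frames for per-frame processing, plus a still image output for full-quality captures) might look roughly like this; it is a hypothetical sketch, and setupSplitSession is just a placeholder name:

- (void)setupSplitSession
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto; // stills at full photo resolution

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    if ([session canAddInput:input]) [session addInput:input];

    // BGRA frames for per-frame processing; with the Photo preset these arrive
    // at the smaller preview size, as noted above.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    videoOutput.alwaysDiscardsLateVideoFrames = YES;
    [videoOutput setSampleBufferDelegate:self
                                   queue:dispatch_queue_create("video.processing", DISPATCH_QUEUE_SERIAL)];
    if ([session canAddOutput:videoOutput]) [session addOutput:videoOutput];

    // Full-resolution stills on demand.
    AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
    if ([session canAddOutput:stillOutput]) [session addOutput:stillOutput];

    [session startRunning];
}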
I need to obtain a UIImage from the uncompressed image data in a CMSampleBufferRef. I'm using this code:
[captureStillImageOutput captureStillImageAsynchronouslyFromConnection:connection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
// that famous function from Apple docs found on a lot of websites
// does NOT work for still images
UIImage *capturedImage = [self imageFromSampleBuffer:imageSampleBuffer];
}];
http://developer.apple.com/library/ios/#qa/qa1702/_index.html is a link to imageFromSampleBuffer function.
But it does not work properly. :(
There is a jpegStillImageNSDataRepresentation:imageSampleBuffer method, but it gives the compressed data (well, because JPEG).
How can I get a UIImage created from the most raw, non-compressed data after capturing a still image?
Maybe, I should specify some settings to video output? I'm currently using those:
captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
captureStillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
I've noticed, that output has a default value for AVVideoCodecKey, which is AVVideoCodecJPEG. Can it be avoided in any way, or does it even matter when capturing still image?
I found something related here: Raw image data from camera like "645 PRO", but I just need a UIImage, without using OpenCV, OpenGL ES, or other third-party libraries.
The method imageFromSampleBuffer does work; in fact, I'm using a changed version of it. But if I remember correctly, you need to set the outputSettings right. I think you need to set the key as kCVPixelBufferPixelFormatTypeKey and the value as kCVPixelFormatType_32BGRA.
So for example:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* outputSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[newStillImageOutput setOutputSettings:outputSettings];
EDIT
I am using those settings to take stillImages not video.
Is your sessionPreset AVCaptureSessionPresetPhoto? There may be problems with that
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];
EDIT 2
The part about saving it to a UIImage is identical to the one from the documentation. That's the reason I was asking for other origins of the problem, but I guess that was just grasping at straws.
There is another way I know of, but that requires OpenCV.
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
I guess that is of no help to you, sorry. I don't know enough to think of other origins for your problem.
Here's a more efficient way:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data;
}
My main problem is that I need to obtain a thumbnail for an ALAsset object.
I tried a lot of solutions and searched Stack Overflow for days; all the solutions I found don't work for me due to these constraints:
I can't use the default thumbnail because it's too small;
I can't use the fullScreen or fullResolution image because I have a lot of images on screen;
I can't use UIImage or UIImageView for resizing because those load the fullResolution image;
I can't load the image into memory; I'm working with 20 Mpx images;
I need to create a 200x200 px version of the original asset to load on screen.
This is the last iteration of the code I came up with:
#import <AssetsLibrary/ALAsset.h>
#import <ImageIO/ImageIO.h>
// ...
ALAsset *asset;
// ...
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
NSDictionary *thumbnailOptions = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, kCGImageSourceCreateThumbnailWithTransform,
(id)kCFBooleanTrue, kCGImageSourceCreateThumbnailFromImageAlways,
(id)[NSNumber numberWithFloat:200], kCGImageSourceThumbnailMaxPixelSize,
nil];
CGImageRef generatedThumbnail = [assetRepresentation CGImageWithOptions:thumbnailOptions];
UIImage *thumbnailImage = [UIImage imageWithCGImage:generatedThumbnail];
The problem is that the resulting CGImageRef is neither transformed by orientation nor of the specified max pixel size.
I also tried to find a way of resizing using CGImageSource, but:
the asset URL can't be used with CGImageSourceCreateWithURL:;
I can't extract from ALAsset or ALAssetRepresentation a CGDataProviderRef to use with CGImageSourceCreateWithDataProvider:;
CGImageSourceCreateWithData: requires me to store the fullResolution or fullscreen asset in memory in order to work.
Am I missing something?
Is there another way of obtaining a custom thumbnail from ALAsset or ALAssetRepresentation that I'm missing?
Thanks in advance.
You can use CGImageSourceCreateThumbnailAtIndex to create a small image from a potentially-large image source. You can load your image from disk using the ALAssetRepresentation's getBytes:fromOffset:length:error: method, and use that to create a CGImageSourceRef.
Then you just need to pass the kCGImageSourceThumbnailMaxPixelSize and kCGImageSourceCreateThumbnailFromImageAlways options to CGImageSourceCreateThumbnailAtIndex with the image source you've created, and it will create a smaller version for you without loading the huge version into memory.
I've written a blog post and gist with this technique fleshed out in full.
There is a problem with the approach mentioned by Jesse Rusak. Your app will crash with the following stack if the asset is too large:
0 CoreGraphics 0x2f602f1c x_malloc + 16
1 libsystem_malloc.dylib 0x39fadd63 malloc + 52
2 CoreGraphics 0x2f62413f CGDataProviderCopyData + 178
3 ImageIO 0x302e27b7 CGImageReadCreateWithProvider + 156
4 ImageIO 0x302e2699 CGImageSourceCreateWithDataProvider + 180
...
Link Register Analysis:
Symbol: malloc + 52
Description: We have determined that the link register (lr) is very likely to contain the return address of frame #0's calling function, and have inserted it into the crashing thread's backtrace as frame #1 to aid in analysis. This determination was made by applying a heuristic to determine whether the crashing function was likely to have created a new stack frame at the time of the crash.
Type: 1
It is very easy to simulate the crash. Let's read data from the ALAssetRepresentation in getAssetBytesCallback in small chunks. The particular chunk size is not important; the only thing that matters is that the callback gets called about 20 times.
static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
static int i = 0; ++i;
ALAssetRepresentation *rep = (__bridge id)info;
NSError *error = nil;
NSLog(@"%d: off:%lld len:%zu", i, position, count);
const size_t countRead = [rep getBytes:(uint8_t *)buffer fromOffset:position length:128 error:&error];
return countRead;
}
Here are the tail lines of the log:
2014-03-21 11:21:14.250 MRCloudApp[3461:1303] 20: off:2432 len:2156064
MRCloudApp(3461,0x701000) malloc: *** mach_vm_map(size=217636864) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
I introduced a counter to prevent this crash. You can see my fix below:
typedef struct {
void *assetRepresentation;
int decodingIterationCount;
} ThumbnailDecodingContext;
static const int kThumbnailDecodingContextMaxIterationCount = 16;
static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
ThumbnailDecodingContext *decodingContext = (ThumbnailDecodingContext *)info;
ALAssetRepresentation *assetRepresentation = (__bridge ALAssetRepresentation *)decodingContext->assetRepresentation;
if (decodingContext->decodingIterationCount == kThumbnailDecodingContextMaxIterationCount) {
NSLog(@"WARNING: Image %@ is too large for thumbnail extraction.", [assetRepresentation url]);
return 0;
}
++decodingContext->decodingIterationCount;
NSError *error = nil;
size_t countRead = [assetRepresentation getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];
if (countRead == 0 || error != nil) {
NSLog(@"ERROR: Failed to decode image %@: %@", [assetRepresentation url], error);
return 0;
}
return countRead;
}
- (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(CGFloat)size {
NSParameterAssert(asset);
NSParameterAssert(size > 0);
ALAssetRepresentation *representation = [asset defaultRepresentation];
if (!representation) {
return nil;
}
CGDataProviderDirectCallbacks callbacks = {
.version = 0,
.getBytePointer = NULL,
.releaseBytePointer = NULL,
.getBytesAtPosition = getAssetBytesCallback,
.releaseInfo = NULL
};
ThumbnailDecodingContext decodingContext = {
.assetRepresentation = (__bridge void *)representation,
.decodingIterationCount = 0
};
CGDataProviderRef provider = CGDataProviderCreateDirect((void *)&decodingContext, [representation size], &callbacks);
NSParameterAssert(provider);
if (!provider) {
return nil;
}
CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);
NSParameterAssert(source);
if (!source) {
CGDataProviderRelease(provider);
return nil;
}
CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef) @{(NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
(NSString *)kCGImageSourceThumbnailMaxPixelSize : [NSNumber numberWithFloat:size],
(NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES});
UIImage *image = nil;
if (imageRef) {
image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
}
CFRelease(source);
CGDataProviderRelease(provider);
return image;
}
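For completeness, a hypothetical call site for the helper above, using the 200 px size mentioned in the question (asset and imageView stand in for whatever you are iterating over and displaying):

// Hypothetical usage of thumbnailForAsset:maxPixelSize:
UIImage *thumbnail = [self thumbnailForAsset:asset maxPixelSize:200.0f];
imageView.image = thumbnail;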