Raw image data from camera like "645 PRO" - iOS

Some time ago I asked this question and got a good answer:
I've been searching this forum up and down, but I couldn't find what I
really need. I want to get the raw image data from the camera. So far I
have tried to get the data out of the imageDataSampleBuffer from the
method
captureStillImageAsynchronouslyFromConnection:completionHandler: and
write it to an NSData object, but that didn't work. Maybe I'm on
the wrong track, or maybe I'm just doing it wrong. What I don't want is
for the image to be compressed in any way.
The easy way is to use jpegStillImageNSDataRepresentation: from
AVCaptureStillImageOutput, but like I said I don't want it to be
compressed.
Thanks!
Raw image data from camera
I thought I could work with this, but I eventually realized that I need to get the raw image data more directly, in a similar way to how it is done in "645 PRO".
645 PRO: RAW Redux
The pictures on that site show that they get the raw data before any JPEG compression is done. That is what I want to do. My guess is that I need to transform the imageDataSampleBuffer, but I don't see a way to do it entirely without compression.
"645 PRO" also saves its pictures as TIFF, so I think it uses at least one additional library.
I don't want to make a photo app, but I need the best quality I can get to check for certain features in a picture.
Thanks!
Edit 1:
So after trying and searching in different directions for a while now I decided to give a status update.
The final goal of this project is to check for certain features in a picture, which will happen with the help of OpenCV. But until the app is able to do that on the phone, I'm trying to get mostly uncompressed pictures out of the phone to analyse them on the computer.
Therefore I want to save the "NSData instance containing the uncompressed BGRA bytes returned from the camera" that I'm able to get with Brad Larson's code as a BMP or TIFF file.
As I said in a comment, I tried using OpenCV for this (it will be needed anyway), but the best I could do was turn it into a UIImage with a function from Computer Vision Talks:
CVPixelBufferLockBaseAddress(imageBuffer, 0); // imageBuffer comes from CMSampleBufferGetImageBuffer()
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame((int)height, (int)width, CV_8UC4, baseAddress);
UIImage *testImage = [UIImage imageWithMat:frame andImageOrientation:UIImageOrientationUp];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// imageWithMat:andImageOrientation: is the function from Computer Vision Talks, which I can post if someone wants to see it
ImageMagick approach
Another thing I tried was using ImageMagick as suggested in another post.
But I couldn't find a way to do it without using something like UIImagePNGRepresentation or UIImageJPEGRepresentation.
For now I'm trying to do something with libtiff using this tutorial.
Maybe someone has an idea or knows a much easier way to convert my buffer object into an uncompressed picture.
Thanks in advance again!
Edit 2:
I found something! And I must say I was very blind.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, (void*)baseAddress);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.TIFF", picNum]];
const char* cPath = [filePath cStringUsingEncoding:NSMacOSRomanStringEncoding];
const cv::string newPaths = (const cv::string)cPath;
cv::imwrite(newPaths, frame);
I just have to use the imwrite function from OpenCV. This way I get TIFF files of around 30 MB straight after the Bayer filter!

Wow, that blog post was something special. A whole lot of words to just state that they get the sample buffer bytes that Apple hands you back from a still image. There's nothing particularly innovative about their approach, and I know a number of camera applications that do this.
You can get at the raw bytes returned from a photo taken with an AVCaptureStillImageOutput using code like the following:
[photoOutput captureStillImageAsynchronouslyFromConnection:[[photoOutput connections] objectAtIndex:0]
                                          completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
    // Do whatever with your bytes
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}];
This will give you an NSData instance containing the uncompressed BGRA bytes returned from the camera. You can save these to disk or do whatever you want with them. If you really need to process the bytes themselves, I'd avoid the overhead of the NSData creation and just work with the byte array from the pixel buffer.
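For example, a minimal sketch of dumping that NSData instance to a file for later analysis on the computer (the frame.bgra file name is just an arbitrary choice of mine; the file holds raw pixel bytes, not a viewable image format):
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *rawPath = [documentsDirectory stringByAppendingPathComponent:@"frame.bgra"];
[dataForRawBytes writeToFile:rawPath atomically:YES]; // dataForRawBytes comes from the completion handler above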

I was able to solve it with OpenCV. Thanks to everyone who helped me.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat frame(height, width, CV_8UC4, (void*)baseAddress);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.BMP", picNum]];
const char* cPath = [filePath cStringUsingEncoding:NSMacOSRomanStringEncoding];
const cv::string newPaths = (const cv::string)cPath;
cv::imwrite(newPaths, frame);
I just have to use the imwrite function from OpenCV. This way I get BMP files of around 24 MB straight after the Bayer filter!

While the core of the answer comes from Brad at iOS: Get pixel-by-pixel data from camera, a key element is completely unclear from Brad's reply. It's hidden in "once you have your capture session configured...".
You need to set the correct outputSettings for your AVCaptureStillImageOutput.
For example, setting kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange will give you a YCbCr imageDataSampleBuffer in captureStillImageAsynchronouslyFromConnection:completionHandler:, which you can then manipulate to your heart's content.
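For example, a minimal configuration sketch (photoOutput here is assumed to be the AVCaptureStillImageOutput you later call captureStillImageAsynchronouslyFromConnection:completionHandler: on; use kCVPixelFormatType_32BGRA instead if you want the BGRA bytes from the answer above):
AVCaptureStillImageOutput *photoOutput = [[AVCaptureStillImageOutput alloc] init];
photoOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
[captureSession addOutput:photoOutput]; // captureSession being your already configured AVCaptureSession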

As @Wildaker mentioned, for a specific piece of code to work you have to be sure which pixel format the camera is sending you. The code from @thomketler will work if it's set to the 32-bit RGBA format.
Here is code for the camera's default YUV format, using OpenCV:
cv::Mat convertImage(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    int w = (int)CVPixelBufferGetWidth(cameraFrame);
    int h = (int)CVPixelBufferGetHeight(cameraFrame);
    void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

    // The luma plane is followed by the interleaved chroma plane, so the
    // buffer is treated as a single-channel image of height h + h/2.
    cv::Mat img_buffer(h + h/2, w, CV_8UC1, (uchar *)baseAddress);
    cv::Mat cam_frame;
    cv::cvtColor(img_buffer, cam_frame, cv::COLOR_YUV2BGR_NV21);
    cam_frame = cam_frame.t();

    // End processing
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
    return cam_frame;
}
cam_frame should have the full BGR frame. I hope that helps.
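As a usage sketch (my own example, not part of the original answer), this is how convertImage could be called from the video data output delegate, assuming the output is configured for the bi-planar YUV pixel format:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    cv::Mat bgrFrame = convertImage(sampleBuffer);
    // run your OpenCV feature checks on bgrFrame here
}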

Related

image uploaded rotated other way

let image_data = UIImageJPEGRepresentation(self.imagetoadd.image!,0.0)
The image in iOS (I'm using Swift 3 to do this) is being uploaded rotated. How can I solve this?
JPEG images usually contain an EXIF dictionary, where a lot of information about how the image was taken is stored; image rotation is one of them.
UIImage instances keep this information (if the original image has it) as well, inside a specific property called imageOrientation.
As far as I remember, this information is stripped out by using the method UIImageJPEGRepresentation.
To create a correct data instance with the above information you must use Core Graphics methods, or normalize the rotation before sending the image.
To normalize the image, something like this should be enough:
CGImageRef cgRef = imageToSave.CGImage;
UIImage * fixImage = [[UIImage alloc] initWithCGImage:cgRef scale:imageToSave.scale orientation:UIImageOrientationUp];
To keep the rotation information:
CFURLRef url = (__bridge_retained CFURLRef)[NSURL fileURLWithPath:path]; // save data path
NSDictionary *metadataDictionary = [self imageMetadataForPath:pathToOriginalImage];
CFMutableDictionaryRef metadataImage = (__bridge_retained CFMutableDictionaryRef)[metadataDictionary mutableCopy];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, image, metadataImage); // image is the CGImage you want to save
if (!CGImageDestinationFinalize(destination)) {
    DLog(@"Failed to write image to %@", path);
}
Where -imageMetadataForPath: is:
- (NSDictionary *)imageMetadataForPath:(NSString *)imagePath {
    NSURL *imageURL = [NSURL fileURLWithPath:imagePath];
    CGImageSourceRef mySourceRef = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
    NSDictionary *dict = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(mySourceRef, 0, NULL));
    CFRelease(mySourceRef);
    return dict;
}
This is copied and pasted from a project of mine, so you will probably need to do a fair amount of refactoring, not least because it uses manual memory management with Core Foundation while you are using Swift. Of course, if you use this last set of instructions, the backend code must be prepared to deal with image orientation too.
If you want to know more about rotation, here is a link.

Resizing CMSampleBufferRef provided by captureStillImageBracketAsynchronouslyFromConnection:withSettingsArray:completionHandler:

In the app I'm working on, we're capturing photos which need to have a 4:3 aspect ratio in order to maximize the field of view we capture. Up until now we were using the AVCaptureSessionPreset640x480 preset, but now we're in need of a larger resolution.
As far as I've figured, the only other two 4:3 formats are 2592x1936 and 3264x2448. Since these are too large for our use case, I need a way to downsize them. I looked into a bunch of options but did not find a way (preferably without copying the data) to do this in an efficient manner without losing the EXIF data.
vImage was one of the things I looked into, but as far as I've figured the data would need to be copied and the EXIF data would be lost. Another option was creating a UIImage from the data provided by jpegStillImageNSDataRepresentation, scaling it, and getting the data back. This approach also seems to strip the EXIF data.
The ideal approach here would be resizing the buffer contents directly and resizing the photo. Does anyone have an idea how I would go about doing this?
I ended up using ImageIO for resizing purposes. Leaving this piece of code here in case someone runs into the same problem, as I've spent way too much time on this.
This code will preserve the EXIF data, but will create a copy of the image data. I ran some benchmarks - the execution time for this method is ~0.05 seconds on an iPhone 6, using AVCaptureSessionPresetPhoto as the preset for the original photo.
If someone does have a more optimal solution, please leave a comment.
- (NSData *)resizeJpgData:(NSData *)jpgData
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpgData, NULL);

    // Create a copy of the metadata that we'll attach to the resized image
    NSDictionary *metadata = (NSDictionary *)CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, NULL));
    NSMutableDictionary *metadataAsMutable = [metadata mutableCopy];

    // Type of the image (e.g. public.jpeg)
    CFStringRef UTI = CGImageSourceGetType(source);

    NSDictionary *options = @{ (id)kCGImageSourceCreateThumbnailFromImageIfAbsent: (id)kCFBooleanTrue,
                               (id)kCGImageSourceThumbnailMaxPixelSize: @(MAX(FORMAT_WIDTH, FORMAT_HEIGHT)),
                               (id)kCGImageSourceTypeIdentifierHint: (__bridge NSString *)UTI };

    CGImageRef resizedImage = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);

    NSMutableData *destData = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)destData, UTI, 1, NULL);
    if (!destination) {
        NSLog(@"Could not create image destination");
    }
    CGImageDestinationAddImage(destination, resizedImage, (__bridge CFDictionaryRef)metadataAsMutable);

    // Tell the destination to write the image data and metadata into our data object
    BOOL success = CGImageDestinationFinalize(destination);
    if (!success) {
        NSLog(@"Could not create data from image destination");
    }

    if (destination) {
        CFRelease(destination);
    }
    CGImageRelease(resizedImage);
    CFRelease(source);

    return destData;
}
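As a hypothetical usage sketch (stillImageOutput and connection are assumed to come from your existing capture session setup), the resized JPEG can be produced straight from the still image callback:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    NSData *jpgData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    NSData *resizedJpgData = [self resizeJpgData:jpgData];
    // write resizedJpgData to disk or hand it to the upload code
}];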

How do I generate 24-bit True Color Animated Gifs in iOS?

I want to generate a true-color animated GIF from a couple of PNG files represented as base64 strings. I found this post and did something similar. I have an array with the data URLs:
NSArray* imageDataUrls; // array with the data urls without data:image/png;base64, prefix
Here is what I did:
NSDictionary *fileProperties = @{
    (__bridge id)kCGImagePropertyGIFDictionary: @{
        (__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
    }
};
NSDictionary *frameProperties = @{
    (__bridge id)kCGImagePropertyGIFDictionary: @{
        (__bridge id)kCGImagePropertyGIFDelayTime: @0.4f, // a float (not double!) in seconds, rounded to centiseconds in the GIF data
    }
};

NSURL *documentsDirectoryURL = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
NSURL *fileURL = [documentsDirectoryURL URLByAppendingPathComponent:@"animated.gif"];

CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
CGImageDestinationRef destination = CGImageDestinationCreateWithData(destinationData, kUTTypeGIF, kFrameCount, NULL);
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);

NSData *myImageData;
UIImage *myImage = [UIImage alloc];
for (NSUInteger i = 0; i < kFrameCount; i++) {
    @autoreleasepool {
        myImageData = [NSData dataFromBase64String:[imageDataUrls objectAtIndex:i]];
        myImage = [myImage initWithData:myImageData];
        CGImageDestinationAddImage(destination, myImage.CGImage, (__bridge CFDictionaryRef)frameProperties);
    }
}
myImageData = nil;
myImage = nil;

CGImageDestinationFinalize(destination); // without finalizing, nothing is written to destinationData
CFRelease(destination);

NSData *data = (__bridge NSData *)destinationData;
Finally, I send the gif image as base64EncodedString back to the phonegap container.
// send back gif image
CDVPluginResult* pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString: [data base64EncodedString]];
It works well, but the quality of the resulting GIF image is poor. This is because it has only 256 colors.
Here is the original png image:
Here is a screenshot of the generated gif image:
How do I get the same quality as I imported, i.e., how can I raise the quality level of the generated GIF? How do I generate true-color GIFs on iOS?
GIFs are not designed to store true-color data, and they are also poorly suited for animations [1]. Since this is such an unusual use of GIFs, you will have to write a lot of your own code.
1. Break each frame into rectangular chunks, where each chunk contains at most 256 distinct colors. The easiest way to do this is to use 16x16 chunks.
2. Convert each chunk to an indexed image.
3. Add each chunk to the GIF. For the first chunk in a frame, use the frame delay. For other chunks in a frame, use a delay of 0.
Done. You will have to familiarize yourself with the GIF specification, which is freely available online (GIF89a specification at W3.org, see section 23). You will also need to find an LZW compressor, which is not too hard to find. The animation will also use an obscene amount of storage: including base64 conversion, I estimate about 43 bits/pixel, or about 1.2 Gbit/s for 720p video, which is about 400x as much storage as you would use for high-quality MPEG4 or WebM, and probably about 3x as much storage as the PNGs would require. The storage and bandwidth requirements will likely incur undesirable costs for hosts and clients, unless the animations are very short and small.
Note that this will not allow you to use alpha transparency; this is a hard limitation of the GIF format.
Opinion
The idea of putting high quality animations in a GIF is absurd in the extreme, even though it is possible. It is especially absurd given the available alternatives:
If you are targeting modern browsers or mobile devices, MPEG4 (support matrix) and WebM (support matrix) are the obvious choices. Between the two formats, only Opera Mini supports neither.
If you are targeting older browsers or less-capable devices, or if you cannot afford MPEG4 encoding, you can encode the frames as individual JPEG or PNG images. Bundle these with a JSON payload with the timing, and use JavaScript or other client-side scripting to switch between animation frames. This works surprisingly well.
Notes
[1] From the GIF 89a specification:
Animation - The Graphics Interchange Format is not intended as a platform for animation, even though it can be done in a limited way.

Most memory efficient way to save a photo to disk on iPhone?

From profiling with Instruments I have learned that the way I am saving images to disk is resulting in memory spikes to ~60 MB. This results in the app emitting low memory warnings, which (inconsistently) leads to crashes on the iPhone 4S running iOS 7.
I need the most efficient way to save an image to disk.
I am currently using this code
+ (void)saveImage:(UIImage *)image withName:(NSString *)name {
    NSData *data = UIImageJPEGRepresentation(image, 1.0);
    DLog(@"*** SIZE *** : Saving file of size %lu", (unsigned long)[data length]);
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *fullPath = [documentsDirectory stringByAppendingPathComponent:name];
    [fileManager createFileAtPath:fullPath contents:data attributes:nil];
}
Notes:
Reducing the value of the compressionQuality argument in UIImageJPEGRepresentation does not reduce the memory spike significantly enough.
e.g.
compressionQuality = 0.8 reduced the memory spike by 3 MB on average over 100 writes.
However, it does reduce the size of the data on disk (obviously), but this does not help me.
UIImagePNGRepresentation in place of UIImageJPEGRepresentation is worse for this. It is slower and results in higher spikes.
Is it possible that this approach with ImageIO would be more efficient? If so why?
If anyone has any suggestions it would be great. Thanks
Edit:
Notes on some of the points outlined in the questions below.
a) Although I was saving multiple images, I was not saving them in a loop. I did a bit of reading around and testing and found that an autorelease pool wouldn't help me.
b) The photos were not 60 MB in size each. They were photos taken on the iPhone 4S.
With this in mind, I went back to trying to overcome what I thought the problem was: the line NSData *data = UIImageJPEGRepresentation(image, 1.0);.
The memory spikes that were causing the crash can be seen in the screenshot below. They corresponded to when UIImageJPEGRepresentation was called. I also ran Time Profiler and System Usage which pointed me in the same direction.
Long story short, I moved over to AVFoundation and took the photo image data using
photoData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
which returns an NSData object; I then used this as the data to write using NSFileManager.
This removes the spikes in memory completely.
i.e.
[self saveImageWithData:photoData];
where
+ (void)saveImageWithData:(NSData *)imageData withName:(NSString *)name {
    NSData *data = imageData;
    DLog(@"*** SIZE *** : Saving file of size %lu", (unsigned long)[data length]);
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *fullPath = [documentsDirectory stringByAppendingPathComponent:name];
    [fileManager createFileAtPath:fullPath contents:data attributes:nil];
}
PS: I have not put this as an answer to the question in case people feel it does not answer the title "Most memory efficient way to save a photo to disk on iPhone?". However, if the consensus is that it should be, I can update it.
Thanks.
Using UIImageJPEGRepresentation requires that you have the original and final image in memory at the same time. It may also cache the fully rendered image for a while, which would use a lot of memory.
You could try using a CGImageDestination. I do not know how memory efficient it is, but it has the potential to stream the image directly to disk.
+ (void)writeImage:(UIImage *)inImage toURL:(NSURL *)inURL withQuality:(double)inQuality {
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)inURL, kUTTypeJPEG, 1, NULL);
    CFDictionaryRef properties = (__bridge CFDictionaryRef)[NSDictionary dictionaryWithObject:[NSNumber numberWithDouble:inQuality] forKey:(id)kCGImageDestinationLossyCompressionQuality];
    CGImageDestinationAddImage(destination, [inImage CGImage], properties);
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
}
Are your images actually 60MB compressed, each? If they are, there's not a lot you can do if you want to save them as a single JPEG file. You can try rendering them down to smaller images, or tile them and save them to separate files.
I don't expect your ImageIO code snippet to improve anything. If there were a two-line fix, then UIImageJPEGRepresentation would be using it internally.
But I'm betting that you don't get 60 MB from a single image. I'm betting you get 60 MB from multiple images saved in a loop. And if that's the case, then there is likely something you can do. Put an @autoreleasepool {} inside your loop. It is quite possible that you're accumulating autoreleased objects, and that's leading to the spike. Adding a pool inside your loop allows it to drain.
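For example, a minimal sketch of what that looks like (the imagesToSave array and the pathForImage: helper are placeholders of mine):
for (UIImage *image in imagesToSave) {
    @autoreleasepool {
        NSData *data = UIImageJPEGRepresentation(image, 0.8);
        [data writeToFile:[self pathForImage:image] atomically:YES];
    } // autoreleased objects from this iteration are drained here
}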
Try to use NSAutoreleasePool and drain the pool once you finish writing the data.

How to get Bytes from CMSampleBufferRef , To Send Over Network

I am capturing video using the AVFoundation framework, with the help of the Apple documentation: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html%23//apple_ref/doc/uid/TP40010188-CH5-SW2
Now I did the following things:
1. Created a videoCaptureDevice
2. Created an AVCaptureDeviceInput and set the videoCaptureDevice
3. Created an AVCaptureVideoDataOutput and implemented its delegate
4. Created an AVCaptureSession - set the input as the AVCaptureDeviceInput and the output as the AVCaptureVideoDataOutput
5. In the AVCaptureVideoDataOutput delegate method
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
I got the CMSampleBuffer, converted it into a UIImage, and tested displaying it in a UIImageView using
[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
Everything went well up to this point.
My problem is:
I need to send the video frames through a UDP socket. Even though I knew it was a bad idea, I tried converting the UIImage to NSData and sending it via the UDP socket, but I got a lot of delay in the video processing, mostly because of the UIImage-to-NSData conversion.
So please give me a solution to my problem:
1) Is there any way to convert a CMSampleBuffer or CVImageBuffer to NSData?
2) Is there something like Audio Queue Services, but for video, to store the UIImages, convert them to NSData,
and send them?
If I'm riding behind the wrong algorithm, please point me in the right direction.
Thanks in advance.
Here is code to get at the buffer. This code assumes a flat image (e.g. BGRA).
NSData *imageToBuffer(CMSampleBufferRef source) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return [data autorelease];
}
A more efficient approach would be to use a NSMutableData or a buffer pool.
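A rough sketch of that reuse idea (my own assumption of how it might look, not the answer's code): keep one NSMutableData alive across frames and overwrite its contents instead of allocating a fresh NSData per frame.
static NSMutableData *frameBuffer; // reused across frames
void copyFrameToReusableBuffer(CMSampleBufferRef source) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t length = CVPixelBufferGetBytesPerRow(imageBuffer) * CVPixelBufferGetHeight(imageBuffer);
    if (frameBuffer == nil) {
        frameBuffer = [[NSMutableData alloc] initWithLength:length];
    }
    memcpy(frameBuffer.mutableBytes, CVPixelBufferGetBaseAddress(imageBuffer), length);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // frameBuffer now holds the latest frame and can be handed to the socket code
}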
Sending a 480x360 image every second will require a 4.1Mbps connection assuming 3 color channels.
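(As a quick check of that figure: 480 × 360 pixels × 3 bytes per pixel = 518,400 bytes per frame, and 518,400 × 8 ≈ 4.1 Mbit, so one frame per second works out to roughly 4.1 Mbps.)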
Use CMSampleBufferGetImageBuffer to get CVImageBufferRef from the sample buffer, then get the bitmap data from it with CVPixelBufferGetBaseAddress. This avoids needlessly copying the image.
