Luminosity from iOS camera - ios

I'm trying to make an application in which I have to calculate the brightness from the camera, like this application: http://itunes.apple.com/us/app/megaman-luxmeter/id455660266?mt=8
I found this document: http://b2cloud.com.au/tutorial/obtaining-luminosity-from-an-ios-camera
But I don't know how to adapt it to the live camera feed rather than a single image. Here is my code:
Image = [[UIImagePickerController alloc] init];
Image.delegate = self;
Image.sourceType = UIImagePickerControllerCameraCaptureModeVideo;
Image.showsCameraControls = NO;
[Image setWantsFullScreenLayout:YES];
Image.view.bounds = CGRectMake (0, 0, 320, 480);
[self.view addSubview:Image.view];

NSArray* dayArray = [NSArray arrayWithObjects:Image,nil];
for(NSString* day in dayArray)
{
    for(int i=1;i<=2;i++)
    {
        UIImage* image = [UIImage imageNamed:[NSString stringWithFormat:@"%@%d.png",day,i]];
        unsigned char* pixels = [image rgbaPixels];
        double totalLuminance = 0.0;
        for(int p=0;p<image.size.width*image.size.height*4;p+=4)
        {
            totalLuminance += pixels[p]*0.299 + pixels[p+1]*0.587 + pixels[p+2]*0.114;
        }
        totalLuminance /= (image.size.width*image.size.height);
        totalLuminance /= 255.0;
        NSLog(@"%@ (%d) = %f",day,i,totalLuminance);
    }
}
Here are the errors I get:
"Instance method '-rgbaPixels' not found (return type defaults to 'id')"
and
"Incompatible pointer types initializing 'unsigned char *' with an expression of type 'id'"
Thanks a lot ! =)

Rather than doing expensive CPU-bound processing of each pixel in an input video frame, let me suggest an alternative approach. My open source GPUImage framework has a luminosity extractor built into it, which uses GPU-based processing to give live luminosity readings from the video camera.
It's relatively easy to set this up. You simply need to allocate a GPUImageVideoCamera instance to represent the camera, allocate a GPUImageLuminosity filter, and add the latter as a target for the former. If you want to display the camera feed to the screen, create a GPUImageView instance and add that as another target for your GPUImageVideoCamera.
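As a rough sketch of that setup (the variable names videoCamera, filter, and filterView are mine, and the 640x480 session preset is an assumption):

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc]
    initWithSessionPreset:AVCaptureSessionPreset640x480
           cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Luminosity extractor fed directly by the camera.
GPUImageLuminosity *filter = [[GPUImageLuminosity alloc] init];
[videoCamera addTarget:filter];

// Optional: display the live camera feed on screen.
GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filterView];
[videoCamera addTarget:filterView];

[videoCamera startCameraCapture];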
Your luminosity extractor will use a callback block to return luminosity values as they are calculated. This block is set up using code like the following:
[(GPUImageLuminosity *)filter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
    // Do something with the luminosity
}];
I describe the inner workings of this luminosity extraction in this answer, if you're curious. This extractor runs in ~6 ms for a 640x480 frame of video on an iPhone 4.
One thing you'll quickly find is that the average luminosity from the iPhone camera is almost always around 50% when automatic exposure is enabled. This means that you'll need to supplement your luminosity measurements with exposure values from the camera metadata to obtain any sort of meaningful brightness measurement.
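For example (a sketch only, assuming iOS 8 or later for the ISO and exposureDuration properties on AVCaptureDevice, and reusing the videoCamera instance from the setup sketch above), you might combine the two like this:

[(GPUImageLuminosity *)filter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
    // Pull the current exposure settings from the underlying AVCaptureDevice.
    AVCaptureDevice *device = videoCamera.inputCamera;
    Float64 exposureSeconds = CMTimeGetSeconds(device.exposureDuration);
    float iso = device.ISO;

    // Very rough, uncalibrated estimate: brighter scenes force shorter exposures
    // and lower ISO, so scale the measured luminosity by both. This only gives a
    // relative figure until it is calibrated against a real lux meter.
    double estimatedBrightness = luminosity / (exposureSeconds * iso);
    NSLog(@"Estimated relative brightness: %f", estimatedBrightness);
}];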

Why do you place the camera object into an NSArray *dayArray? Five lines later you pull it back out of that array but treat it as an NSString, and an NSString does not respond to rgbaPixels. The example you copy-pasted uses an array of filenames corresponding to pictures taken at different times of the day; it then opens those image files and performs the luminosity analysis on them.
In your case, there is no file to read, so both outer for loops (over day and i) have to go away. You already have access to the image provided through the UIImagePickerController. Right after adding the subview, you could in principle access its pixels with unsigned char *pixels = [Image rgbaPixels];, where Image is the image you got from the UIImagePickerController.
However, this may not be what you want to do. I imagine that your goal is rather to show the UIImagePickerController in capture mode and then to measure luminosity continuously. To this end, you could turn Image into a member variable, and then access its pixels repeatedly from a timer callback.

You can import the class below from GitHub to resolve this issue:
https://github.com/maxmuermann/pxl
Add the UIImage+Pixels.h and .m files to your project, then try running again.

Related

iOS Redrawing image to prevent deferred decompression resulting in a bigger image

I've noticed some people redraw images into a CGContext to prevent deferred decompression, and this has caused a bug in our app.
The bug is that the size of the image claims to remain the same, but the CGImageDataProvider data has extra bytes appended to it.
For example, we have a 797x500 PNG image downloaded from the Internet, and AsyncImageView redraws it and returns the redrawn image.
Here is the code:
UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Original code from AsyncImageView
    // redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Log to compare size and data length...
    NSLog(@"AFTER: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Some other code...
}
The log shows as follows:
BEFORE: 797.000000 500.000000
LEN 1594000
AFTER: 797.000000 500.000000
LEN 1600000
I decided to print each byte one by one, and sure enough there were twelve 0s appended for each row.
Basically, the redrawing was causing the image data to be laid out as if it were an 800x500 image. Because of this, our app was looking at the wrong pixel whenever it wanted the pixel at index 797 * row + column.
We're not using any big images so deferred decompression doesn't pose any problems, but should I decide to use this method to redraw images, there's a chance I might introduce a subtle bug.
Does anyone have a solution to this? Or is this a bug introduced by Apple and we can't really do anything?
As you've discovered, rows are padded out to a convenient size. This is generally done to make vector algorithms more efficient. You just need to adapt to that layout if you're going to use CGImage this way: call CGImageGetBytesPerRow to find out the actual number of bytes allocated per row, and then compute your offsets from that (bytesPerRow * row + bytesPerPixel * column).
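For example, a padding-aware read of a single pixel (a sketch, assuming image is the UIImage in question and that it uses a 32-bit RGBA/BGRA layout) looks like this:

CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);

size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);          // includes any row padding
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;   // 4 for 32-bit formats

size_t x = 10, y = 20;                                        // example coordinate
const UInt8 *pixel = bytes + y * bytesPerRow + x * bytesPerPixel;
// pixel[0..3] are the channel values, in whatever order this image uses.

CFRelease(pixelData);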
That's probably best for you, but if you need to get rid of the padding, you can do that by creating your own CGBitmapContext and render into it. That's a heavily covered topic around Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
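If you do go the CGBitmapContext route, a sketch along these lines gives you tightly packed rows (again assuming image is the UIImage in question and a 32-bit RGBA layout):

CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;                               // exactly 4 bytes per pixel, no padding

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = calloc(height * bytesPerRow, 1);
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

// rawData now holds width * height tightly packed RGBA pixels.

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// free(rawData) when you're finished with the pixels.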

GPUImage apply filter to a buffer of images

In GPUImage there are some filters that work only on a stream of frames from a camera, for instance the low pass filter or the high pass filter, but there are plenty of them.
I'm trying to create a buffer of UIImages so that, with a fixed time rate, it's possible to apply those filters between just two images, with each pair of images producing a single filtered image. Something like this:
FirstImage+SecondImage-->FirstFilteredImage
SecondImage+ThirdImage-->SecondFilteredImage
I've found that filters that work with frames use a GPUImageBuffer, which is a subclass of GPUImageFilter (most probably just to inherit some methods and protocols) that loads a passthrough fragment shader. From what I understand, this is a buffer that keeps the incoming frames once they have already been "texturized"; the textures are generated by binding the texture in the current context.
I've also found -conserveMemoryForNextFrame, which sounds good for what I want to achieve, but I don't understand how it works.
Is it possible to do this? In which method are images converted into textures?
I've come close to what I'd like to achieve, but first I must say that I probably misunderstood some aspects of how the current filters work.
I thought that some filters could take the time variable into account in their shaders. That's because when I saw the low pass and high pass filters, I instantly thought about time. The reality seems to be different: they take time into account, but it doesn't seem to affect the filtering operations.
Since I'm developing a time-lapse application myself, which saves single images and reassembles them onto a different timeline to make a video without audio, I imagined that filters that are a function of time could be fun to apply to the subsequent frames. This is the reason why I posted this question.
Now the answer: to apply a two-input filter to still images, you must do as in this snippet:
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture1 processImage];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture2 processImage];
[twoinputFilter useNextFrameForImageCapture];
UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];
If you forget to call -useNextFrameForImageCapture, the returned image will be nil, due to buffer reuse.
Not entirely happy with that, I thought that maybe in the future the good Brad will make something like this, so I created a GPUImagePicture subclass that, instead of passing kCMTimeInvalid to the appropriate methods, passes a new ivar containing the frame's CMTime, called frameTime.
@interface GPUImageFrame : GPUImagePicture
@property (assign, nonatomic) CMTime frameTime;
@end

@implementation GPUImageFrame

- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
    hasProcessedImage = YES;

    //  dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_FOREVER);
    if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return NO;
    }

    runAsynchronouslyOnVideoProcessingQueue(^{
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setCurrentlyReceivingMonochromeInput:NO];
            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:_frameTime atIndex:textureIndexOfTarget];
        }

        dispatch_semaphore_signal(imageUpdateSemaphore);

        if (completion != nil) {
            completion();
        }
    });

    return YES;
}

- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;
{
    [super addTarget:newTarget atTextureLocation:textureLocation];

    if (hasProcessedImage)
    {
        [newTarget setInputSize:pixelSizeOfImage atIndex:textureLocation];
        [newTarget newFrameReadyAtTime:_frameTime atIndex:textureLocation];
    }
}

@end
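Usage would then mirror the earlier two-input snippet, only with explicit timestamps. A hypothetical example (twoInputFilter stands for whichever GPUImageTwoInputFilter subclass you are using; firstImage and secondImage are placeholders):

GPUImageFrame *frame1 = [[GPUImageFrame alloc] initWithImage:firstImage];
frame1.frameTime = CMTimeMake(0, 30);   // first frame, on a 30 fps timeline

GPUImageFrame *frame2 = [[GPUImageFrame alloc] initWithImage:secondImage];
frame2.frameTime = CMTimeMake(1, 30);   // next frame, 1/30 s later

[frame1 addTarget:twoInputFilter];
[frame1 processImage];
[frame2 addTarget:twoInputFilter];
[frame2 processImage];

[twoInputFilter useNextFrameForImageCapture];
UIImage *filtered = [twoInputFilter imageFromCurrentFramebuffer];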

GPUImage Harris Corner Detection on an existing UIImage gives a black screen output

I've successfully added a crosshair generator and harris corner detection filter onto a GPUImageStillCamera output, as well as on live video from GPUImageVideoCamera.
I'm now trying to get this working on a photo set on a UIImageView, but continually get a black screen as the output. I have been reading the issues listed on GitHub against Brad Larson's GPUImage project, but they seemed to be more in relation to blend type filters, and following the suggestions there I still face the same problem.
I've tried altering every line of code to follow various examples I have seen, and to follow Brad's example code in the Filter demo projects, but the result is always the same.
My current code is, once I've taken a photo (which I check to make sure it is not just a black photo at this point):
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
GPUImageCrosshairGenerator *crossGen = [[GPUImageCrosshairGenerator alloc] init];
crossGen.crosshairWidth = 15.0;
[crossGen forceProcessingAtSize:self.photoView.frame.size];
[cornerFilter1 setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
    [crossGen renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
[stillImageSource addTarget:crossGen];
[crossGen addTarget:cornerFilter1];
[crossGen prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [crossGen imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I've tried prepareForImageCapture on both filters, on neither, adding the two targets in the opposite order, calling imageFromCurrentlyProcessedOutput on either filter, I've tried it without the crosshair generator, I've tried using local variables and variables declared in the .h file. I've tried with and without forceProcessingAtSize on each of the filters.
I can't think of anything else that I haven't tried to get the output. The app is running on iOS 7.0, in Xcode 5.0.1. The standard filters work on the photo, e.g. the simple GPUImageSobelEdgeDetectionFilter included in the SimpleImageFilter test app.
Any suggestions? I am saving the output to the camera roll so I can check it's not just me failing to display it correctly. I suspect it's a stupid mistake somewhere but am at a loss as to what else to try now.
Thanks.
Edited to add: the corner detection is definitely working, as depending on the threshold I set, it returns between 6 and 511 corners.
The problem with the above is that you're not chaining filters in the proper order. The Harris corner detector takes in an input image, finds the corners within it, and provides the callback block to return those corners. The GPUImageCrosshairGenerator takes in those points and creates a visual representation of the corners.
What you have in the above code is image->GPUImageCrosshairGenerator-> GPUImageHarrisCornerDetectionFilter, which won't really do anything.
The code in your answer does go directly from the image to the GPUImageHarrisCornerDetectionFilter, but you don't want to use the image output from that. As you saw, it produces an image where the corners are identified by white dots on a black background. Instead, use the callback block, which processes that and returns an array of normalized corner coordinates for you to use.
If you need these to be visible, you could then take that array of coordinates and feed it into the GPUImageCrosshairGenerator to create visible crosshairs, but that image will need to be blended with your original image to make any sense. This is what I do in the FilterShowcase example.
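For example, a sketch of consuming those normalized coordinates straight from the callback and converting them back to image coordinates (imageSize and the points array are my additions; the block signature follows the code in the question):

CGSize imageSize = self.photoView.image.size;
[cornerFilter1 setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
    // The corner array holds normalized (x, y) pairs in the range 0..1.
    NSMutableArray *points = [NSMutableArray arrayWithCapacity:cornersDetected];
    for (NSUInteger i = 0; i < cornersDetected; i++)
    {
        CGPoint p = CGPointMake(cornerArray[i * 2] * imageSize.width,
                                cornerArray[i * 2 + 1] * imageSize.height);
        [points addObject:[NSValue valueWithCGPoint:p]];
    }
    // points now holds the detected corners in image coordinates.
}];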
I appear to have fixed the problem by trying different variations again: the returned image is now black, but there are white dots at the locations of the found corners. I removed the GPUImageCrosshairGenerator altogether. The code that got this working was:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
[stillImageSource addTarget:cornerFilter1];
[cornerFilter1 prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [cornerFilter1 imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I do not need to add the crosshairs for the purpose of my app - I simply want to parse the locations of the corners to provide some cropping, but I required the dots to be visible to check the corners were being detected correctly. I'm not sure if the white dots on black are the expected outcome of this filter, but I presume so.
Updated code for Swift 2:
let stillImageSource: GPUImagePicture = GPUImagePicture(image: image)
let cornerFilter1: GPUImageHarrisCornerDetectionFilter = GPUImageHarrisCornerDetectionFilter()
cornerFilter1.threshold = 0.1
cornerFilter1.forceProcessingAtSize(image.size)
stillImageSource.addTarget(cornerFilter1)
stillImageSource.processImage()
let tmp: UIImage = cornerFilter1.imageByFilteringImage(image)

AVCaptureSession "output sample buffer" reading pixel coordinate gives a wrong color

I use AVCaptureSession to initiate a video capture session and read pixel colors from video frames. The video setting is like this.
NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                               kCVPixelBufferPixelFormatTypeKey,
                               nil];
with a delegate method below to get sample buffer, which I will later read pixel colors.
- (void)captureOutput:(AVCaptureOutput *)captureOutput // captureOutput is only the AVCaptureVideoDataOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [NSAutoreleasePool new]; // Instruments says this leaks

    /******* START CALCULATION *******/
    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // lock image buffer
    buffer_baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); // raw buffer data, BGRA
    ...
The variable buffer_baseAddress is an array that stores pixel colors; with the kCVPixelFormatType_32BGRA setting, the array is arranged as
[B][G][R][A][B][G][R][A][B]...[B][G][R][A]. So to get the color at a pixel at some coordinate, I have to figure out three indices into the buffer. For example, at (x,y) = (10,0), B, G, and R will be at indices 40, 41, and 42.
Here is the problem. The first row (y == 0) of the sample buffer seems to give the correct color at all times. But from the second row onward (y > 0), I get wrong colors with some presets, or when using the front/back camera. It's as if the buffer has some unknown, extra data appended at the end of each row in certain settings. Luckily, from my experiments, I found that sample buffers are shifted by some amount in each row when I use AVCaptureSessionPresetHigh on the back camera, and AVCaptureSessionPresetMedium on both the front and back cameras. I remember that setting some rowPadding = 0 in one of the AVCaptureSession classes didn't help either. (I'm sorry, I forget what the exact variable was; it was several months ago.)
What causes this problem? And what can I do to solve this?
Have a look at the section "Converting a CMSampleBuffer to a UIImage" on this page from the Apple docs. I haven't tried it, but it shows how to get the bytesPerRow - including padding - of the buffer.
It is normal for an image buffer to be padded to an optimal row size; whether you are using Core Video, Core Image, Quartz, QuickTime, etc., there will always be a way to find out what it is.
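Concretely, inside the delegate method from the question, the padded stride can be read straight from the pixel buffer and used when indexing (a sketch reusing the question's variable names):

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);   // includes any row padding

// Color of the pixel at (x, y), still in BGRA order.
size_t x = 10, y = 1;
uint8_t *pixel = buffer_baseAddress + y * bytesPerRow + x * 4;
uint8_t blue  = pixel[0];
uint8_t green = pixel[1];
uint8_t red   = pixel[2];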
This is my current solution. It's not the best one, but it helps me get past the problem. I just test each AVCaptureSessionPreset and see which ones give wrong pixel reads. Then I guess the size of the extra padding and use it when I calculate the sample buffer index. This magic number is 8; it took me days to find it. So hopefully this will be helpful to someone, at least as a workaround. :-)
// Looks like this magic padding is affected by the AVCaptureSessionPreset and the front/back camera
if ([[self currentPresetSetting] isEqualToString:AVCaptureSessionPresetHigh] && ![self isUsingFrontFacingCamera]) {
    buffer_rowPixelPadding = 8;
}
else if ([[self currentPresetSetting] isEqualToString:AVCaptureSessionPresetMedium]) {
    buffer_rowPixelPadding = 8;
}
else {
    buffer_rowPixelPadding = 0; // default
}

Should retain count increase after an image rotation?

I'm using the following code to rotate an image
http://www.platinumball.net/blog/2010/01/31/iphone-uiimage-rotation-and-scaling/
That's one of the few image transformations that I do before uploading an image to the server; I also have some other transformations: normalize, crop, resize.
Each of the transformations returns a (UIImage *), and I add those functions using a category. I use it like this:
UIImage *img = //image from camera;
img = [[[[img normalize] rotate] scale] resize];
[upload img];
After selecting 3-4 photos from the camera and executing the same code each time, I get a memory warning in Xcode.
I'm guessing I have a memory leak somewhere (even though I'm using ARC). I'm not very experienced with the Xcode debugging tools, so I started printing the retain count after each method:
UIImage *img = //image from camera;
img = [img normalize];
img = [img rotate]; // retain count increases :(
img = [img scale];
img = [img resize];
The only operation that increases the retain count is the rotation. Is this normal?
The only operation that increases the retain count is the rotation. Is this normal?
It's quite possible that the UIGraphicsGetImageFromCurrentImageContext() call in your rotate function ends up retaining the image. If so, it almost certainly also autoreleases the image in keeping with the normal Cocoa memory management rules. Either way, you shouldn't worry about it. As long as your rotate function doesn't itself contain any unbalanced retain (or alloc, new, or copy) calls, you should expect to be free of leaks. If you do suspect a leak, it's better to track it down with Instruments than by watching retainCount yourself.
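For reference, a rotation written along these lines (a sketch, not the code from the linked post) contains nothing for ARC to trip over, since every object it touches is autoreleased:

// Sketch of a 90-degree rotation as a UIImage category method; no manual
// retain/release calls, so ARC handles the memory.
- (UIImage *)rotatedByRightAngle
{
    CGSize rotatedSize = CGSizeMake(self.size.height, self.size.width);
    UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate about the centre of the new canvas, then draw the image centred.
    CGContextTranslateCTM(context, rotatedSize.width / 2.0, rotatedSize.height / 2.0);
    CGContextRotateCTM(context, M_PI_2);
    [self drawInRect:CGRectMake(-self.size.width / 2.0, -self.size.height / 2.0,
                                self.size.width, self.size.height)];

    UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rotated;
}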
