I'm having a bit of a headache with this. I am using the iPhone camera to capture live images using AVFoundation. In the captureOutput function I am creating a UIImage from the sampleBuffer as per Apple's developer notes - if I save this image to the camera roll I can view it and it looks as expected (I am not doing this every time captureOutput is called!). However I do not want to save it - instead I want to have a look at some of its pixel values.
So, again using Apple's notes, I get the pixel values at X & Y points. I also noted that the colour order is BGRA (not RGBA), so I can read these values and they all look plausible, only they appear to be wrong...
If I save the exact same image to the camera roll, email it to myself, and then put it into my Xcode project so I can create a UIImage from it in my app, passing that image through the exact same set of methods gives me a different set of figures for the pixel RGB data (I did switch back to RGBA for this image, but even allowing for BGRA being wrong the numbers don't match). To confirm, I wrote the same routines on a PC in C#, used the same image, and got the same figures.
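For reference, this is roughly the pixel-reading code both images go through (a simplified sketch; the image variable and the x/y values are just placeholders for what I actually use):

// image is the UIImage (built from the sample buffer, or loaded from the bundle)
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);

size_t bytesPerRow   = CGImageGetBytesPerRow(cgImage);
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;
size_t x = 10, y = 10;   // placeholder coordinates
const UInt8 *pixel = bytes + y * bytesPerRow + x * bytesPerPixel;

// Camera images come through as BGRA, so:
UInt8 blue  = pixel[0];
UInt8 green = pixel[1];
UInt8 red   = pixel[2];
UInt8 alpha = pixel[3];
NSLog(@"R %d G %d B %d A %d", red, green, blue, alpha);

CFRelease(pixelData);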
Is there a difference between a UIImage created in memory and one that is then saved and reloaded?
Thanks for any advice!
I've been trying to implement some code where, given an image with depth data, it returns the part of the image that is in focus/closest to the camera. If it's a person, then their face; if it's a plant, then the nearest branch. Effectively I'm trying to get the part of the image that would be in focus if the picture were taken using Portrait mode in the camera app.
I've been reading this documentation but I've not been able to think of a way to manipulate the data here. My guess would be to use embedsDepthDataInPhoto and then use the depth data in some way to discard every part of the image that is more than a certain distance from the camera. I'm quite new to this, so any help would be greatly appreciated.
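Something like this is what I had in mind, though I haven't got it working yet (a very rough, untested sketch; photoOutput is my assumed AVCapturePhotoOutput):

// Rough idea. Depth data delivery also has to be enabled on the output itself
// before the session starts (photoOutput.depthDataDeliveryEnabled = YES).
AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings];
if (photoOutput.depthDataDeliverySupported) {
    settings.depthDataDeliveryEnabled = YES;
    settings.embedsDepthDataInPhoto = YES;
}
[photoOutput capturePhotoWithSettings:settings delegate:self];

// Then in the AVCapturePhotoCaptureDelegate callback:
- (void)captureOutput:(AVCapturePhotoOutput *)output
didFinishProcessingPhoto:(AVCapturePhoto *)photo
                error:(NSError *)error {
    AVDepthData *depthData = photo.depthData;
    CVPixelBufferRef depthMap = depthData.depthDataMap;
    // Idea: walk depthMap and mask out any pixel whose depth/disparity says it
    // is further than some threshold from the camera, keeping only the subject.
}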
For example, after taking the image, the app would tell you the relative amounts of red, blue, green, and yellow present in the picture and how intense each color is.
That's super specific I know, but I would really like to know if it's possible and if anyone has any idea how to go about that.
Thanks!
Sure, it's possible. You'd have to load the image into a UIImage, then get the underlying CGImage and get a pointer to the pixel data. If you average the RGB values of all the pixels you're likely to get a pretty muddy result, though, unless you're sampling an image with large areas of strong primary colors.
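To give an idea of the shape of it, here is a rough sketch of that approach (my own simplification, not Erica's code; it draws the image into an RGBA bitmap context and averages the channels):

#import <UIKit/UIKit.h>

// Rough sketch: average RGB of a UIImage by drawing it into an RGBA bitmap context.
static void averageColorOfImage(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;
    uint8_t *data = calloc(height * bytesPerRow, 1);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Sum each channel; bytes are in RGBA order in this context
    unsigned long long rSum = 0, gSum = 0, bSum = 0;
    size_t count = width * height;
    for (size_t i = 0; i < count; i++) {
        rSum += data[i * 4 + 0];
        gSum += data[i * 4 + 1];
        bSum += data[i * 4 + 2];
    }
    NSLog(@"Average R %llu G %llu B %llu", rSum / count, gSum / count, bSum / count);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(data);
}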
Erica Sadun's excellent iOS Developer Cookbook series has a section on sampling pixel image data that shows how it's done. In recent versions there are a "core" volume and an "extended" volume. I think it's in the Core iOS volume. My copy of Mac iBooks is crashing repeatedly right now, so I can't find it for you. Sorry about that.
EDIT:
I got it to open on my iPad finally. It is in the Core volume, in recipe 1-6, "Testing Touches Against Bitmap Alpha Levels." As the title implies, that recipe looks at an image's alpha levels to figure out if you've tapped on an opaque image pixel or missed the image by tapping on a transparent pixel. You'll need to adapt that code to come up with the average color for an image, but Erica's code shows the hard part - getting and interpreting the bytes of image data. That book is all in Objective-C. Post a comment if you have trouble figuring it out.
I'm doing a photo app and sometimes the lighting is off in certain areas and the picture isn't clear. I was wondering if there was a feature that can auto-adjust the brightness, contrast, exposure, and saturation of a picture, like in Photoshop.
I don't want to manually adjust images as in the sample code given by Apple:
https://developer.apple.com/library/ios/samplecode/GLImageProcessing/Introduction/Intro.html
I want something that will auto-adjust or correct the photo.
As an alternative, you could use AVFoundation to make your own implementation of the camera, set the image quality to high, and use the autofocus or tap-to-focus feature. Otherwise, I am almost certain you cannot set these properties: the UIImagePickerController included in the SDK is really expensive memory-wise and gives you an image instead of raw data (another benefit of using AVFoundation). This is a good tutorial for this in case you would like to check it out:
http://www.musicalgeometry.com/?p=1297
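To give a flavour of what that looks like, here is a minimal sketch (not the tutorial's code; names are my own and error handling is trimmed):

// Camera setup (inside your view controller):
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;   // high-quality stills

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
if (input) [session addInput:input];
[session startRunning];

// Tap-to-focus: call from a tap gesture handler, converting the tap point with
// -[AVCaptureVideoPreviewLayer captureDevicePointOfInterestForPoint:].
- (void)focusAtDevicePoint:(CGPoint)point onDevice:(AVCaptureDevice *)camera {
    if ([camera isFocusPointOfInterestSupported] &&
        [camera isFocusModeSupported:AVCaptureFocusModeAutoFocus] &&
        [camera lockForConfiguration:nil]) {
        camera.focusPointOfInterest = point;
        camera.focusMode = AVCaptureFocusModeAutoFocus;
        [camera unlockForConfiguration];
    }
}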
Apparently someone has created one on GitHub: https://github.com/proth/UIImage-PRAutoAdjust
Once imported, I used it as follows:
self.imageView.image = [self.imageView.image autoAdjustImage];
I found this post, Draw Lines Load From Plist in iPhone SDK, about saving and loading your freehand drawing from a plist.
In my app I am able to draw on a captured photo and save it as a new image in my photo library. But I want to save just the photo, without the drawing, to my photo library and be able to load my freehand drawing lines manually whenever I load the original photo.
In the linked post he saves his line coordinates in a plist. Is it practical to create a new plist for every photo? Any ideas? Please help :(
I would really appreciate any ideas you might have! Thank you
The reason he uses line coordinates is that this is the form in which the device delivers the drawing data in the first place (either relative to the previous point or absolute). So unless the line coordinates end up bigger than a compressed pixel array (PNG, JPG, etc.), this is the best way. For simplicity, though, I would just keep that format.
If size is a problem for you, just use zlib (http://www.zlib.net/) to compress/uncompress the data - it's available on all mobile devices.
You could also use other algorithms to reduce the size of the data prior to zlib compression - for instance, if all coordinates are stored relative to the previous one, the values would be smaller, meaning you could encode them in a smaller data type (a byte instead of an integer) or a variable-length one (e.g. one byte plus a continuation bit: 0 if done, 1 if another byte follows).
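If you do go the per-photo plist route, here is a minimal sketch of saving and reloading the strokes (the method names and file naming are just examples; NSStringFromCGPoint keeps everything property-list friendly):

// Save: lines is an NSArray of NSArray<NSValue *> (one inner array per stroke).
- (void)saveLines:(NSArray *)lines forPhotoNamed:(NSString *)photoName {
    NSMutableArray *strokes = [NSMutableArray array];
    for (NSArray *stroke in lines) {
        NSMutableArray *points = [NSMutableArray array];
        for (NSValue *value in stroke) {
            [points addObject:NSStringFromCGPoint([value CGPointValue])];
        }
        [strokes addObject:points];
    }
    NSString *dir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *path = [dir stringByAppendingPathComponent:
                      [photoName stringByAppendingPathExtension:@"plist"]];
    [strokes writeToFile:path atomically:YES];
}

// Load: turn the strings back into CGPoints when the original photo is opened.
- (NSArray *)loadLinesForPhotoNamed:(NSString *)photoName {
    NSString *dir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *path = [dir stringByAppendingPathComponent:
                      [photoName stringByAppendingPathExtension:@"plist"]];
    NSArray *strokes = [NSArray arrayWithContentsOfFile:path];
    NSMutableArray *lines = [NSMutableArray array];
    for (NSArray *points in strokes) {
        NSMutableArray *stroke = [NSMutableArray array];
        for (NSString *pointString in points) {
            [stroke addObject:[NSValue valueWithCGPoint:CGPointFromString(pointString)]];
        }
        [lines addObject:stroke];
    }
    return lines;
}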
I'm trying to develop an app to help people with vision impairments, and one of its functions is to recognize the color of the object the user is pointing at.
I've read this article about how to get the pixel color from a UIImage:
Get Pixel color of UIImage
Does anyone have an idea how I can get this pixel color from the camera?
Marcelo, there is quite a bit of setup involved in getting the pixel color from the camera. You will need to get up to speed with AVFoundation at:
http://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html
After you have done that, you should pull the sample code Rosy Writer:
http://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
This code is meant to take a live camera image and change one of the pixel colors.
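Once you have been through those, the core of it boils down to a video data output configured for BGRA plus a per-frame callback. A bare-bones sketch (not RosyWriter's actual code; it assumes an existing AVCaptureSession called session and trims error handling):

// Add a video data output that hands you raw BGRA frames.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("camera.pixel", NULL)];
[session addOutput:videoOutput];

// Delegate callback: sample the centre pixel of each frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
    size_t x = CVPixelBufferGetWidth(buffer) / 2;
    size_t y = CVPixelBufferGetHeight(buffer) / 2;
    uint8_t *pixel = (uint8_t *)CVPixelBufferGetBaseAddress(buffer)
                     + y * CVPixelBufferGetBytesPerRow(buffer) + x * 4;
    // BGRA order: pixel[0]=B, pixel[1]=G, pixel[2]=R
    NSLog(@"Centre pixel R %d G %d B %d", pixel[2], pixel[1], pixel[0]);
    CVPixelBufferUnlockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
}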
I recommend you spend time on those two and then come back with more specific questions as you start building the app.
Hope this helps.