I have an NSMutableArray of UIImage objects named camImages. I want to get the scale property of an image, but I can't seem to access it using
[[camImages objectAtIndex:i] scale] //doesn't return the desired scale property
[camImages.objectAtIndex:i].scale //doesn't work (Error: Property 'scale' not found on object of type 'id')
whereas it is possible to get the property if I have a single UIImage
UIImage *img;
img.scale //desired property
I am a newbie to iOS & Objective-C; how can I get the desired property? Thanks in advance!
EDIT:
[[camImages objectAtIndex:i] scale] will return
NSDecimalNumberBehaviors
scale
Returns the number of digits allowed after the decimal separator. (required)
- (short)scale
Return Value
The number of digits allowed after the decimal separator.
whereas the desired scale is of CGFloat type:
UIImage
scale
The scale factor of the image. (read-only)
@property(nonatomic, readonly) CGFloat scale
Discussion
If you load an image from a file whose name includes the @2x modifier, the scale is set to 2.0. You can also specify an explicit scale factor when initializing an image from a Core Graphics image. All other images are assumed to have a scale factor of 1.0.
If you multiply the logical size of the image (stored in the size
property) by the value in this property, you get the dimensions of the
image in pixels.
Make your code easier to read and debug:
UIImage *image = camImages[i];
CGFloat scale = image.scale;
If you have two scale methods with different signatures, the compiler may not be able to choose the correct one to use, so you have to tell the compiler the type of the object so it can find the correct signature.
If you really want a one-line solution:
[((UIImage *)[camImages objectAtIndex:i]) scale];
((UIImage *)[camImages objectAtIndex:i]).scale;
but use @rmaddy's answer for readability.
Related
I've noticed some people redraw images on a CGContext to prevent deferred decompression and this has caused a bug in our app.
The bug is that the size of the image professes to remain the same but the CGImageDataProvider data has extra bytes appended to it.
For example, we have a 797x500 PNG image downloaded from the Internet, and the AsyncImageView redraws and returns the redrawn image.
Here is the code:
UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Original code from AsyncImageView
    // redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Log to compare size and data length...
    NSLog(@"AFTER: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Some other code...
}
The log shows as follows:
BEFORE: 797.000000 500.000000
LEN 1594000
AFTER: 797.000000 500.000000
LEN 1600000
I decided to print each byte one by one, and sure enough there were twelve 0s appended for each row.
Basically, the redrawing was causing the image data to be that of an 800x500 image. Because of this, our app was looking at the wrong pixel when it wanted to look at the (797 * row + column)th pixel.
We're not using any big images so deferred decompression doesn't pose any problems, but should I decide to use this method to redraw images, there's a chance I might introduce a subtle bug.
Does anyone have a solution to this? Or is this a bug introduced by Apple and we can't really do anything?
As you've discovered, rows are padded out to a convenient size. This is generally to make vector algorithms more efficient. You just need to adapt to that layout if you're going to use CGImage this way. You need to call CGImageGetBytesPerRow to find out the actual number of bytes allocated, and then adjust your offsets based on that (bytesPerRow * row + column).
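A minimal sketch of that indexing (assuming a 32-bit-per-pixel image and that the data is copied from the image's data provider, as in the question's logging code; `row` and `column` are hypothetical):

CGImageRef cgImage = image.CGImage;
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);        // may be larger than width * 4 due to padding
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;

CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);

// Index with the actual row stride instead of assuming width * bytesPerPixel
size_t row = 10, column = 20;                               // hypothetical pixel of interest
const UInt8 *pixel = bytes + row * bytesPerRow + column * bytesPerPixel;

// ... read pixel[0..3] as needed ...
CFRelease(pixelData);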
That's probably best for you, but if you need to get rid of the padding, you can do that by creating your own CGBitmapContext and render into it. That's a heavily covered topic around Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
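A minimal sketch of that approach, assuming you want a tightly packed RGBA buffer with exactly width * 4 bytes per row:

CGImageRef cgImage = image.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;                       // tightly packed RGBA, no padding

unsigned char *buffer = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

// Render the image into the bitmap we control
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(context);

// buffer now holds exactly width * 4 bytes per row; free(buffer) when done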
I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order". I'll admit a bit of ignorance when it comes to pixel formats, but I felt this MAY be relevant because I'm not sure whether that's the same pixel format I use to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
capturing using captureStillImageAsynchronouslyFromConnection:..., then turning it into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer], then downsizing into a thumbnail by creating a CGDataProviderRef with the CFData, converting it to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display, I call my own method detailed on top [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display it on a UIImageView, and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil], and then I can probably use something like how to convert a CVImageBufferRef to UIImage and change a lot of my code... but I'd really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding / all these graphical, low-level objects.
This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is a problem. You are drawing originalImage but you specify the width of thumbnailSize.width and the height of originalImage. This messes up the image's aspect ratio.
You need a width and a height based on the same image size. Pick one as needed to maintain the proper aspect ratio.
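For example, a minimal sketch that derives both drawn dimensions from the same image size (here scaling to the thumbnail width; adjust the vertical offset to get the crop you want):

UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];

// Scale both dimensions by the same factor so the image is not stretched
CGFloat scale = thumbnailSize.width / originalImage.size.width;
CGFloat drawnHeight = originalImage.size.height * scale;

// Center the image vertically inside the thumbnail (tweak the offset for a different crop)
CGFloat yOffset = (thumbnailSize.height - drawnHeight) / 2.0f;
[originalImage drawInRect:CGRectMake(0, yOffset, thumbnailSize.width, drawnHeight)];

UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();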
I'm trying to get the original image from an ALAsset and find that the scale property of ALAssetRepresentation always returns 1.0. So I wonder, is there a situation where the property will return another value, like 2.0?
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = assetRepresentation.fullResolutionImage;
UIImage *image = [UIImage imageWithCGImage:imgRef];
After Retina displays were introduced, the physical resolution was doubled but for API calls it remained the same. So an additional 'scale' argument was added to some methods and functions (see UIGraphicsBeginImageContextWithOptions, for example). I do not know why the [ALAssetRepresentation scale] description is so poor:
Returns the representation's scale.
but you can look at the UIScreen.scale description:
This value reflects the scale factor needed to convert from the
default logical coordinate space into the device coordinate space of
this screen. The default logical coordinate space is measured using
points. For standard-resolution displays, the scale factor is 1.0 and
one point equals one pixel. For Retina displays, the scale factor is
2.0 and one point is represented by four pixels.
I think [ALAssetRepresentation scale] should be 2.0 if you run this code on a device with a Retina display.
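If you want the resulting UIImage to carry the representation's scale and orientation rather than the defaults, you can pass them explicitly when creating it (a sketch; the cast assumes ALAssetOrientation and UIImageOrientation use matching values):

ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
CGImageRef imgRef = assetRepresentation.fullResolutionImage;

// Hand the representation's scale and orientation to UIImage instead of the 1.0 / up defaults
UIImage *image = [UIImage imageWithCGImage:imgRef
                                     scale:assetRepresentation.scale
                               orientation:(UIImageOrientation)assetRepresentation.orientation];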
I am using UIImageView and I have to set more than one image as a background.
All the images have transparent backgrounds and each contains a symbol at one of its corners. Images are saved based on conditions, and there may be more than one image.
Currently I am setting the images, but I can see only the last one. I want all of the images to be displayed together.
Please do let me know if there is any other way through which I can combine multiple images into a single image.
Any help will be appreciated
Thanks in advance
You can draw the images with blend modes. For example, if you have a UIImage, you can call drawAtPoint:blendMode:alpha:. You'd probably want to use kCGBlendModeNormal as the blend mode in most cases.
I created a function which takes an array of images and returns a single image. My code is below:
- (UIImage *)blendImages:(NSMutableArray *)array {
    UIImage *img = [array objectAtIndex:0];
    CGSize size = img.size;
    UIGraphicsBeginImageContext(size);
    for (NSUInteger i = 0; i < array.count; i++) {
        UIImage *image = [array objectAtIndex:i];
        // Each image is drawn on top of the previous ones; normal blending respects transparency
        [image drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:1.0];
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    // Balance the begin call so the context is not leaked
    UIGraphicsEndImageContext();
    return result;
}
Hope this will help others too.
You should composite your images into one -- especially because they have alpha channels.
To do this, you could
use UIGraphicsBeginImageContextWithOptions to create the image at the destination size (scale now, rather than when drawing to the screen, and choose the appropriate opacity)
Render your images to the context using CGContextDrawImage
then call UIGraphicsGetImageFromCurrentImageContext to get the result as a UIImage, which you set as the image of the image view.
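A rough sketch of those steps (assuming `images` is an NSArray of same-sized UIImage objects and `imageView` is the destination UIImageView):

// Assumes all images share the same size; opaque = NO preserves their transparency
UIImage *first = [images objectAtIndex:0];
UIGraphicsBeginImageContextWithOptions(first.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();

// CGContextDrawImage uses a flipped coordinate system relative to UIKit,
// so flip the context before drawing the CGImages
CGContextTranslateCTM(context, 0, first.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

for (UIImage *image in images) {
    CGContextDrawImage(context, CGRectMake(0, 0, first.size.width, first.size.height), image.CGImage);
}

UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

imageView.image = composite;   // set the composited result on the image view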
You can use:
typedef enum _imageType {
    image1,
    image2,
    ...
    imageN
} imageType;
and declare in the @interface
imageType imgType;
in the .h file.
And in the .m file:
- (void)setImageType:(imageType)type {
    imgType = type;
}
and then you can use the setImageType: method to set whichever image you want.
I'm trying to make an application and I have to calculate the brightness of the camera, like this application: http://itunes.apple.com/us/app/megaman-luxmeter/id455660266?mt=8
I found this document: http://b2cloud.com.au/tutorial/obtaining-luminosity-from-an-ios-camera
But I don't know how to adapt it to the camera directly rather than to an image. Here is my code:
Image = [[UIImagePickerController alloc] init];
Image.delegate = self;
Image.sourceType = UIImagePickerControllerCameraCaptureModeVideo;
Image.showsCameraControls = NO;
[Image setWantsFullScreenLayout:YES];
Image.view.bounds = CGRectMake (0, 0, 320, 480);
[self.view addSubview:Image.view];
NSArray *dayArray = [NSArray arrayWithObjects:Image, nil];
for (NSString *day in dayArray)
{
    for (int i = 1; i <= 2; i++)
    {
        UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%@%d.png", day, i]];
        unsigned char *pixels = [image rgbaPixels];
        double totalLuminance = 0.0;
        for (int p = 0; p < image.size.width * image.size.height * 4; p += 4)
        {
            totalLuminance += pixels[p] * 0.299 + pixels[p+1] * 0.587 + pixels[p+2] * 0.114;
        }
        totalLuminance /= (image.size.width * image.size.height);
        totalLuminance /= 255.0;
        NSLog(@"%@ (%d) = %f", day, i, totalLuminance);
    }
}
Here are the issues:
"Instance method '-rgbaPixels' not found (return type defaults to 'id')"
&
"Incompatible pointer types initializing 'unsigned char *' with an expression of type 'id'"
Thanks a lot! =)
Rather than doing expensive CPU-bound processing of each pixel in an input video frame, let me suggest an alternative approach. My open source GPUImage framework has a luminosity extractor built into it, which uses GPU-based processing to give live luminosity readings from the video camera.
It's relatively easy to set this up. You simply need to allocate a GPUImageVideoCamera instance to represent the camera, allocate a GPUImageLuminosity filter, and add the latter as a target for the former. If you want to display the camera feed to the screen, create a GPUImageView instance and add that as another target for your GPUImageVideoCamera.
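A rough sketch of that wiring (assuming GPUImage is already added to the project; the variable names here are illustrative and should be strong instance variables so the camera isn't deallocated):

// Capture 640x480 frames from the back camera
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                   cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Feed every frame into the luminosity extractor
luminosityFilter = [[GPUImageLuminosity alloc] init];
[videoCamera addTarget:luminosityFilter];

// Optionally also show the live camera feed (cameraView is a GPUImageView in the view hierarchy)
[videoCamera addTarget:cameraView];

[videoCamera startCameraCapture];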
Your luminosity extractor will use a callback block to return luminosity values as they are calculated. This block is set up using code like the following:
[(GPUImageLuminosity *)filter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
// Do something with the luminosity
}];
I describe the inner workings of this luminosity extraction in this answer, if you're curious. This extractor runs in ~6 ms for a 640x480 frame of video on an iPhone 4.
One thing you'll quickly find is that the average luminosity from the iPhone camera is almost always around 50% when automatic exposure is enabled. This means that you'll need to supplement your luminosity measurements with exposure values from the camera metadata to obtain any sort of meaningful brightness measurement.
Why do you place the camera image into an NSArray *dayArray? Five lines later you remove it from that array but treat the object as an NSString. An NSString does not have rgbaPixels. The example you copy-pasted has an array of filenames corresponding to pictures taken at different times of the day. It then opens those image files and performs the analysis of luminosity.
In your case, there is no file to read. Both outer for loops, i.e. on day and on i, will have to go away. You already have access to the Image provided through the UIImagePickerController. Right after adding the subview, you could in principle access pixels as in unsigned char *pixels = [Image rgbaPixels]; where Image is the image you got from UIImagePickerController.
However, this may not be what you want to do. I imagine that your goal is rather to show the UIImagePickerController in capture mode and then to measure luminosity continuously. To this end, you could turn Image into a member variable, and then access its pixels repeatedly from a timer callback.
You can import the class below from GitHub to resolve this issue:
https://github.com/maxmuermann/pxl
Add the UIImage+Pixels.h and .m files into your project, then try to run.