I have a CALayer and I want to add a stretchable image to it. If I just do:
_layer.contents = (id)[[UIImage imageNamed:@"grayTrim.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0.0, 15.0, 0.0, 15.0)].CGImage;
it won't work, since the layer's default contentsGravity is kCAGravityResize.
I've read that this could be accomplished using contentsCenter, but I cannot seem to figure out exactly how I would use it to achieve the stretched image in my CALayer.
Any ideas are welcome!
Horatiu
The answer to this question is the following. Let's say you have a stretchable image which stretches only in width and has a fixed height (for simplicity's sake).
The image is 31px wide (15px on each side stays a fixed size and doesn't stretch, and the 1px in the middle will be stretched).
Assuming your layer is a CALayer subclass your init method should look like this:
- (id)init
{
self = [super init];
if (self) {
UIImage *stretchableImage = [UIImage imageNamed:@"stretchableImage.png"];
self.contents = (id)stretchableImage.CGImage;
self.contentsScale = [UIScreen mainScreen].scale; //<-needed for the retina display, otherwise our image will not be scaled properly
self.contentsCenter = CGRectMake(15.0/stretchableImage.size.width,0.0/stretchableImage.size.height,1.0/stretchableImage.size.width,0.0/stretchableImage.size.height);
}
return self;
}
As per the documentation, the contentsCenter rectangle must have values between 0 and 1:
"Defaults to the unit rectangle (0.0,0.0) (1.0,1.0) resulting in the entire image being scaled. If the rectangle extends outside the unit rectangle the result is undefined."
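For illustration, a usage sketch (the subclass name and frame values are my own, not from the answer): giving the layer a frame wider than the 31px image stretches the 1px column defined by contentsCenter while the 15px caps keep their size.
// StretchableLayer is a hypothetical name for the CALayer subclass whose init is shown above.
StretchableLayer *stretchLayer = [[StretchableLayer alloc] init];
// Width much larger than 31pt so the middle column stretches; the height value here is illustrative.
stretchLayer.frame = CGRectMake(20.0, 20.0, 200.0, 30.0);
[self.view.layer addSublayer:stretchLayer];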
This is it. Hopefully someone else will find this useful and it will save some development time.
I have a UIImageView which obviously contains a UIImage. I want to create a shadow effect only on the UIImage. My problem is that I cannot get the CGRect of the UIImage inside the UIImageView, which I would need in order to apply the shadow effect with the following code.
mImageView.layer.shadowColor = [UIColor grayColor].CGColor;
mImageView.layer.shadowOffset = CGSizeMake(0.0f, 0.0f);
mImageView.layer.shadowOpacity = 0.9f;
mImageView.layer.masksToBounds = NO;
CGRect imageFrame = mImageView.frame;
UIEdgeInsets shadowInsets = UIEdgeInsetsMake(0, 0, -1.5f, 0);
UIBezierPath *shadowPath = [UIBezierPath bezierPathWithRect:UIEdgeInsetsInsetRect(imageFrame, shadowInsets)];
mImageView.layer.shadowPath = shadowPath.CGPath;
Please consider the image attached for this problem.
The problem is also critical because the UIImage can have arbitrary dimensions, since it is a cropped image, as you can see in the attached picture.
The UIImageView's bounds are equal to the view's bounds here, so when I apply the effect using the code above, it creates a UIBezierPath around the whole UIImageView, not just the UIImage, because I cannot get the exact CGRect of the UIImage in that code.
Any solution? What am I missing?
[screenshot: cropped image]
A UIImage is always rectangular, and so is a UIImageView. I believe you want to put the shadow only around the jagged border of the cropped area, right? If that is the case, you cannot use this method. You need to use Core Graphics or a similar approach to get the effect you want. For example, you can create a copy of the image in memory, blacken it, blur it, and paste it behind your image to create a shadowy effect.
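For illustration, here is a minimal sketch of that blacken-and-blur idea using Core Image (the method name and parameters are my own, not part of the answer): it tints the opaque pixels black, blurs the silhouette, and composites the original image on top.
// Requires CoreImage. imageWithSoftShadow:blurRadius: is a hypothetical helper name.
- (UIImage *)imageWithSoftShadow:(UIImage *)image blurRadius:(CGFloat)radius
{
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];

    // Zero out the RGB channels so only the alpha mask (the cropped shape) remains, in black.
    CIFilter *blackened = [CIFilter filterWithName:@"CIColorMatrix"];
    [blackened setValue:input forKey:kCIInputImageKey];
    [blackened setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputRVector"];
    [blackened setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputGVector"];
    [blackened setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:0] forKey:@"inputBVector"];

    // Blur the black silhouette to get a soft shadow.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:blackened.outputImage forKey:kCIInputImageKey];
    [blur setValue:@(radius) forKey:kCIInputRadiusKey];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef shadowCG = [context createCGImage:blur.outputImage fromRect:input.extent];

    // Composite: shadow behind, original image on top.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGRect bounds = (CGRect){CGPointZero, image.size};
    [[UIImage imageWithCGImage:shadowCG scale:image.scale orientation:UIImageOrientationUp] drawInRect:bounds blendMode:kCGBlendModeNormal alpha:0.9f];
    [image drawInRect:bounds];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(shadowCG);
    return result;
}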
My app sends a GET request to Google to obtain certain user information. One piece of crucial returned data is the user's picture, which is placed inside a UIImageView that is always exactly 100x100 and then redrawn to create a round mask for the image view. These pictures come from different sources and thus always have different aspect ratios. Some have a smaller width compared to their height, sometimes it's vice versa. This results in the image looking compressed. I've tried things such as the following (none of them worked):
_personImage.layer.masksToBounds = YES;
_personImage.layer.borderWidth = 0;
_personImage.contentMode = UIViewContentModeScaleAspectFit;
_personImage.clipsToBounds = YES;
Here is the code I use to redraw my images (taken from user fnc12's answer, the third one, to Making a UIImage to a circle form):
/** Returns a redrawn image that had a circular mask created for the inputted image. */
-(UIImage *)roundedRectImageFromImage:(UIImage *)image size:(CGSize)imageSize withCornerRadius:(float)cornerRadius
{
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0); //<== Notice 0.0 as third scale parameter. It is important because default draw scale ≠ 1.0. Try 1.0 - it will draw an ugly image...
CGRect bounds = (CGRect){CGPointZero, imageSize};
[[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:cornerRadius] addClip];
[image drawInRect:bounds];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return finalImage;
}
This method is always called like so:
[_personImage setImage:[self roundedRectImageFromImage:image size:CGSizeMake(_personImage.frame.size.width, _personImage.frame.size.height) withCornerRadius:_personImage.frame.size.width/2]];
So I end up with a perfectly round image, but the image itself isn't right aspect-wise. Please help.
P.S. Here's how images look when their width is roughly 70% that of their height before the redrawing of the image to create a round mask:
Hello dear friend there!
Here is my version that works:
Code in ViewController:
[self.profilePhotoImageView setContentMode:UIViewContentModeCenter];
[self.profilePhotoImageView setContentMode:UIViewContentModeScaleAspectFill];
[CALayer roundView:self.profilePhotoImageView];
roundView function in My CALayer+Additions class:
+(void)roundView:(UIView*)view{
CALayer *viewLayer = view.layer;
[viewLayer setCornerRadius:view.frame.size.width/2];
[viewLayer setBorderWidth:0];
[viewLayer setMasksToBounds:YES];
}
Maybe you should try changing the way you create your rounded image view and use my version, which creates the rounded image view by modifying the image view's layer. Hope it helps.
To maintain the aspect ratio of the UIImageView, use the following line of code after setting the image.
[_personImage setContentMode:UIViewContentModeScaleAspectFill];
For a detailed description, see the reference link:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImageView_Class/
I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to remote service (Microsoft face detection) for processing.
Remote service returns JSON of CGRects for each detected face in the image.
I feed JSON into my UIView to draw the rectangles. I initiate my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height}. <-- my thinking, a frame equivalent to the size of the imageView.image
Add my UIView as a subview of self.imageView OR self.view (tried both)
End Result:
Rectangles are drawn, but they do not appear correctly on the imageView.image. That is, the CGRects generated for each of the faces are supposed to be relative to the image's coordinate space as returned by the remote service, but they appear off once I add my custom view.
I believe I may have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test) I can get an approximation, but it's still off. The Microsoft documentation states that the detected faces are returned with rectangles indicating the location of the faces in the image, in pixels. Yet aren't they being treated as points when drawing my path?
Also, shouldn't I be initiating my view with a frame equivalent to the imageView.image's frame so that the view matches an identical coordinate space as the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing it by 2.
I am new to iOS and broke away from the books to work on this as a self-exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (which include the rectangle for each face) via the following method, which gets called during viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {
// build an array of CGRect dicts off of the JSON returned from the analyzed image
NSMutableArray *array = [self analizeImage:self.imageView.image];
// enumerate over array using block - each obj in array represents one face
[array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
// build dictionary of rects and attributes for the face
NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];
// initiate face model object with dictionary
ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];
NSLog(@"%@", face.faceId);
NSLog(@"%d", face.age);
NSLog(@"%@", face.gender);
NSLog(@"%f", face.faceRect.origin.x);
NSLog(@"%f", face.faceRect.origin.y);
NSLog(@"%f", face.faceRect.size.height);
NSLog(@"%f", face.faceRect.size.width);
// define frame for subview containing face rectangle
CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);
// initialize rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];
// add view as subview of imageview (?)
[self.imageView addSubview:faceRect];
}];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;
// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(@"Image dimensions: %lux%lu", imgWidth, imgHeight);
// Image size pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(@"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);
// Image size points
CGSize imgSizeInPoints = img.size;
NSLog(@"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);
// Calculate Image frame (within imgview) with a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));
// initialize rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];
// add view as subview of image view
[iv addSubview:faceRect];
}];
We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them is the screen's scale factor. For instance, on an iPhone 5, 1 pt = 2 px, while on a 3GS, 1 pt = 1 px. Look at the iOS documentation for more information.
The frame of your UIImageView is not the image frame. When Microsoft returns the frame of a face, it returns it in the coordinate space of the image, not of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad: is called as it is when you see it on the screen.
Solution:
I'm just a read-only Objective-C developer, so I can't give you code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use the ratio.
Define the frame of a face using what you did, then move the coordinates you determined from the image's coordinate system to the UIImageView's coordinate system. That's less easy. It depends on the contentMode of your UIImageView, but you can quickly find information about it on the Internet (see the sketch after this list).
If you use Auto Layout, add the face frames once Auto Layout has finished calculating the layout, i.e. when viewDidLayoutSubviews: is called.
Or, better, use constraints to set your frame inside the UIImageView.
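For illustration, a minimal sketch of that conversion, assuming UIViewContentModeScaleAspectFit (the function name and parameters are hypothetical, not from the answer):
// Converts a face rectangle given in image pixels into the image view's coordinate space.
static CGRect ZGCConvertFaceRectToImageView(CGRect faceRectInPixels, UIImage *image, UIImageView *imageView)
{
    // 1. Pixels -> points: the image's scale factor is the ratio (for downloaded images it is usually 1).
    CGFloat pointsPerPixel = 1.0 / image.scale;
    CGRect rectInPoints = CGRectMake(faceRectInPixels.origin.x * pointsPerPixel,
                                     faceRectInPixels.origin.y * pointsPerPixel,
                                     faceRectInPixels.size.width * pointsPerPixel,
                                     faceRectInPixels.size.height * pointsPerPixel);

    // 2. Image coordinate space -> image view coordinate space (aspect-fit scale plus letterbox offsets).
    CGFloat fitScale = fminf(CGRectGetWidth(imageView.bounds) / image.size.width,
                             CGRectGetHeight(imageView.bounds) / image.size.height);
    CGFloat offsetX = 0.5f * (CGRectGetWidth(imageView.bounds) - image.size.width * fitScale);
    CGFloat offsetY = 0.5f * (CGRectGetHeight(imageView.bounds) - image.size.height * fitScale);

    return CGRectMake(offsetX + rectInPoints.origin.x * fitScale,
                      offsetY + rectInPoints.origin.y * fitScale,
                      rectInPoints.size.width * fitScale,
                      rectInPoints.size.height * fitScale);
}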
I hope this is clear enough.
Some links:
iOS Drawing Concepts
Displayed Image Frame In UIImageView
I was going through the SpriteKit documentation by Apple and came across a really useful feature that I could use when programming my UI. The problem is I can't get it to work.
Please see this page and scroll down to "Resizing a Sprite" - Apple Docs
I have literally copied the image dimensions and used the same code in case I was doing something wrong, but I always end up with a stretched-looking image rather than the correct "end caps" staying at the same scale.
I am referring to this code:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"stretchable_button.png"];
button.centerRect = CGRectMake(12.0/28.0,12.0/28.0,4.0/28.0,4.0/28.0);
What am I doing wrong? Is there a step I have missed?
EDIT:
Here is the code I have been using. I stripped out my button class and tried to use it with a plain SKSpriteNode, but the problem persists. I also changed the image just to make sure it wasn't that. The image I'm using is 32x32 at normal size.
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
[self addChild:button];
button.position = ccp(200, 200);
button.size = CGSizeMake(128, 64);
button.centerRect = CGRectMake(9/32, 9/32, 14/32, 14/32);
The .centerRect property works as documented if you adjust the sprite's .scale property.
Try:
SKTexture *texture = [SKTexture textureWithImageNamed:@"Button.png"];
SKSpriteNode *button = [[SKSpriteNode alloc] initWithTexture:texture];
button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0);
[self addChild:button];
button.xScale = 128.0/texture.size.width;
button.yScale = 64.0/texture.size.height;
9/32 is integer division, so the result passed to CGRectMake is zero. Ditto the other three parameters. If you use floating point literals like the example you cite, you might get better results.
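For instance, the centerRect line from the question could be written with floating-point literals (a sketch of that fix, values taken from the question's code):
// 9.0/32.0 etc. keep the fractions from truncating to zero via integer division.
button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0);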
Here's a refresher on how exactly this works. By the way, my image is 48 pixels wide and 52 pixels high, but this doesn't matter at all. Any image can be used:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
//(x, y, width, height). The first two values are the corners of the image that you DON'T want touched/resized (they will just be moved).
//The second two values represent how much of the image's width & height you want cut out & used as stretching material. The cut-out happens from the center of the image.
button.centerRect = CGRectMake(20/button.frame.size.width, 20/button.frame.size.height, 5/button.frame.size.width, 15/button.frame.size.height);
button.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2); //Positions sprite in the middle of the screen.
button.xScale = 4; //Resizes width (This is all I needed).
//button.yScale = 2; //Resizes height (Commented out because I didn't need this. You can uncomment if the button needs to be higher).
[self addChild:button];
Read the section called "Resizing a Sprite" in this document: https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html#//apple_ref/doc/uid/TP40013043-CH9-SW10
'Figure 2-4 A stretchable button texture' demonstrates how the (x, y, width, height) works.
Based on rwr's answer, here is a working init method for an SKSpriteNode. I use this in my own game. Basically you make insets of 10px all around the output image, and then call it like this:
[[HudBoxScalable alloc] initWithTexture:[atlas textureNamed:@"hud_box_9grid.png"] inset:10 size:CGSizeMake(300, 100) delegate:(id<HudBoxDelegate>)clickedObject];
-(id) initWithTexture:(SKTexture *)texture inset:(float)inset size:(CGSize)size {
if (self=[super initWithTexture:texture]) {
self.centerRect = CGRectMake(inset/texture.size.width,inset/texture.size.height,(texture.size.width-inset*2)/texture.size.width,(texture.size.height-inset*2)/texture.size.height);
self.xScale = size.width/texture.size.width;
self.yScale = size.height/texture.size.height;
}
return self;
}
How can I implement the image below programmatically - meaning the digits can change at runtime or even be replaced with a movie?
Just add a blurred UIView on top of your content.
For example, make a UIImage of your desired view size, blur it using a CIFilter, and then add it to your view. It should achieve the desired effect.
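For illustration, a minimal sketch of that snapshot-and-blur idea (the method name blurredImageFromView:radius: is my own; on iOS 8+ a UIVisualEffectView is usually the simpler option):
// Snapshots a view, blurs the snapshot with Core Image, and returns it as a UIImage.
- (UIImage *)blurredImageFromView:(UIView *)view radius:(CGFloat)radius
{
    // Render the view into an image.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Blur the snapshot with CIGaussianBlur.
    CIImage *input = [CIImage imageWithCGImage:snapshot.CGImage];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@(radius) forKey:kCIInputRadiusKey];

    // Crop back to the original extent, since the blur grows the image at the edges.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef blurredCG = [context createCGImage:blur.outputImage fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:blurredCG scale:snapshot.scale orientation:UIImageOrientationUp];
    CGImageRelease(blurredCG);
    return result;
}
The returned image can then be placed in an ordinary UIImageView on top of the view you want to obscure.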
This is generally the same question and has been answered with quite a few methods. Anyway, I would propose one more:
Get the image from the UIView:
+ (UIImage *)imageFromLayer:(CALayer *)layer {
UIGraphicsBeginImageContext([layer frame].size);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
or rather, play around with this a bit to get the desired part of the view as the image. Now create a new view and add image views to it (with the image you got from the layer). Then move the centers of the image views to approximate a Gaussian blur, take the image from this layer again, and place it back on the original view.
Moving the centers should be defined by a radius fragment (I'd start with .5f) and a resample count.
for(int i=1; i<resampleCount; i++) {
view1.center = CGPointMake(view1.center.x + radiusFragment*i, view1.center.y);
view2.center = CGPointMake(view2.center.x - radiusFragment*i, view2.center.y);
view3.center = CGPointMake(view3.center.x, view3.center.y + radiusFragment*i);
view4.center = CGPointMake(view4.center.x, view4.center.y - radiusFragment*i);
//add the subviews
}
//get the image from view
All the subviews need to have their alpha set to 1.0f/(resampleCount*4).
This method might not be the fastest, but it is extremely easy to implement, and if you can tune the radius and resample count down to a minimum number of fragments it should work pretty well.
Use a UIView with a white background and decrease its alpha:
blurView.backgroundColor = [UIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:0.3];