iOS: Complex stretching?

I want to stretch an image using 2 stretch areas. So I need to achieve something like this:
But by default, iOS lets me define only one stretchable rect.
Is it possible to solve this without slicing the image into 2 separate images, each with only one stretchable rect?

As stated, I would definitely go with 2 images, or add a category on UIImage that does the job. The key question is what parameters you would pass to such a method.

The only thing which iOS provides out of the box is (as described in this post)
// Image with cap insets
UIImage *image = [[UIImage imageNamed:@"image"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 16, 0, 16)];
There is no way to do what you are referring to without splitting the image, or writing a custom image rendering UIView subclass. You should be careful if going with the latter, as you will be throwing away a lot of optimisations present in UIImageView.
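To illustrate the category idea, the sketch below renders the image slice by slice into a wider image, distributing the extra width evenly between two horizontal stretch areas given as x-ranges. The method name, the parameters, and the even split are all illustrative choices, not a UIKit API:

```objc
// Hypothetical UIImage category: stretch an image horizontally across two
// stretchable columns, keeping the caps and the fixed middle untouched.
@implementation UIImage (TwoAreaStretching)

- (UIImage *)imageStretchedToWidth:(CGFloat)targetWidth
                      stretchArea1:(NSRange)area1
                      stretchArea2:(NSRange)area2
{
    // Extra width to distribute, split evenly between the two stretch areas.
    CGFloat extra = (targetWidth - self.size.width) / 2.0;

    // Five vertical slices: left cap, stretch 1, middle, stretch 2, right cap.
    CGFloat srcX[5]   = {0, (CGFloat)area1.location, (CGFloat)NSMaxRange(area1),
                         (CGFloat)area2.location, (CGFloat)NSMaxRange(area2)};
    CGFloat srcEnd[5] = {(CGFloat)area1.location, (CGFloat)NSMaxRange(area1),
                         (CGFloat)area2.location, (CGFloat)NSMaxRange(area2),
                         self.size.width};

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(targetWidth, self.size.height),
                                           NO, self.scale);
    CGFloat destX = 0;
    for (int i = 0; i < 5; i++) {
        CGFloat srcW  = srcEnd[i] - srcX[i];
        CGFloat destW = (i == 1 || i == 3) ? srcW + extra : srcW; // only areas 1 and 3 grow
        // CGImage coordinates are in pixels, so scale the source rect.
        CGImageRef slice = CGImageCreateWithImageInRect(self.CGImage,
            CGRectMake(srcX[i] * self.scale, 0,
                       srcW * self.scale, self.size.height * self.scale));
        [[UIImage imageWithCGImage:slice scale:self.scale orientation:self.imageOrientation]
            drawInRect:CGRectMake(destX, 0, destW, self.size.height)];
        CGImageRelease(slice);
        destX += destW;
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

@end
```

Since the result is a plain pre-rendered UIImage, you would need to regenerate it whenever the target width changes, unlike a resizable image.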

Related

How can I tile the background with UIImageViews with code efficiently?

I'm working in Xcode 6 on tiling the iPhone background with many UIImageViews and I'd like to know if this is the most efficient solution.
I know one simple solution would be to create image views in the storyboard and cover the entire screen with them manually. I'd like to do it with code. Here's the code I have currently (5x5 is an okay size since I can scale it up or down to fill the screen with bigger or smaller images):
CGRect tiles[5][5];
UIImage *tileImages[5][5];
UIImageView *tileViews[5][5];
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        tiles[i][j] = CGRectMake(50 * i, 50 * j, 50, 50);
        tileImages[i][j] = [UIImage imageNamed:@"tile.png"];
        tileViews[i][j] = [[UIImageView alloc] initWithFrame:tiles[i][j]];
        tileViews[i][j].image = tileImages[i][j];
        [self.view addSubview:tileViews[i][j]];
    }
}
Currently all the images are the same, but in the long haul I'm going to make them dependent on various factors.
I have read around and I know that UIImageViews are finicky. Is this the proper, memory-efficient way to tile a background with UIImageViews? Is there a better way to do this? Can I go in after the tiles are initialized, change the image one of them is displaying, and have it update in real time with just this?
tileViews[1][2].image = [UIImage imageNamed:@"anotherTile.png"];
Thanks in advance, I just finished a basic 6-week course in iOS programming at my college so I still find myself trying to appease the Objective-C gods occasionally.
I guess my doubt would be why you need them to be image views. Drawing an image in a view or layer is so easy, and arranging views or layers in a grid is so easy; what are the image views for, when you are not really using or needing any of the power of image views?
I have several apps that display tiled images - one shows 99 bottles in a grid, one shows a grid of tile "pieces" that the user taps in matched pairs to dismiss them, one shows a grid of rectangular puzzle pieces that the user slides to swap them and get them into the right order, and one shows a grid of "cards" that the user taps in triplets to match them - and in none of those cases do I use image views. In one, they are CALayers; in the other cases they are custom UIView subclasses; but in no case do I use UIImageViews.
For something as simple as a grid of images, using UIImageViews seems, as you seem to imply, overkill. If the reason you have resorted to UIImageViews is that you don't know how to make a UIView or a CALayer draw its content, I'd say, stop and learn how to do that before you go any further.
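For illustration, the same 5x5 grid could be built with plain CALayers; this is a minimal sketch of that approach, assuming a 50-point tile image named tile.png:

```objc
// Sketch: a 5x5 grid of CALayers instead of UIImageViews.
// A layer's contents can be set directly to a CGImage; swapping a tile
// later is just another assignment to that layer's contents property.
UIImage *tileImage = [UIImage imageNamed:@"tile.png"];
for (int i = 0; i < 5; i++) {
    for (int j = 0; j < 5; j++) {
        CALayer *tile = [CALayer layer];
        tile.frame = CGRectMake(50 * i, 50 * j, 50, 50);
        tile.contents = (id)tileImage.CGImage;
        [self.view.layer addSublayer:tile];
    }
}
```

Layers are lighter than views because they don't participate in touch handling or the responder chain, which is exactly the "power of image views" the grid doesn't need.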

Set blend modes on UIImageViews like Photoshop

I've been trying to apply blend modes to my UIImageViews to replicate a PSD mock up file (sorry can't provide). The PSD file has 3 layers, a base color with 60% normal blend, an image layer with 55% multiply blend and a gradient layer with 35% overlay.
I've been trying several tutorials over the internet but still could not get the colors/image to be exactly the same.
One thing I noticed is that the color of my iPhone is different from my Mac's screen.
I found the documentation for Quartz 2D which I think is the right way to go, but I could not get any sample/tutorial about Using Blend Modes with Images.
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBIJEFG
Can anyone provide a good tutorial that does the same as the documentation, so I could at least try to mix things up, should nobody provide a straightforward answer to my question?
This question was asked ages ago, but if someone is looking for the same answer, you can use compositingFilter on the backing layer of your view to get what you want:
overlayView.layer.compositingFilter = "multiplyBlendMode"
Suggested by @SeanA, here: https://stackoverflow.com/a/46676507/1147286
Filters
Complete list of filters is here
Or you can print out the compositing filter types with
print("Filters:\n",CIFilter.filterNames(inCategory: kCICategoryCompositeOperation))
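The same one-liner works in Objective-C, since compositingFilter is a plain id property on CALayer (overlayView here stands in for whatever view you are blending, as in the Swift snippet above):

```objc
// Blend this view multiplicatively with whatever is rendered beneath it.
overlayView.layer.compositingFilter = @"multiplyBlendMode";
```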
Now built into iOS. Historic answer:
You don't want to use UIImageView for this since it doesn't really support blend modes.
Instead, the way to go would be to create a UIView subclass (which will act just like UIImageView). So make a UIView subclass called something like BlendedImageView. It should have an image property, and then in the drawRect: method you can do this:
- (void)drawRect:(CGRect)rect
{
    // Calculate a rect based on self.contentMode to replicate UIImageView
    // behaviour. For example, for UIViewContentModeCenter you want a rect
    // that centers the image in the view (it may extend outside 'rect'):
    CGSize imageSize = self.image.size;
    CGRect actualRect = CGRectMake((self.bounds.size.width - imageSize.width) / 2.0,
                                   (self.bounds.size.height - imageSize.height) / 2.0,
                                   imageSize.width,
                                   imageSize.height);
    [self.image drawInRect:actualRect blendMode:kCGBlendModeOverlay alpha:0.5];
}
Replace the alpha and blend mode to fit your preference. Good practice would be to give your BlendedImageView its own blendMode property.
It may be that in your case, you need to build a slightly more advanced view that draws all 3 of your images, using the same code as above.

Center-aligned masked UIImageView / Imitating Twitter & Facebook app's image views

The post title may be a little weird, but here's what I'm trying to do:
Look at the image in the middle, it's displaying only a part of the original image. It's still high definition, not really cropped but only masked with the center of the image at the center of the view.
So basically they put a bigger image behind a smaller view (I did that for having circular imageviews in the past). But how can I achieve that particularly?
Is there a CocoaPod or something that does this, or should I build it myself? Any suggestions on how to code this?
The main goal here is to keep a static space to display images so they're always the same width/height. Doing this effect seems like a good way of achieving this.
EDIT: Here's a little sketch of an idea I just had to mimic that behavior:
Thanks a lot and have a nice day.
If you're asking what I think you're asking, you don't have to look far to find this functionality. Use UIView's built in contentMode property, specifically in this case, UIViewContentModeScaleAspectFill.
[imageView setContentMode:UIViewContentModeScaleAspectFill];
Then to crop of the parts of the image extending out of the frame, be sure to use clipsToBounds:
[imageView setClipsToBounds:YES];
Here is another solution, but make sure your image's width and height are greater than the image view's:
imageView.contentMode = UIViewContentModeCenter;
imageView.clipsToBounds = YES;
imageView.clearsContextBeforeDrawing = NO;

Draw image in UIView using CoreGraphics and draw text on it outside drawRect

I have a custom UITableViewCell subclass which shows an image and a text over it.
The image is downloaded while the text is readily available at the time the table view cell is displayed.
From various places, I read that it is better to have just one view and draw everything in the view's drawRect method to improve performance, compared to having multiple subviews (in this case a UIImageView and 2 UILabels).
I don't want to draw the image in the custom table view cell's drawRect because:
the image will probably not be available the first time it's called, and
I don't want to redraw the whole image every time someone calls drawRect.
The image should only be drawn when someone asks for it to be displayed (for example, when the network operation completes and the image is available to be rendered). The text, however, is drawn in the -drawRect method.
The problems:
I am not able to show the image on the screen once it is downloaded.
The code I am using currently is:
- (void)drawImageInView
{
    //.. completion block after downloading from network
    if (image) { // Image downloaded from the network
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 1.0);
        CGContextSetTextDrawingMode(context, kCGTextFill);
        CGPoint posOnScreen = self.center;
        CGContextDrawImage(context, CGRectMake(posOnScreen.x - image.size.width / 2,
                                               posOnScreen.y - image.size.height / 2,
                                               image.size.width,
                                               image.size.height),
                           image.CGImage);
        UIGraphicsEndImageContext();
    }
}
I have also tried:
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIGraphicsEndImageContext();
to no avail.
How can I make sure the text is drawn on top of the image when it is rendered? Should calling [self setNeedsDisplay] after UIGraphicsEndImageContext() be enough to ensure that the text is rendered on top of the image?
You're right that drawing the text yourself will make your application faster, as there's no UILabel object overhead, but UIImageViews are highly optimized and you will probably never be able to draw images faster than this class does. Therefore I highly recommend you use UIImageViews to draw your images. Don't fall into the optimization pitfall: only optimize when you see that your application is not performing at its max.
Once the image is downloaded, just set the imageView's image property to your image and you'll be done.
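As a sketch of that simpler approach (the download method and the cell's property names here are placeholders, not a real API):

```objc
// Hypothetical completion handler from your image download code.
// The assignment must happen on the main queue since it touches UIKit.
[self downloadImageWithCompletion:^(UIImage *image) {
    dispatch_async(dispatch_get_main_queue(), ^{
        cell.thumbnailView.image = image; // the UIImageView handles the drawing
        cell.titleLabel.text = title;     // the UILabel sits on top of the image view
    });
}];
```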
Notice that the stackoverflow page you linked to is almost four years old, and that question links to articles that are almost five years old. When those articles were written in 2008, the current device was an iPhone 3G, which was much slower (both CPU and GPU) and had much less RAM than the current devices in 2013. So the advice you read there isn't necessarily relevant today.
Anyway, don't worry about performance until you've measured it (presumably with the Time Profiler instrument) and found a problem. Just implement your interface in the simplest, most maintainable way you can. Then, if you find a problem, try something more complicated to fix it.
So: just use a UIImageView to display your image, and a UILabel to display your text. Then test to see if it's too slow.
If your testing shows that it's too slow, profile it. If you can't figure out how to profile it, or how to interpret the profiler output, or how to fix the problem, then come back and post a question, and include the profiler output.

Core Plot - stretchable images

I was not able to find any example, nor any other question, regarding stretchable images in Core Plot. I'm trying to use an annotation in my graph whose image must be stretchable (the corners of the image must be left untouched).
I set the fill of the annotation with fillWithImage: and I think I'm right on that spot, but the image is resizing itself completely, stretching the corners as well.
I tried all the combinations with UIImage and CPTImage that I know of or have seen. But to no avail.
One example is:
UIImage *annotationImage = [[UIImage imageNamed:@"image.png"] stretchableImageWithLeftCapWidth:13 topCapHeight:12];
CPTFill *annotationFill = [CPTFill fillWithImage:[CPTImage imageWithCGImage:annotationImage.CGImage]];
borderLayer.fill = annotationFill;
Please tell me this is possible and I'm missing something.
Thanks in advance
This is not supported in the current version of Core Plot. Please open an enhancement request on the Core Plot issue tracker.
