Creating a UIButton in the shape of a given image - iOS

How can I create a UIButton in the shape of a given image?
I am able to create something close to it, but the image doesn't seem to fit in the button.
My requirement is the image given below, i.e. a semicircle.
The code I used to create the button is given below.
What changes should I make so that the button takes the shape of the image?
PS: the addSubview is done in another class.
btnStartGame = [UIButton buttonWithType:UIButtonTypeCustom];
[btnStartGame setFrame:CGRectMake(60, 200, 200, 200)];
btnStartGame.titleLabel.font = [UIFont fontWithName:@"Helvetica-Bold" size:30];
[btnStartGame setImage:[UIImage imageNamed:@"Draw button.png"]
              forState:UIControlStateNormal];
btnStartGame.titleLabel.textColor = [UIColor redColor];
btnStartGame.clipsToBounds = YES;
btnStartGame.layer.cornerRadius = 50; // note: half of the 200pt width would be 100
btnStartGame.layer.borderColor = [UIColor redColor].CGColor;
btnStartGame.layer.borderWidth = 2.0f;
btnStartGame.tag = 20;
btnStartGame.highlighted = NO;

There is a really great tutorial here, with downloadable source code:
http://iphonedevelopment.blogspot.co.uk/2010/03/irregularly-shaped-uibuttons.html
In essence, you need to create a category on UIImage that checks whether the point you touched falls on a transparent part of the image or not. This means you can use it to hit-test irregular shapes.
You then subclass UIButton and override hitTest:withEvent:.
Hope this helps.
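A minimal sketch of that approach (the category method isPointTransparent: is illustrative, not from the tutorial):
// UIButton subclass: let touches on transparent pixels fall through.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    UIView *hit = [super hitTest:point withEvent:event];
    if (hit == self) {
        UIImage *image = [self imageForState:UIControlStateNormal];
        if ([image isPointTransparent:point]) {
            return nil; // ignore touches on the transparent region
        }
    }
    return hit;
}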

Although the solution by Jeff LaMarche linked in the other answer works fine, it uses a lot of memory and/or does a lot of work:
If you create the NSData once in the initializer, you end up holding a relatively large block of memory for the lifetime of each button.
If you make it transient, the way it is done in the answer, you end up converting the entire image each time you hit-test the button: processing thousands of pixels and throwing away the result after fetching a single byte!
It turns out that you can do this much more efficiently, in terms of both memory use and CPU consumption, by following the approach outlined in this answer.
Subclass UIButton, and override its pointInside:withEvent: method like this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    if (![super pointInside:point withEvent:event]) {
        return NO;
    }
    // Draw only the single pixel under the touch into a 1x1 alpha-only bitmap.
    unsigned char pixel[1] = { 0 };
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    UIImage *image = [self backgroundImageForState:UIControlStateNormal];
    [image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);
    // The touch counts as inside only if that pixel is not fully transparent.
    return pixel[0] != 0;
}
The code above takes the alpha value from the button's background image. If your button uses another image, change the line that initializes the image variable above.
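For example (an assumption about your setup; the question sets its image with setImage:forState: rather than as a background image):
// If the image was set with -setImage:forState::
UIImage *image = [self imageForState:UIControlStateNormal];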

Related

Set CALayer as SCNMaterial's diffuse contents

I've been searching all over the internet for the past couple of days to no avail. Unfortunately, the Apple documentation about this specific issue is vague, and no sample code is available (at least that's what I found). What seems to be the issue, you may ask...
I'm trying to set a UIView's layer as the contents of the material that is used to render an iPhone model's screen (yep, trippy :P ). The iPhone screen's UV mapping is set from 0 to 1, so the texture/layer should map cleanly onto the texels.
So, instead of the layer appearing rendered correctly on the iPhone model, I get a distorted render.
[Screenshots were attached here: correct render on the left, incorrect render on the right.]
Also note that when I set a breakpoint, debug the actual iPhone node, and view it in Xcode, a completely different render is shown, and the layer gets half-fixed when I continue execution.
Now then... HOW do I fix this issue??? I've tried playing with the diffuse property's contentsTransform matrix, but nothing gets fixed. I've also tried resizing the UIView to 256x256 (since the UV map seems to be 256x256, as shown in Blender, the 3D modelling package), but that doesn't fix anything.
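For reference, "playing with the transform" here means something along these lines (a sketch; screenMaterial stands for the screen's SCNMaterial, which is not named in the code below):
// Example attempt: flip the texture vertically via the contents transform.
screenMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(1, -1, 1);
screenMaterial.diffuse.wrapS = SCNWrapModeRepeat;
screenMaterial.diffuse.wrapT = SCNWrapModeRepeat;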
Here is the code for the layer:
UIView *screen = [[UIView alloc] initWithFrame:self.view.bounds];
screen.backgroundColor = [UIColor redColor];

UIView *temp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, screen.bounds.size.width, 60)];
temp.backgroundColor = [UIColor colorWithRed:0 green:(112.f/255.f) blue:(235.f/255.f) alpha:1];

UILabel *label = [[UILabel alloc] initWithFrame:CGRectInset(temp.bounds, 40, 0)];
label.frame = CGRectOffset(label.frame, 40, 0);
label.textColor = [UIColor colorWithRed:0 green:(48.f/255.f) blue:(84.f/255.f) alpha:1];
label.text = @"Select Track";
label.font = [UIFont fontWithName:@"HelveticaNeue-Light" size:30];
label.minimumScaleFactor = 0.001;
label.adjustsFontSizeToFitWidth = YES;
label.lineBreakMode = NSLineBreakByClipping;
[temp addSubview:label];

UIView *separator = [[UIView alloc] initWithFrame:CGRectMake(0, temp.bounds.size.height - 2, temp.bounds.size.width, 2)];
separator.backgroundColor = [UIColor colorWithRed:0 green:(48.f/255.f) blue:(84.f/255.f) alpha:1];
[temp addSubview:separator];
[screen addSubview:temp];

screen.layer.contentsGravity = kCAGravityCenter;
Edit
What's even weirder is that if I capture a UIImage of the view using:
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
and use that as the diffuse's contents... everything works out perfectly fine?! It's really weird and frustrating, since the image's size is exactly the same as the UIView's...
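That is, the workaround that renders correctly is (material stands for the screen's SCNMaterial, which isn't shown in the code above):
// Snapshot the view into a UIImage first, then use the image as contents.
material.diffuse.contents = [self imageWithView:screen];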
Edit 2
I ended up just using an image of the view as the texture, which makes things much more static than I needed. I won't set this as the answer, because I'll still be waiting for a correct fix to this issue, even if it comes a long time from now. So if you have an answer and this topic has been open for a long time, please bump it if you can. The documentation on this area is just so poor.
New post on an old thread, but these days it's possible to set the UIView itself as an SCNMaterialProperty's (diffuse) contents. The intention to support this feature was communicated directly by SceneKit engineering at Apple, though the documentation has not yet been updated to reflect it.
To tie back to the original post: do not set a UIView's layer as material property contents; instead, set the contents to the UIView itself.
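A minimal sketch of that (the node and view variable names are illustrative):
// Set the view itself, not its backing layer, as the material contents.
SCNMaterial *screenMaterial = phoneNode.geometry.firstMaterial;
screenMaterial.diffuse.contents = screenView;          // supported
// screenMaterial.diffuse.contents = screenView.layer; // not supported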
[Update: according to Lance's comment below, support for views may be getting worse rather than getting better.]
The SceneKit docs pretty strongly suggest that, while there are cases where you can use animated CALayers as material content, that doesn't include UIView layers:
SceneKit cannot use a layer that is already being displayed elsewhere (for example, the backing layer of a UIView object).
That suggests that if you want to make animated content for your material, you're better off with either Core Animation used entirely on its own or SpriteKit.
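For the SpriteKit route, an SKScene can be assigned directly as material contents from iOS 8 on (a sketch; the scene size is illustrative, and screenMaterial again stands for the screen's SCNMaterial):
// Render animated 2D content into the material via a SpriteKit scene.
SKScene *screenScene = [SKScene sceneWithSize:CGSizeMake(256, 256)];
screenScene.backgroundColor = [UIColor redColor];
screenMaterial.diffuse.contents = screenScene;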

Blur screen with iOS 7's snapshot API

I believe the NDA is down, so I can ask this question. I have a UIView subclass:
BlurView *blurredView = ((BlurView *)[self.view snapshotViewAfterScreenUpdates:NO]);
blurredView.frame = self.view.frame;
[self.view addSubview:blurredView];
It does its job so far in capturing the screen, but now I want to blur that view. How exactly do I go about this? From what I've read, I need to capture the current contents of the view (context?), convert it to a CIImage (no?), apply a CIGaussianBlur to it, and draw it back onto the view.
How exactly do I do that?
P.S. The view is not animated, so it should be OK performance-wise.
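The Core Image part of that pipeline would look roughly like this (a sketch, assuming img is a UIImage snapshot of the view; obtaining that snapshot is the part that fails, as the edit below shows):
// Blur a UIImage with CIGaussianBlur, cropping back to the original extent.
CIImage *input = [CIImage imageWithCGImage:img.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@(8.0) forKey:kCIInputRadiusKey];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:blur.outputImage fromRect:input.extent];
UIImage *blurred = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);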
EDIT: Here is what I have so far. The problem is that I can't capture the snapshot into a UIImage; I get a black screen. But if I add the snapshot view as a subview directly, I can see the snapshot is there.
// Snapshot
UIView *view = [self.view snapshotViewAfterScreenUpdates:NO];
// Convert to UIImage
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Apply the UIImage to a UIImageView
BlurView *blurredView = [[BlurView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
[self.view addSubview:blurredView];
blurredView.imageView.image = img;
// Black screen -.-
BlurView.m:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.imageView = [[UIImageView alloc] init];
        self.imageView.frame = CGRectMake(20, 20, 200, 200);
        [self addSubview:self.imageView];
    }
    return self;
}
Half of this question didn't get answered, so I thought it worth adding.
The problem with UIScreen's
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
and UIView's
- (UIView *)resizableSnapshotViewFromRect:(CGRect)rect
                       afterScreenUpdates:(BOOL)afterUpdates
                            withCapInsets:(UIEdgeInsets)capInsets
is that you can't derive a UIImage from them (the "black screen" problem).
In iOS 7, Apple provides a third piece of API for extracting UIImages, a method on UIView:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
             afterScreenUpdates:(BOOL)afterUpdates
It is not as fast as snapshotView, but not bad compared to renderInContext (in the example provided by Apple, it is five times faster than renderInContext and three times slower than snapshotView).
Example use:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[view drawViewHierarchyInRect:rect afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then, to get a blurred version:
UIImage *lightImage = [newImage applyLightEffect];
where applyLightEffect is one of the blur category methods in Apple's UIImage+ImageEffects category mentioned in the accepted answer (the enticing link to the code sample in the accepted answer doesn't work, but this one will get you to the right page: the file you want is iOS_UIImageEffects).
The main reference is WWDC 2013 session 226, Implementing Engaging UI on iOS.
By the way, there is an intriguing note in Apple's reference docs for renderInContext: that hints at the black screen problem:
Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
The note hasn't been updated since 10.5, so I guess 'future versions' may still be a while off, and we can add our new CASnapshotLayer (or whatever) to the list.
Sample code from WWDC: ios_uiimageeffects
There is a UIImage category named UIImage+ImageEffects.
Here is its API:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
                       tintColor:(UIColor *)tintColor
           saturationDeltaFactor:(CGFloat)saturationDeltaFactor
                       maskImage:(UIImage *)maskImage;
For legal reasons I can't show the implementation here, but there is a demo project in the download. It should be pretty easy to get started with.
To summarize how to do this with foundry's sample code, use the following.
I wanted to blur the entire screen just slightly, so for my purposes I'll use the main screen bounds.
CGRect screenCaptureRect = [UIScreen mainScreen].bounds;
UIView *viewWhereYouWantToScreenCapture = [[UIApplication sharedApplication] keyWindow];
//screen capture code
UIGraphicsBeginImageContextWithOptions(screenCaptureRect.size, NO, [UIScreen mainScreen].scale);
[viewWhereYouWantToScreenCapture drawViewHierarchyInRect:screenCaptureRect afterScreenUpdates:NO];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//blur code
UIColor *tintColor = [UIColor colorWithWhite:1.0 alpha:0];
UIImage *blurredImage = [capturedImage applyBlurWithRadius:1.5 tintColor:tintColor saturationDeltaFactor:1.2 maskImage:nil];
//or use [capturedImage applyLightEffect] but I thought that was too much for me
//use blurredImage in whatever way you so desire!
Notes on the screen capture part
The 2nd argument of UIGraphicsBeginImageContextWithOptions() is the opaque flag. It should be NO unless nothing in your view has an alpha other than 1. If you pass YES, the capture will ignore transparency values, so it will go faster but will probably be wrong.
The 3rd argument of UIGraphicsBeginImageContextWithOptions() is the scale. You probably want to pass in the device's scale, as I did, to differentiate between Retina and non-Retina screens. I haven't really tested this, but 0.0f also works: it uses the scale of the device's main screen.
For drawViewHierarchyInRect:afterScreenUpdates:, watch out what you pass for the screen-updates BOOL. I tried to do this right before backgrounding, and if I didn't pass NO, the app would go crazy with glitches when I returned to the foreground. You might be able to get away with YES, though, if you're not leaving the app.
Notes on blurring
I have a very light blur here. Changing the blurRadius will make it blurrier, and you can change the tint color and alpha to make all sorts of other effects.
Also you need to add a category for the blur methods to work...
How to add the UIImage+ImageEffects category
You need to download the category UIImage+ImageEffects for the blur to work. Download it here after logging in: https://developer.apple.com/downloads/index.action?name=WWDC%202013
Search for "UIImageEffects" and you'll find it. Just pull out the two necessary files and add them to your project: UIImage+ImageEffects.h and UIImage+ImageEffects.m.
Also, I had to enable modules in my build settings because my project wasn't created with Xcode 5. To do this, go to your target's build settings, search for "modules", and make sure that "Enable Modules" and "Link Frameworks Automatically" are both set to Yes, or you'll get compiler errors with the new category.
Good luck blurring!
Check the WWDC 2013 sample application "Running with a Snap".
The blurring there is implemented as a category.

How to set the imageView property of a UITableViewCell to show only a background color?

I have a custom class inheriting from UITableViewCell that shows either an image (to the left of the title) or a generic dark-colored square if the image is not available. The following code shows a dark square on a light-colored cell background:
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(11, 6, 40, 40)];
[imageView setBackgroundColor:kBackgroundGreyColour];
[cell.contentView addSubview:imageView];
However, instead of creating a custom subview in each table cell, I would rather use the existing imageView property of the generic UITableViewCell class and modify it somehow to show the square, as the code above does. This is what I am trying at the moment:
UIImageView* iv = [[UIImageView alloc] initWithFrame:CGRectMake(11, 6, 40, 40)];
[iv setBackgroundColor:[UIColor redColor]];
self.imageView.hidden = NO;
self.imageView.opaque = iv.opaque;
self.imageView.alpha = iv.alpha;
self.imageView.image = iv.image;
[self bringSubviewToFront:self.imageView];
[self.imageView setBackgroundColor:[UIColor redColor]];
I added all those lines to set as many of the existing UIImageView's properties as possible to the same values as the UIImageView instance created in the first code snippet, and yet the second code snippet doesn't show any dark square. It doesn't show anything at all, and the cell looks like there is just the light background and no image view visible. But I see that the imageView property is not nil, so executing all those lines of code in the second snippet should show something?
However, as soon as I assign a new image to the imageView property (e.g. self.imageView.image = [[UIImage alloc] init...]), the square shows the assigned image without problems.
Edit: Just a note that in the second case I am setting the frame of the imageView in layoutSubviews, e.g.:
- (void)layoutSubviews
{
    [super layoutSubviews];
    self.imageView.frame = CGRectMake(11, 6, 40, 40);
}
So my questions are:
1. Which properties of the existing imageView would I need to set, and to what values, so that the code shows a square filled with a specific color (as the first snippet does)?
2. Is there a way of creating a UIImage programmatically so that it shows only a background color, without any image associated with it (and which I could use to set the imageView.image property to show that color)?
3. Is it possible to replace the existing imageView property of a UITableViewCell with a custom view, without adding a custom subview (as the first code snippet did), so that I can show a placeholder UIView with a background color when the image is not available?
The reason your code doesn't work is, as you guessed, that setting the background colour of an image view doesn't create anything in its image property.
And you've figured out that you can't directly set the cell's imageView property either.
I'd say your best bet is the former option: creating a UIImage programmatically.
Although I'd highly suggest simply creating one in your favourite image-editing software and including it in the bundle. It makes for easy replacement later, for when you may get a better image, and takes next to no code and effort.
But if you still wish to do it all programmatically, it's not as simple as you'd hope.
CGRect rect = CGRectMake(11, 6, 40, 40);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [kBackgroundGreyColour CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageview.image = image;
Should do the trick.
This defines the image size, creates a graphics context (think of it as a canvas), picks your grey colour to use, paints the canvas with it, then scans it back in at the small size you wanted.
The little green imp does it all behind the screen (Sorry, too much Terry Pratchett).
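One refinement worth considering (my addition, not part of the original answer): use UIGraphicsBeginImageContextWithOptions() instead, with a scale of 0.0, so the placeholder image stays crisp on Retina screens:
// Same drawing code as above; 0.0 means "use the device's main screen scale".
UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0);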

iPhone Custom UISlider preload and rounded edges issues

I'm trying to implement a custom UISlider. I've extended it with a class called UISliderCustom, which has the following code:
@implementation UISliderCustom
- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        self.frame = CGRectMake(self.frame.origin.x, self.frame.origin.y, 200, 13);
        UIImage *slideMin = [[UIImage imageNamed:@"slideMinimum.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 5, 0, 0)];
        UIImage *slideMax = [[UIImage imageNamed:@"slideMaximum.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 5, 0, 0)];
        [self setThumbImage:[UIImage imageNamed:@"slideThumb.png"] forState:UIControlStateNormal];
        [self setThumbImage:[UIImage imageNamed:@"slideThumb.png"] forState:UIControlStateHighlighted];
        [self setMinimumTrackImage:slideMin forState:UIControlStateNormal];
        [self setMaximumTrackImage:slideMax forState:UIControlStateNormal];
    }
    return self;
}
@end
I ran into two small problems:
1. When I slide to one of the edges (progress = 0.0 / progress = 1.0), I can clearly see "leftovers" at the sides; I'm not sure how to handle that, unfortunately :)
[Slider images and a screenshot of the problem were attached here.]
2. I see the regular UISlider (blue and silver) for a couple of seconds, and only then is the custom graphic loaded, or when I actually move the slider. I'm not sure why this is happening. EDIT: This only happens in the simulator; it works fine now.
Thanks in advance for any assistance :)
Shai.
You have no need to subclass UISlider to achieve this effect, and if you did, you certainly wouldn't set the track images in the drawRect: method. drawRect: should contain drawing code only; it is called whenever any part of the control needs redrawing.
Set the thumb and track images in a separate method, either within your subclass (called from initWithFrame: and initWithCoder:) or in the object that creates the slider in the first place. This only needs to be done once, when the slider is first created. Don't override drawRect:.
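For example, a sketch of that pattern (the setup method name is illustrative):
// One setup method shared by both initializers, so images are set exactly once.
- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self setupSliderImages];
    }
    return self;
}

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        [self setupSliderImages];
    }
    return self;
}

- (void)setupSliderImages {
    [self setThumbImage:[UIImage imageNamed:@"slideThumb.png"]
               forState:UIControlStateNormal];
    // ...set the min/max track images here, as in the question...
}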
You don't need to call awakeFromNib manually either, unless you have some specific code in there as well? That would be a common place to set custom images in a subclass, if you only ever used the slider from IB.
For the square ends, the problem is that the extreme edge of your track image is square, so it shows around the thumb. Make both ends of the track image rounded, with a 1px stretchable area in the middle. [An example image was attached here.]
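In code, that means capping both ends instead of just the left one (a sketch; the 5pt insets are illustrative and should match your asset's rounded end caps):
// Cap both the left and right ends so only the middle strip stretches.
UIImage *slideMin = [[UIImage imageNamed:@"slideMinimum.png"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(0, 5, 0, 5)];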
I just had a very similar problem myself. It turned out that the size (width x height) of the slider I added in Interface Builder didn't match the sizes of the images I was using to customize it. Once I made them match, those "leftovers" at the ends of the slider went away.

Newly created UIImageViews do not show up when added as subviews

Upon a tap, I want a few UIImageView objects to appear on an underlying UIView.
So I created the following action:
- (IBAction)startStars:(id)sender
{
    UIView *that = (UIView *)sender; // a UIButton, invisible, but reacting to Touch Down events
    int tag = that.tag;
    UIView *page = [self getViewByTag:mPages index:tag]; // get the page currently being viewed
    if (page != nil)
    {
        int i;
        UIImage *image = [UIImage imageNamed:@"some.png"];
        for (i = 0; i < 10; ++i)
        {
            // create a new UIImageView at the position of the tapped UIView
            UIImageView *zap = [[UIImageView alloc] initWithFrame:that.frame];
            // assign the UIImage
            zap.image = image;
            // modify the position so we can see all instances
            CGPoint start = that.layer.frame.origin;
            start.x += i * 20;
            zap.layer.position = start;
            [page addSubview:zap]; // add to the UIView
            [zap setNeedsDisplay]; // please please redraw
            [zap release];         // release, since page will retain zap
        }
        [image release];
    }
}
Unfortunately, nothing shows up. The code gets called, the objects are created, the image is loaded, and even the properties are as expected.
The page itself is a really basic UIView, created with Interface Builder to contain other views and controls.
Still, none of this can be seen...
Does anyone have an idea what I am doing wrong? Do I need to set the alpha property (or others)?
Thanks
Zuppa
A couple of things I would check:
Logging to make sure this section of code is actually being called.
Use the designated initializer for UIImageView:
[[UIImageView alloc] initWithImage:image];
This will ensure that your new view has the correct size for the image. What are the relative sizes of zap and your image? It could be out of the frame.
Ensure that the image actually exists and is created
Try not adjusting the layer properties, but setting the center of the image view instead. In fact, in the first instance, don't adjust it at all and just see if you can see anything. I'm not sure, but I think position might move the layer within the view, so it could be moving your image out of sight. Try:
CGPoint newCenter = that.frame.origin;
newCenter.y += zap.bounds.size.height / 2;
newCenter.x += i * 20;
zap.center = newCenter;
setNeedsDisplay is not required. Obviously, from your comments, it was an act of desperation!
Is your UIImage nil?
You don't need the .png extension for imageNamed: to work. It might not even work correctly if you include it; I'm not sure. You're also over-releasing your image: imageNamed: returns an autoreleased image, so there is no reason to call release on it unless you're also calling retain on it somewhere.
I see a potential error: on a second invocation you will re-add another copy of each view, so take care to remove the old subviews before adding new copies. Alternatively, you can move the previously added views.
99% of my cases of non-visible views with images come down to wrong image names, so, as suggested, load the image using a temp variable:
UIImage *img = [UIImage imageNamed:...];
and test whether img is nil.
