I have three buttons on my screen whose background images are set in the storyboard; the background images are hexagonal shapes. I'm currently playing around with gravity: when a button is pressed, I want them all to fall to the bottom of the screen. I would like the buttons to react like hexagonal shapes when bouncing off each other, rather than the rectangular shapes they actually are.
Is there a way to clip the UIButton's frame to the hexagonal .png?
- (IBAction)home:(id)sender {
    animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];

    // Gravity pulls all three buttons downward.
    gravity = [[UIGravityBehavior alloc] initWithItems:@[self.youtuberLyr, self.gameLyr, self.homeLyr]];
    [animator addBehavior:gravity];

    // Let the buttons collide with each other and with the reference view's edges.
    collision = [[UICollisionBehavior alloc] initWithItems:@[self.youtuberLyr, self.gameLyr, self.homeLyr]];
    collision.translatesReferenceBoundsIntoBoundary = YES;
    [animator addBehavior:collision];

    // A thin barrier along the bottom of the screen.
    barrier = [[UIView alloc] initWithFrame:CGRectMake(0, 1024, 768, 0)];
    barrier.backgroundColor = [UIColor redColor];
    [self.view addSubview:barrier];

    CGPoint rightEdge = CGPointMake(barrier.frame.origin.x + barrier.frame.size.width, barrier.frame.origin.y);
    [collision addBoundaryWithIdentifier:@"barrier" fromPoint:barrier.frame.origin toPoint:rightEdge];
}
I have tried googling to no avail. Any help is greatly appreciated.
UIKit Dynamics only supports rectangular shapes, as defined by this protocol:
@protocol UIDynamicItem <NSObject>
@property (nonatomic, readwrite) CGPoint center;
@property (nonatomic, readonly) CGRect bounds;
@property (nonatomic, readwrite) CGAffineTransform transform;
@end
Maybe SpriteKit's SKPhysicsBody would work for your case. Specifically, by passing a hexagon path to this initializer.
+ (SKPhysicsBody *)bodyWithPolygonFromPath:(CGPathRef)path
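For example, a minimal sketch (assuming a regular hexagon of roughly 50-point radius centered on the node, built counterclockwise as the initializer expects; the radius and image name are assumptions you'd match to your actual hexagonal.png):
// Build a regular hexagon path centered on the node's origin.
CGFloat radius = 50.0;
CGMutablePathRef hexPath = CGPathCreateMutable();
for (int i = 0; i < 6; i++) {
    CGFloat angle = i * M_PI / 3.0;
    CGFloat x = radius * cos(angle);
    CGFloat y = radius * sin(angle);
    if (i == 0) {
        CGPathMoveToPoint(hexPath, NULL, x, y);
    } else {
        CGPathAddLineToPoint(hexPath, NULL, x, y);
    }
}
CGPathCloseSubpath(hexPath);

// Attach the polygon body to a sprite node (image name assumed).
SKSpriteNode *hexNode = [SKSpriteNode spriteNodeWithImageNamed:@"hexagonal"];
hexNode.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:hexPath];
CGPathRelease(hexPath);
Unlike UIKit Dynamics, SpriteKit's physics engine will then resolve collisions against the hexagon outline rather than a bounding rectangle.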
I'm trying to create a view controller to mix two UIImages. One of them is static, acting as the background; the other can be scaled, dragged, and rotated to place it where I want on top of the first one.
I created a test view controller like this:
Both the blue background (_backImageView) and the Mario image (_marioImageView) are UIImageViews at the same level (neither is a child of the other). I handle all the gesture recognizers like this:
@interface ViewController () <UIGestureRecognizerDelegate>
@property (strong, nonatomic) IBOutlet UIPanGestureRecognizer *pan;
@property (strong, nonatomic) IBOutlet UIPinchGestureRecognizer *pinch;
@property (strong, nonatomic) IBOutlet UIRotationGestureRecognizer *rotation;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
_pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(move:)];
[_marioImageView addGestureRecognizer:_pan];
_pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(resize:)];
[_marioImageView addGestureRecognizer:_pinch];
_rotation = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotate:)];
[_marioImageView addGestureRecognizer:_rotation];
}
- (void)move:(UIPanGestureRecognizer *)pan
{
CGPoint translation = [pan translationInView:_backImageView];
CGPoint newCenter = CGPointMake(pan.view.center.x + translation.x,
pan.view.center.y + translation.y);
CGPoint newBottomRight = CGPointMake(newCenter.x + pan.view.frame.size.width / 2,
newCenter.y + pan.view.frame.size.height / 2);
CGPoint newOrigin = CGPointMake(newCenter.x - pan.view.frame.size.width / 2,
newCenter.y - pan.view.frame.size.height / 2);
if (CGRectContainsPoint(_backImageView.frame, newBottomRight) &&
CGRectContainsPoint(_backImageView.frame, newOrigin)) {
pan.view.center = newCenter;
[pan setTranslation:CGPointZero inView:_backImageView];
}
}
- (void)resize:(UIPinchGestureRecognizer *)pinch
{
if (CGRectContainsRect(_backImageView.frame, pinch.view.frame)) {
pinch.view.transform = CGAffineTransformScale(pinch.view.transform, pinch.scale, pinch.scale);
pinch.scale = 1.0;
}
}
- (void)rotate:(UIRotationGestureRecognizer *)rotation
{
if (CGRectContainsRect(_backImageView.frame, rotation.view.frame)) {
rotation.view.transform = CGAffineTransformRotate(rotation.view.transform, rotation.rotation);
rotation.rotation = 0;
}
}
Then, when I touch the "Mix Images" button, this is what it does (_finalImageView is another UIImageView used to display the result of the mix):
- (IBAction)mixImages:(id)sender
{
UIImage *backImage = _backImageView.image;
UIImage *marioImage = _marioImageView.image;
CGSize finalSize = backImage.size;
CGSize marioSize = marioImage.size;
UIGraphicsBeginImageContext(finalSize);
[backImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
CGPoint relativeOrigin = [_marioImageView convertPoint:_marioImageView.frame.origin toView:_backImageView];
[marioImage drawInRect:CGRectMake(relativeOrigin.x,
relativeOrigin.y,
marioSize.width,
marioSize.height)];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[_finalImageView setImage:finalImage];
}
The problem is that, in the final image, the Mario image is shown without any transformation (no scale, no rotation, nothing).
What am I doing wrong?
Thanks
When you access imageView.image, it just returns the original image without any transformation. You would need to keep track of all those transformations and write your own code to extract the transformed image from the image view.
One way is to screenshot the imageview after the rotation/scale/transform:
// Call this method after the rotation/scaling is done, i.e. in your mix method.
- (UIImage *)screenshotImageView:(UIImageView *)imgV {
UIGraphicsBeginImageContext(imgV.frame.size);
CGContextRef c = UIGraphicsGetCurrentContext();
// Shift the context so the image view's region of self.view lands at (0, 0).
CGContextTranslateCTM(c, -imgV.frame.origin.x, -imgV.frame.origin.y);
[self.view.layer renderInContext:c];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
So in your case, the two images to merge would be:
UIImage *backImage = [self screenshotImageView:_backImageView];
UIImage *marioImage =[self screenshotImageView:_marioImageView];
Use the image returned to merge the images as you want to.
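A rough sketch of how your mixImages: method could use those screenshots instead of the raw images (same outlets as in your code, and following the approach above):
- (IBAction)mixImages:(id)sender
{
    // Screenshot both image views so the current transforms are baked into the images.
    UIImage *backImage = [self screenshotImageView:_backImageView];
    UIImage *marioImage = [self screenshotImageView:_marioImageView];

    UIGraphicsBeginImageContext(_backImageView.frame.size);
    // Draw the background first, then the transformed Mario at its on-screen
    // position expressed in the background view's coordinate space.
    [backImage drawInRect:_backImageView.bounds];
    CGRect marioRect = [_backImageView convertRect:_marioImageView.frame fromView:_marioImageView.superview];
    [marioImage drawInRect:marioRect];

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [_finalImageView setImage:finalImage];
}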
Another way is to derive the transformed image from the image's properties.
Let's say you flipped the image in the image view, and you want to get the flipped image back. You can try:
UIImage* flippedImage = [UIImage imageWithCGImage:imgView.image.CGImage
scale:imgView.image.scale
orientation:UIImageOrientationUpMirrored];
Apple's documentation has me a bit confused.
According to the documentation page https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Nodes/Nodes.html, the visible area of the scene in points and the size of the scene should be the same, per the sentence:
"The size of the scene specifies the size of the visible portion of
the scene in points".
I created a custom scene that gets its size from the size of the device's screen (in my case I've been testing this on an iPad in portrait mode, so the size should be 768 points wide by 1024 high).
The following line, which is called in the scene's createContent method,
NSLog(#"self.size : %f %f", self.size.width, self.size.height);
Returns
2014-07-15 14:53:28.844 SpriteWalkthrough[15888:90b] self.size : 768.000000 1024.000000
as expected.
However, when I try to draw an SKSpriteNode at the position (self.size.width/2, self.size.height/2), the node is drawn in the upper-right corner of the screen, not in the middle.
Why is this happening?
For other people who may be making similar mistakes drawing SKSpriteNodes into a scene, take a look at my source code for the scene, and in particular pay attention to the //self.sCar.car.position = self.sCar.position; line.
FoolAroundScene.h
#import <SpriteKit/SpriteKit.h>
@interface FoolAroundScene : SKScene
@end
FoolAroundScene.m
#import "FoolAroundScene.h"
#import "ScrollingBackground.h"
#import "SpriteCar.h"
#define BACKGROUND_NAME @"clouds.jpg"
@interface FoolAroundScene ()
@property BOOL contentCreated;
@property (strong, nonatomic) ScrollingBackground * sbg;
@property (strong, nonatomic) SpriteCar * sCar;
@end
@implementation FoolAroundScene
-(void) didMoveToView:(SKView *)view
{
if (!self.contentCreated) {
[self createContent];
self.contentCreated = !self.contentCreated;
}
}
-(void)createContent
{
self.sbg = [[ScrollingBackground alloc] initWithBackgroundImage:BACKGROUND_NAME size:self.size speed:2.0];
self.sCar = [[SpriteCar alloc] init];
[self addChild:self.sbg];
[self addChild:self.sCar];
self.sCar.position = CGPointMake(self.size.width/2, self.size.height/2);
//self.sCar.car.position = self.sCar.position;
SKSpriteNode * spriteCar = [self.sCar makeCarSprite];
spriteCar.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
NSLog(#"anchor point : %f %f", self.anchorPoint.x, self.anchorPoint.y);
NSLog(#"self.size : %f %f", self.size.width, self.size.height);
NSLog(#"self.scalemode : %d", self.scaleMode);
}
-(void) update:(NSTimeInterval)currentTime
{
[self.sbg update:currentTime];
}
@end
Even though I had set sCar's position property to the middle of the scene, setting its child node (sCar's car property) to the same position does not put the child in the same place as its parent, because a child's position is relative to its parent.
Explained another way: the parent's position in the scene's coordinate space was (384, 512), and the child's position in its parent's coordinate space was also (384, 512). That means the child's position in the scene's coordinate space was actually (768, 1024), which is why the car was being drawn in the upper-right corner of the screen.
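A small sketch of that coordinate relationship (assuming, for illustration, that the car child were exposed as self.sCar.car, which in my code it is not since it lives in SpriteCar's class extension):
// The parent node's position is expressed in scene coordinates.
self.sCar.position = CGPointMake(self.size.width / 2, self.size.height / 2); // (384, 512) in the scene

// A child's position is expressed in its PARENT's coordinates, so giving the
// child the same numbers offsets it by another half screen.
self.sCar.car.position = self.sCar.position; // (384, 512) relative to sCar

// Converting the child's position into scene coordinates shows where it
// actually lands on screen: the upper-right corner.
CGPoint carInScene = [self convertPoint:self.sCar.car.position fromNode:self.sCar];
NSLog(@"car in scene coordinates: %@", NSStringFromCGPoint(carInScene)); // (768, 1024)

// Leaving the child at CGPointZero keeps it centered on its parent instead.
self.sCar.car.position = CGPointZero;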
Also, in case anyone wants the implementation of the sprite car, here it is: a crudely drawn car that can be used to get a grip on how Sprite Kit works.
SpriteCar.h
#import <SpriteKit/SpriteKit.h>
@interface SpriteCar : SKNode
-(id)init;
-(SKSpriteNode *)makeCarSprite;
@end
SpriteCar.m
#import "SpriteCar.h"
@interface SpriteCar ()
@property (nonatomic, strong) SKSpriteNode * car;
@end
@implementation SpriteCar
-(id) init
{
self = [super init];
if (self) {
self.car = [self makeCarSprite];
[self addChild:self.car];
}
return self;
}
-(SKSpriteNode *) makeCarSprite
{
SKSpriteNode * carBody1 = [[SKSpriteNode alloc] initWithColor:[SKColor redColor] size:CGSizeMake(64.0, 24.0)];
SKSpriteNode * carBody2 = [[SKSpriteNode alloc] initWithColor:[SKColor redColor] size:CGSizeMake(32.0, 32.0)];
SKSpriteNode * wheel1 = [[SKSpriteNode alloc] initWithColor:[SKColor blackColor] size:CGSizeMake(8.0, 8.0)];
SKSpriteNode * wheel2 = [wheel1 copy];
SKSpriteNode * light = [[SKSpriteNode alloc] initWithColor:[SKColor yellowColor] size:CGSizeMake(6.0, 6.0)];
carBody2.position = carBody1.position;
wheel1.position = CGPointMake(30.0, -30);
wheel2.position = CGPointMake(-30.0, -30.0);
light.position = CGPointMake(32.0, 11.0);
[carBody1 addChild:carBody2];
[carBody1 addChild:wheel1];
[carBody1 addChild:wheel2];
[carBody1 addChild:light];
SKAction * hover = [SKAction sequence:@[[SKAction moveByX:0.0 y:5.0 duration:0.1],
[SKAction waitForDuration:0.05],
[SKAction moveByX:0.0 y:-5.0 duration:0.1],
[SKAction waitForDuration:0.05]]];
[carBody1 runAction:[SKAction repeatActionForever:hover]];
return carBody1;
}
@end
I want to achieve a proper perspective "tilt" on two separate, side-by-side UIView squares. In the image below, the red and green squares are separate UIViews with the same transform applied. Visually this perspective seems incorrect (is it?); at least, the yellow/blue square UIViews give the better illusion. The yellow and blue squares are actually subviews of a rectangular parent UIView, and the transform was applied to the parent view.
Here's the code:
@interface PEXViewController ()
@property (strong, nonatomic) IBOutlet UIView *redSquare;
@property (strong, nonatomic) IBOutlet UIView *greenSquare;
@property (strong, nonatomic) IBOutlet UIView *yellowSquareBlueSquare;
@end
@implementation PEXViewController
#define TILT_AMOUNT 0.65
-(void)tiltView:(UIView *)slave{
CATransform3D rotateX = CATransform3DIdentity;
rotateX.m34 = -1 / 500.0;
rotateX = CATransform3DRotate(rotateX, TILT_AMOUNT * M_PI_2, 1, 0, 0);
slave.layer.transform = rotateX;
}
- (void)viewDidLoad
{
[super viewDidLoad];
[self tiltView:self.greenSquare];
[self tiltView:self.redSquare];
[self tiltView:self.yellowSquareBlueSquare];
}
@end
1) Is there a simple way to apply a transform (or transforms) to the separate red/green UIViews and achieve the same effect as the "grouped" yellow and blue UIViews? I'd prefer to keep the views separate, as this is a universal app and the UIViews are not side by side in, for example, the iPad layout.
2) If #1 is not possible, I'm guessing the best thing to do is simply create a parent view that is present in, say, the iPhone layout but not in the iPad layout. Any other alternatives?
I opted for solution #2. I created a short routine that calculates a bounding box from an array of UIViews, creates a new parent view from that bounding box, and then adds the arrayed views as children. I can then apply the transform to the parent view for the desired effect. Here's the code for gathering and adopting the child subviews.
-(UIView *)makeParentWithSubviews:(NSArray *)arrayOfViews{
// Create a bounding box UIView and add the passed UIViews as subviews,
// keeping them "in place" on screen.
CGFloat xMax = -HUGE_VALF;
CGFloat xMin = HUGE_VALF;
CGFloat yMax = -HUGE_VALF;
CGFloat yMin = HUGE_VALF;
for (UIView *myView in arrayOfViews) {
xMin = MIN(xMin, myView.frame.origin.x);
xMax = MAX(xMax, myView.frame.origin.x + myView.frame.size.width);
yMin = MIN(yMin, myView.frame.origin.y);
yMax = MAX(yMax, myView.frame.origin.y + myView.frame.size.height);
}
CGFloat parentWidth = xMax - xMin;
CGFloat parentHeight = yMax - yMin;
CGRect parentFrame = CGRectMake(xMin, yMin, parentWidth, parentHeight);
UIView *parentView = [[UIView alloc] initWithFrame:parentFrame];
// Replace each child's frame
for (UIView *myView in arrayOfViews){
myView.frame = [[myView superview] convertRect:myView.frame toView:parentView];
[myView removeFromSuperview];
[parentView addSubview:myView];
}
parentView.backgroundColor = [UIColor clearColor];
[self.view addSubview:parentView];
return parentView;
}
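For example, the iPhone layout's viewDidLoad might group and tilt the two squares like this (a sketch using the outlets and tiltView: method from above):
// Group the two separate squares under a single bounding-box parent, then
// apply the perspective transform once to the parent so both squares share it.
UIView *groupedSquares = [self makeParentWithSubviews:@[ self.redSquare, self.greenSquare ]];
[self tiltView:groupedSquares];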
I am trying to use two subclasses at the same time.
My view controller is a UIViewController subclass that adopts <CLLocationManagerDelegate> so it can work with the GPS, while a UIView subclass is used to draw.
This is a summary of the code I have:
MapViewController.h:
@interface MapViewController : UIViewController <CLLocationManagerDelegate> {
DrawCircle *circleView;
}
@property (nonatomic, retain) DrawCircle *circleView;
@end
DrawCircle.h:
@interface DrawCircle : UIView
-(void)addPoint:(CGPoint)point;
@end
What I was trying to do here was to make DrawCircle a property of MapViewController. However, I am still having no luck drawing anything to the screen.
Here is my DrawCircle.m code if it helps at all:
-(id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
if(self) {
_points = [[NSMutableArray alloc] init];
}
return self;
}
-(void)addPoint:(CGPoint)point {
//Wrap the point in an NSValue to add it to the array
[_points addObject:[NSValue valueWithCGPoint:point]];
//This will tell our view to redraw itself, calling drawRect
[self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
CGContextRef ctx = UIGraphicsGetCurrentContext();
for(NSValue *pointValue in _points) {
CGPoint point = [pointValue CGPointValue];
CGContextAddEllipseInRect(ctx, CGRectMake(point.x - 10, point.y - 10, 20, 20));
}
CGContextSetFillColor(ctx, CGColorGetComponents([[UIColor redColor] CGColor]));
CGContextFillPath(ctx);
}
Also, as a last bit of information, here is a snapshot of my map view controller scene.
I have a feeling that the shape is being drawn, but is being covered up somehow.
EDIT
After further investigation, it appears that the shapes being drawn are being covered by my UIImage. I was just messing around and removed the UIImage, and my shapes were there. Now the question is, how do I "move to back" my UIImage so that my shapes are up front?
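(For what it's worth, UIView's subview-ordering methods are one way to do that; a minimal sketch, where imageView is an assumed outlet for the covering image view and circleView is the DrawCircle view, both assumed to be subviews of self.view:)
// Either push the covering image view behind its siblings...
[self.view sendSubviewToBack:imageView];
// ...or pull the drawing view in front of everything else.
[self.view bringSubviewToFront:circleView];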
I'm trying to draw a rectangle which has four circular handles. Here's what it would look like:
o----o
|    |
|    |
o----o
The circular handles are "hot". In other words, when the user touches it, the handle can be moved around while the rest of the points are anchored. I wanted to know if anyone had an approach for coding this functionality. I'm looking at UIBezierPath to draw the rectangle with circles, but I'm having a hard time thinking about how to allow the user to tap only the circles. I was thinking it may need to be five different UIBezierPath objects, but eventually the UI will consist of multiples of these objects.
Any suggestions would be greatly appreciated. Thanks.
I wouldn't draw it as a single shape with complicated UIBezierPaths at all. I'd think about it as six different pieces: a container, a rectangle, and four circles.
I would have a simple container UIView with a rectangle view and four circular UIViews at its corners. Then put a UIPanGestureRecognizer on each circle. In the gesture handler, move the center of the circle and adjust the underlying rectangle's frame by the same amount. This avoids any complicated paths or math and makes it simple to add and subtract amounts on the rectangle itself.
Update: Code!
I created a self-contained UIView subclass that handles everything. You can create one like so:
HandlesView *view = [[HandlesView alloc] initWithFrame:self.view.bounds];
[view setAutoresizingMask:UIViewAutoresizingFlexibleHeight|UIViewAutoresizingFlexibleWidth];
[view setBackgroundColor:[UIColor redColor]];
[self.view addSubview:view];
// A custom property that contains the selected area of the rectangle. It's updated while resizing.
[view setSelectedFrame:CGRectMake(128.0, 128.0, 200.0, 200.0)];
The frame of the view itself is the total draggable area. The selected frame is the inner visible rectangle.
//
// HandlesView.h
// handles
//
// Created by Ryan Poolos on 2/12/13.
// Copyright (c) 2013 Ryan Poolos. All rights reserved.
//
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@interface HandlesView : UIView
@property (nonatomic, readwrite) CGRect selectedFrame;
@end
And here is the implementation.
//
// HandlesView.m
// handles
//
// Created by Ryan Poolos on 2/12/13.
// Copyright (c) 2013 Ryan Poolos. All rights reserved.
//
#import "HandlesView.h"
@interface HandlesView ()
{
UIView *rectangle;
NSArray *handles;
NSMutableArray *touchedHandles;
UIView *circleTL;
UIView *circleTR;
UIView *circleBL;
UIView *circleBR;
}
@end
@implementation HandlesView
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
rectangle = [[UIView alloc] initWithFrame:CGRectInset(self.bounds, 22.0, 22.0)];
[self addSubview:rectangle];
// Create the handles and position.
circleTL = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 44.0, 44.0)];
[circleTL setCenter:CGPointMake(CGRectGetMinX(rectangle.frame), CGRectGetMinY(rectangle.frame))];
circleTR = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 44.0, 44.0)];
[circleTR setCenter:CGPointMake(CGRectGetMaxX(rectangle.frame), CGRectGetMinY(rectangle.frame))];
circleBL = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 44.0, 44.0)];
[circleBL setCenter:CGPointMake(CGRectGetMinX(rectangle.frame), CGRectGetMaxY(rectangle.frame))];
circleBR = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 44.0, 44.0)];
[circleBR setCenter:CGPointMake(CGRectGetMaxX(rectangle.frame), CGRectGetMaxY(rectangle.frame))];
handles = @[ circleTL, circleTR, circleBL, circleBR ];
for (UIView *handle in handles) {
// Round the corners into a circle.
[handle.layer setCornerRadius:(handle.frame.size.width / 2.0)];
[self setClipsToBounds:YES];
// Add a drag gesture to the handle.
[handle addGestureRecognizer:[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)]];
// Add the handle to the screen.
[self addSubview:handle];
}
}
return self;
}
- (void)setSelectedFrame:(CGRect)selectedFrame
{
[rectangle setFrame:selectedFrame];
[circleTL setCenter:CGPointMake(CGRectGetMinX(rectangle.frame), CGRectGetMinY(rectangle.frame))];
[circleTR setCenter:CGPointMake(CGRectGetMaxX(rectangle.frame), CGRectGetMinY(rectangle.frame))];
[circleBL setCenter:CGPointMake(CGRectGetMinX(rectangle.frame), CGRectGetMaxY(rectangle.frame))];
[circleBR setCenter:CGPointMake(CGRectGetMaxX(rectangle.frame), CGRectGetMaxY(rectangle.frame))];
}
- (CGRect)selectedFrame
{
return rectangle.frame;
}
// Forward the background color.
- (void)setBackgroundColor:(UIColor *)backgroundColor
{
// Set the container to clear.
[super setBackgroundColor:[UIColor clearColor]];
// Set our rectangle's color.
[rectangle setBackgroundColor:[backgroundColor colorWithAlphaComponent:0.5]];
for (UIView *handle in handles) {
[handle setBackgroundColor:backgroundColor];
}
}
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
// The handle we're moving.
UIView *touchedHandle = gesture.view;
// Keep track of touched Handles.
if (!touchedHandles) {
touchedHandles = [NSMutableArray array];
}
switch (gesture.state) {
case UIGestureRecognizerStateBegan:
[touchedHandles addObject:touchedHandle];
break;
case UIGestureRecognizerStateChanged:
{
CGPoint translation = [gesture translationInView:self];
// Calculate this handle's new center
CGPoint newCenter = CGPointMake(touchedHandle.center.x + translation.x, touchedHandle.center.y + translation.y);
// Move corresponding circles
for (UIView *handle in handles) {
if (handle != touchedHandle && ![touchedHandles containsObject:handle]) {
// Match the handles horizontal movement
if (handle.center.x == touchedHandle.center.x) {
handle.center = CGPointMake(newCenter.x, handle.center.y);
}
// Match the handles vertical movement
if (handle.center.y == touchedHandle.center.y) {
handle.center = CGPointMake(handle.center.x, newCenter.y);
}
}
}
// Move this circle
[touchedHandle setCenter:newCenter];
// Adjust the Rectangle
// The origin can just be based on the top-left handle.
float x = circleTL.center.x;
float y = circleTL.center.y;
// Get the width and height from the difference between handles.
// Use fabsf() here; abs() would truncate the float values to integers.
float width = fabsf(circleTR.center.x - circleTL.center.x);
float height = fabsf(circleBL.center.y - circleTL.center.y);
[rectangle setFrame:CGRectMake(x, y, width, height)];
[gesture setTranslation:CGPointZero inView:self];
}
break;
case UIGestureRecognizerStateEnded:
[touchedHandles removeObject:touchedHandle];
break;
default:
break;
}
}
@end
This is only a proof of concept. There are a lot of unhandled caveats, like dragging outside the box, multitouch complications, and negative sizes. All of these problems can be handled very differently, and they are the secret sauce that takes something like this from a nice idea to a beautiful custom interface. I'll leave that part up to you. :)
You will want to store the circle bezier paths in your class for when you implement gesture recognizers.
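For example, a minimal hit-test sketch inside your UIView subclass (handlePaths and activeHandlePath are assumed properties holding the four circle UIBezierPaths and the one currently being dragged):
// Find which stored circle path, if any, contains the touch point, and
// remember it so touchesMoved: can drag that handle.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = [[touches anyObject] locationInView:self];
    for (UIBezierPath *circle in self.handlePaths) {
        if ([circle containsPoint:location]) {
            self.activeHandlePath = circle;
            break;
        }
    }
}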
There is an Apple document describing how to implement a UIView or UIControl that accepts touch events with pictures and sample code.
http://developer.apple.com/library/ios/#documentation/EventHandling/Conceptual/EventHandlingiPhoneOS/multitouch_background/multitouch_background.html#//apple_ref/doc/uid/TP40009541-CH5-SW9