I am an iOS programming newbie (reading several books on the subject simultaneously) and I would like to develop (yet another) word game.
Coming from a Flash/Flex programming background, I first expected the tiles to be best bundled as GIF or PNG assets.
But then I took a look (using iFunbox) at the popular word games (Lexulous, Wordament, Words with Friends, Ruzzle, ...) and none of them does that:
That is, none of the many apps I've looked at includes its letter pieces as images.
So my question is what would be the recommended approach (in Xcode 5 and with no additional SDKs like Cocos2d or Sparrow) to create a letter tile for a word game?
On a tile I'd like to have
the center-aligned letter (obviously!),
then an index in a corner displaying the letter value
and then another index for a total word value
When touched I'd like to make the tile a bit larger and add a shadow underneath it.
Should my tile class be a UIView (can UIViews be dragged around, grow, and have shadows)?
Should I use a .nib file for the tile?
For dragging I have found a good suggestion already: Dragging an UIView inside UIScrollView
But what I would really like to know here is whether a UIView would make a good tile (performance- and feature-wise), or whether I should go for another base class (like maybe some shapes)?
Yes, UIView would be a good container.
Create a subclass of UIView, say TileView; put in some labels and an image view, place a button over it, and hook the button's UIControlEventTouchDown, UIControlEventTouchUpInside, and UIControlEventTouchDragInside events up to handlers that move the view around in its parent. Put the TileView in some container (your view controller's view, or wherever you would like it to be) and that is basically it.
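A rough layout sketch of such a subclass (the class name, label names, and metrics below are placeholders, not anything prescribed):

#import <UIKit/UIKit.h>

@interface TileView : UIView
@property (nonatomic, strong) UILabel *letterLabel;   // the big, centered letter
@property (nonatomic, strong) UILabel *valueLabel;    // small letter-value index in a corner
@property (nonatomic, strong) UIButton *touchButton;  // transparent button that receives the touches
@end

@implementation TileView
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        _letterLabel = [[UILabel alloc] initWithFrame:self.bounds];
        _letterLabel.textAlignment = NSTextAlignmentCenter;
        [self addSubview:_letterLabel];

        _valueLabel = [[UILabel alloc] initWithFrame:
            CGRectMake(CGRectGetWidth(self.bounds) - 18.0,
                       CGRectGetHeight(self.bounds) - 16.0, 16.0, 14.0)];
        _valueLabel.font = [UIFont systemFontOfSize:10.0];
        [self addSubview:_valueLabel];

        _touchButton = [UIButton buttonWithType:UIButtonTypeCustom];
        _touchButton.frame = self.bounds;
        [self addSubview:_touchButton];
        // Hook the button's UIControlEventTouchDown / TouchDragInside /
        // TouchUpInside events up to the handlers shown below.
    }
    return self;
}
@end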
// dx, dy and oldPoint are instance variables of the class.
- (void)btnPressed:(id)sender withEvent:(UIEvent *)event
{
    // Remember where the drag starts.
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    dx = 0;
    dy = 0;
    oldPoint = point;
}
- (void)btnReleased:(id)sender
{
    // stop moving code
}
- (void)btnDragged:(id)sender withEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    dx = point.x - oldPoint.x;
    dy = point.y - oldPoint.y;
    oldPoint = point;
    // Move the tile view's center by the distance the finger travelled since the last event.
    CGPoint ptCenter = self.view.center;
    ptCenter.x = ptCenter.x + dx;
    ptCenter.y = ptCenter.y + dy;
    self.view.center = ptCenter;
}
In the snippet above, self.view is your TileView; if you put the handlers inside the TileView subclass itself, just use self instead.
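As for the "grow and have shadows" part of the question: a UIView can do both through its layer and its transform. A rough sketch (liftTile:/dropTile: are made-up helpers, and the scale and shadow values are arbitrary) that the touch-down and touch-up handlers above could call:

#import <QuartzCore/QuartzCore.h>   // for the layer's shadow properties

// Call from the touch-down handler: enlarge the tile and add a shadow under it.
- (void)liftTile:(UIView *)tile {
    tile.layer.shadowColor = [UIColor blackColor].CGColor;
    tile.layer.shadowOffset = CGSizeMake(0.0, 3.0);
    tile.layer.shadowRadius = 4.0;
    tile.layer.shadowOpacity = 0.4;
    [UIView animateWithDuration:0.15 animations:^{
        tile.transform = CGAffineTransformMakeScale(1.15, 1.15);
    }];
}

// Call from the touch-up handler: drop the tile back to its normal size.
- (void)dropTile:(UIView *)tile {
    [UIView animateWithDuration:0.15 animations:^{
        tile.transform = CGAffineTransformIdentity;
    } completion:^(BOOL finished) {
        tile.layer.shadowOpacity = 0.0;
    }];
}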
So I'm making a minesweeper clone for iOS, and I have an array of 135 UIButtons (the minesweeper board). It looks great and theoretically should work great. But I was having trouble detecting which button was being hit. I tried working around the problem by using this code:
UITouch *touched = [[event allTouches] anyObject];
CGPoint location = [touched locationInView:touched.view];
NSLog(@"x=%.2f y=%.2f", location.x, location.y);
int pointX = location.x;
int pointY = location.y;
My goal was to grab the coordinates of the touch and then use some basic math to figure out which button was being pressed. However, it doesn't work. At all. No button is pressed, no function runs, essentially nothing happens. I'm left with a minesweeper board that you can't interact with. Any ideas?
Assign a separate number to the tag of each button. Use the button's target-action mechanism, not the UITouch code. When you get a button press, query the tag.
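A minimal sketch of that approach (boardButtons and cellPressed: are assumed names for your existing button array and for the shared action method):

// e.g. call this from viewDidLoad, after the board has been built.
- (void)wireUpBoard {
    [self.boardButtons enumerateObjectsUsingBlock:^(UIButton *button, NSUInteger idx, BOOL *stop) {
        button.tag = idx + 1;   // tags 1...135 (0 is the default tag, so start at 1)
        [button addTarget:self
                   action:@selector(cellPressed:)
         forControlEvents:UIControlEventTouchUpInside];
    }];
}

// One action method for every button; the tag tells you which cell was hit.
- (void)cellPressed:(UIButton *)sender {
    NSInteger index = sender.tag - 1;
    NSLog(@"cell %ld pressed", (long)index);
    // ...reveal the cell, check for a mine, etc.
}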
You could subclass the buttons and then, inside that subclass, program what needs to happen when a touch occurs in a button.
The UIButton * can be accessed by calling:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
on self (a UIView * I imagine). So I suppose you can set the button to the pushed state, and when touchesEnded: is called, set it back.
I want to make a custom gesture recognizer that uses three fingers, similar to an unpinch gesture recognizer.
All I need is an idea about how to recognize it.
My gesture needs to recognize three fingers with three directions. For example:
I hope the images make sense. I need to make it flexible for any three opposite directions. Thanks in advance. Any help would be appreciated.
I am aware of the subclass methods, and I have already created single-finger custom gestures like a semicircle and a full circle. I need a coding idea for how to handle this one.
You need to create a UIGestureRecognizer subclass of your own (let's call it DRThreeFingerPinchGestureRecognizer) and implement in it:
– touchesBegan:withEvent:
– touchesMoved:withEvent:
– touchesEnded:withEvent:
– touchesCancelled:withEvent:
These methods are called when touches are accepted by the system, and possibly before they are sent to the view itself (depending on how you set up the gesture recognizer). Each of these methods gives you a set of touches, for which you can check the current location in your view and the previous location. Since a pinch gesture is relatively simple, this information is enough for you to test whether the user is performing a pinch, and to fail the test (UIGestureRecognizerStateFailed) if not. If the state has not been set to failed by the time – touchesEnded:withEvent: is called, you can recognize the gesture.
I say pinch gestures are simple because you can easily track each touch and see how it moves relative to the other touches and to itself. If an angle threshold is exceeded, you fail the test; otherwise you allow it to continue. If the touches do not move at distinct angles to each other, you fail the test. You will have to play with which vector angles are acceptable, because 120 degrees is not optimal for the three most common fingers (thumb + index + middle). You may just want to check that the vectors are not colliding.
Make sure to read the UIGestureRecognizer documentation for an in-depth look at the various methods, as well as subclassing notes.
Quick note for future readers: the way you do an unpinch/pinch with three fingers is to add the distances ab, bc, and ac.
However if your graphics package just happens to have on hand "area of a triangle" - simply use that. ("It saves one whole line of code!")
Hope it helps.
All you need to do is track:
the distance between the three fingers!
Simply add up "every" permutation
(Well, there are three: ab, ac and cb. Just add those; that's all there is to it!)
When that value, say, triples from the start value, that's an "outwards triple unpinch".
... amazingly it's that simple.
Angles are irrelevant.
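A rough sketch of that in a UIGestureRecognizer subclass (ThreeFingerUnpinchRecognizer, PairwiseDistanceSum, and startSum are made-up names; startSum would be recorded the same way in touchesBegan:, and the "triples" threshold is just the example above):

#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>
#include <math.h>

// Sum of the three pairwise finger distances (ab + ac + bc).
static CGFloat PairwiseDistanceSum(CGPoint a, CGPoint b, CGPoint c) {
    return hypot(a.x - b.x, a.y - b.y)
         + hypot(a.x - c.x, a.y - c.y)
         + hypot(b.x - c.x, b.y - c.y);
}

@interface ThreeFingerUnpinchRecognizer : UIGestureRecognizer
@end

@implementation ThreeFingerUnpinchRecognizer {
    CGFloat startSum;   // recorded in touchesBegan: once all three fingers are down
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSArray *all = [[event touchesForGestureRecognizer:self] allObjects];
    if (all.count != 3) return;
    CGFloat sum = PairwiseDistanceSum([all[0] locationInView:self.view],
                                      [all[1] locationInView:self.view],
                                      [all[2] locationInView:self.view]);
    // When the spread has, say, tripled from where it started, call it an unpinch.
    if (startSum > 0 && sum > 3.0 * startSum) {
        self.state = UIGestureRecognizerStateRecognized;
    }
}
@end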
Footnote if you want to be a smartass: this applies to any pinch/unpinch gesture, 2, 3 fingers, whatever:
track the derivative of the sum-distance (I mean to say, the velocity) rather than the distance. (Bizarrely this is often EASIER TO DO! because it is stateless! you need only look at the previous frame!!!!)
So in other words, the gesture is triggered when the expansion/contraction VELOCITY of the fingers reaches a certain value, rather than a multiple of the start value.
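A hedged sketch of that velocity variant, reusing the PairwiseDistanceSum helper from the sketch above (prevSum, prevTimestamp, and the threshold of 2 screen-widths per second are assumptions, not anything Apple documents):

// Velocity variant of the touchesMoved: above. prevSum and prevTimestamp are
// assumed CGFloat / NSTimeInterval instance variables, reset in touchesBegan:.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSArray *all = [[event touchesForGestureRecognizer:self] allObjects];
    if (all.count != 3) return;
    CGFloat sum = PairwiseDistanceSum([all[0] locationInView:self.view],
                                      [all[1] locationInView:self.view],
                                      [all[2] locationInView:self.view]);
    NSTimeInterval dt = event.timestamp - prevTimestamp;
    if (prevTimestamp > 0 && dt > 0) {
        // Spread speed, normalised to screen widths per second
        // (see the note on measuring "on the glass" below).
        CGFloat speed = (sum - prevSum) / dt / self.view.bounds.size.width;
        if (speed > 2.0) {
            self.state = UIGestureRecognizerStateRecognized;
        }
    }
    prevSum = sum;
    prevTimestamp = event.timestamp;
}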
More interesting footnote!
However there is a subtle problem here: whenever you do anything like this (any platform) you have to be careful to measure "on the glass".
If you are just doing distance (i.e., my first solution above), of course everything cancels out and you can just say "if it doubles" (in pixels, points, whatever). BUT if you are using velocity as part of the calculation in any gesture, then somewhat surprisingly you literally have to find the velocity in metres per second in the real world, which sounds weird at first! Of course you can't do this exactly (particularly with Android), because glass sizes vary somewhat, but you have to get close to it. Here is a long post discussing this problem: http://answers.unity3d.com/questions/292333/how-to-calculate-swipe-speed-on-ios.html In practice you usually have to make do with "screen-widths per second", which is pretty good. (But this may be vastly different on phones, large tablets, and these days "Surface"-type things. On a whole iMac screen, 0.1 screen-widths per second may be fast, but on an iPhone that is nothing, not a gesture.)
Final footnote! I simply don't know if Apple uses "distance multiple" or "glass velocity" in their gesture recognition, or, as is also likely, some subtle mix. I've never read an article from them commenting on it.
Another footnote! -- if for whatever reason you do want to find the "center" of the triangle (I mean the center of the three fingers): this is a well-travelled problem for game programmers because, after all, all 3D meshes are triangles.
Fortunately it's trivial to find the center of three points, just add the three vectors and divide by three! (Confusingly this even works in higher dimensions!!)
You can see endless posts on this issue...
http://answers.unity3d.com/questions/445442/calculate-uv-at-center-of-triangle.html
http://answers.unity3d.com/questions/424950/mid-point-of-a-triangle.html
Conceivably, if you were incredibly anal, you would want the "barycenter" which is more like the center of mass, just google if you want that.
I think tracking angles is leading you down the wrong path. I think it's likely a more flexible and intuitive gesture if you don't constrain it based on the angles between the fingers. It'll be less error-prone if you just treat it as a three-fingered pinch regardless of how the fingers move relative to each other. This is what I'd do:
if(presses != 3) {
state = UIGestureRecognizerStateCancelled;
return;
}
// After three fingers are detected, begin tracking the gesture.
state = UIGestureRecognizerStateBegan;
central_point_x = (point1.x + point2.x + point3.x) / 3;
central_point_y = (point1.y + point2.y + point3.y) / 3;
// Record the central point and the average finger distance from it.
central_point = make_point(central_point_x, central_point_y);
initial_pinch_amount = (distance_between(point1, central_point) + distance_between(point2, central_point) + distance_between(point3, central_point)) / 3;
Then on each update for touches moved:
if(presses != 3) {
state = UIGestureRecognizerStateEnded;
return;
}
// Get the new central point
central_point_x = (point1.x + point2.x + point3.x) / 3;
central_point_y = (point1.y + point2.y + point3.y) / 3;
central_point = make_point(central_point_x, central_point_y);
// Find the new average distance
pinch_amount = (distance_between(point1, central_point) + distance_between(point2, central_point) + distance_between(point3, central_point)) / 3;
// Determine the multiplicative factor between them.
difference_factor = pinch_amount / initial_pinch_amount
Then you can do whatever you want with the difference_factor. If it's greater than 1, then the pinch has moved away from the center. If it's less than one, it's moved towards the center. This will also give the user the ability to hold two fingers stationary and only move a third to perform your gesture. This will address certain accessibility issues that your users may encounter.
Also, you could always track the incremental change between touch-move events, but they won't be equally spaced in time and I suspect you'll have more trouble dealing with that.
I also apologize for the pseudo-code. If something isn't clear I can look at doing up a real example.
Here is a simple subclass of UIGestureRecognizer. It calculates the relative triangular center of the 3 points, and then the average distance from that center; the angle is not important. You then check the average distance in your gesture handler.
.h
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>
@interface UnPinchGestureRecognizer : UIGestureRecognizer
@property CGFloat averageDistanceFromCenter;
@end
.m
#import "UnPinchGestureRecognizer.h"
@implementation UnPinchGestureRecognizer
-(CGPoint)centerOf:(CGPoint)pnt1 pnt2:(CGPoint)pnt2 pnt3:(CGPoint)pnt3
{
CGPoint center;
center.x = (pnt1.x + pnt2.x + pnt3.x) / 3;
center.y = (pnt1.y + pnt2.y + pnt3.y) / 3;
return center;
}
-(CGFloat)averageDistanceFromCenter:(CGPoint)center pnt1:(CGPoint)pnt1 pnt2:(CGPoint)pnt2 pnt3:(CGPoint)pnt3
{
CGFloat distance;
distance = (sqrt(fabs(pnt1.x-center.x)*fabs(pnt1.x-center.x)+fabs(pnt1.y-center.y)*fabs(pnt1.y-center.y))+
sqrt(fabs(pnt2.x-center.x)*fabs(pnt2.x-center.x)+fabs(pnt2.y-center.y)*fabs(pnt2.y-center.y))+
sqrt(fabs(pnt3.x-center.x)*fabs(pnt3.x-center.x)+fabs(pnt3.y-center.y)*fabs(pnt3.y-center.y)))/3;
return distance;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
if ([touches count] == 3) {
[super touchesBegan:touches withEvent:event];
NSArray *touchObjects = [touches allObjects];
CGPoint pnt1 = [[touchObjects objectAtIndex:0] locationInView:self.view];
CGPoint pnt2 = [[touchObjects objectAtIndex:1] locationInView:self.view];
CGPoint pnt3 = [[touchObjects objectAtIndex:2] locationInView:self.view];
CGPoint center = [self centerOf:pnt1 pnt2:pnt2 pnt3:pnt3];
self.averageDistanceFromCenter = [self averageDistanceFromCenter:center pnt1:pnt1 pnt2:pnt2 pnt3:pnt3];
self.state = UIGestureRecognizerStateBegan;
}
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
if ([touches count] == 3)
{
NSArray *touchObjects = [touches allObjects];
CGPoint pnt1 = [[touchObjects objectAtIndex:0] locationInView:self.view];
CGPoint pnt2 = [[touchObjects objectAtIndex:1] locationInView:self.view];
CGPoint pnt3 = [[touchObjects objectAtIndex:2] locationInView:self.view];
CGPoint center = [self centerOf:pnt1 pnt2:pnt2 pnt3:pnt3];
self.averageDistanceFromCenter = [self averageDistanceFromCenter:center pnt1:pnt1 pnt2:pnt2 pnt3:pnt3];
self.state = UIGestureRecognizerStateChanged;
return;
}
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
[super touchesEnded:touches withEvent:event];
self.state = UIGestureRecognizerStateEnded;
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
[super touchesCancelled:touches withEvent:event];
self.state = UIGestureRecognizerStateFailed;
}
@end
Implementation of the gesture handler: I require a maximum average distance to start and a minimum to end; you can also check during the Changed state:
-(IBAction)handleUnPinch:(UnPinchGestureRecognizer *)sender
{
switch (sender.state) {
case UIGestureRecognizerStateBegan:
//If you want a maximum starting distance
self.validPinch = (sender.averageDistanceFromCenter<75);
break;
case UIGestureRecognizerStateEnded:
//Minimum distance from relative center
if (self.validPinch && sender.averageDistanceFromCenter >=150) {
NSLog(@"successful unpinch");
}
break;
default:
break;
}
}
How would I create a program so that a dot starts in the center, and when I click the screen the dot follows where I clicked? Not as in it teleports to it; I mean it changes its coordinates towards it slightly with every click. I get how I could do it in theory, as in something like
if (mouseIsClicked) {
[mouseX moveX];
[mouseY moveY];
}
And make the class that mouseX and mouseY belong to have some methods to move closer to where the mouse is, but I just don't know the specifics to actually make it happen. Heck, I don't even know how to generate a dot in the first place! None of the guides are helping at all. I really want to learn this language, though. I've been sitting at my Mac messing around trying to get anything to work, but nothing works anywhere near how I want it to.
Thanks for helping a total newbie like me.
If you are going to subclass UIView, you can use the touchesBegan/touchesMoved/touchesEnded methods to accomplish this. Something like:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint p = [touch locationInView:self];
// slightly update the location of your object using p.x and p.y
[self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect {
// draw your object with updated coordinates
}
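Putting those two pieces together, a fuller sketch might look like this (DotView and dotCenter are made-up names; the 0.1 step factor is arbitrary):

#import <UIKit/UIKit.h>

@interface DotView : UIView
@property (nonatomic) CGPoint dotCenter;   // start this at the middle of the bounds
@end

@implementation DotView
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // Move the dot a tenth of the way toward the finger on every update.
    self.dotCenter = CGPointMake(self.dotCenter.x + (p.x - self.dotCenter.x) * 0.1,
                                 self.dotCenter.y + (p.y - self.dotCenter.y) * 0.1);
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Draw the dot at its current coordinates.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectMake(self.dotCenter.x - 5.0,
                                               self.dotCenter.y - 5.0, 10.0, 10.0));
}
@end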
You can create a dot and move it around based on taps all within your UIViewController subclass.
Make your dot by creating a UIView configured to draw the way you want - look into CALayer and set dotview.layer.cornerRadius to make it round (alternately you can make a UIView subclass that overrides drawRect: to make the right CoreGraphics calls to draw what you want). Set dotview.center to position it.
Create a UITapGestureRecognizer with an action method in your view controller that updates dotview.center as desired. If you want it animated, simply set the property within a view animation call & animation block like this:
[UIView animateWithDuration:0.3 animations:^{
dotview.center = newposition;
}];
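For example (dotview and handleTap: are assumed names; the 0.25 step factor, i.e. how far the dot moves toward each tap, is arbitrary):

// In the view controller; dotview is an assumed property holding the dot view.
- (void)viewDidLoad {
    [super viewDidLoad];
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleTap:)];
    [self.view addGestureRecognizer:tap];
}

// Nudge the dot a quarter of the way toward each tap, animated.
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint tapPoint = [recognizer locationInView:self.view];
    CGPoint current  = self.dotview.center;
    CGPoint newposition = CGPointMake(current.x + (tapPoint.x - current.x) * 0.25,
                                      current.y + (tapPoint.y - current.y) * 0.25);
    [UIView animateWithDuration:0.3 animations:^{
        self.dotview.center = newposition;
    }];
}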
You can download the sample code here that will show you general iOS gestures. In particular it has a sample that shows how to drag and drop UIViews and how to swipe them around. The sample code includes both driven and fire-and-forget animations. Check it out; it's commented, and I'd be happy to answer any specific questions you have after reviewing the code.
Generating a simple circle
// At the top of the implementation file
#import <QuartzCore/QuartzCore.h>
// In viewDidLoad or somewhere
UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(32.0, 32.0, 64.0, 64.0)];
[circleView setBackgroundColor:[UIColor redColor]];
[circleView.layer setCornerRadius:32.0];
[self.view addSubview:circleView];
I would like to know how to implement an animation where the user can slide one view over another using his finger. To simplify things I will post some pictures to explain what I mean.
At first we have a view which has a settings icon (that pink thingie in the upper left corner). The view can be manipulated by any means necessary, including swiping. See picture 1.
When the user presses the settings button, the view slides away and reveals the settings view underneath. To do this I just use [UIView beginAnimations:nil context:nil], adjust the main view's frame, and commit the animations. The situation looks like this: (settings are purple, the main view is still the same colour)
Now, when the user decides he wants the main screen back, he will have to grab it and slide it back into place. How? So far I have been using a UITapGestureRecognizer which triggered a reverse animation so that the view slid back into its original place. That was messy, and the user was still able to manipulate the small section of the view that remained visible.
I want the view to follow the user's finger and slide and scale its way back into its place. The main view should not be manipulatable while moving. To understand what I mean you can check the Yahoo Weather app, which has a similar thing going on - except that my view would also shrink a bit vertically. The third picture shows things "in motion":
Please provide links or code that can help me accomplish this.
Cheers, Jan.
Look at this code:
MMDrawerController
MSMatrixController
And you can try this
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[touches allObjects] objectAtIndex:0];
    CGPoint touchPt = [touch locationInView:self];
    // beginPt is a CGPoint instance variable that was set in touchesBegan:
    float deltaX = touchPt.x - beginPt.x;
    float deltaY = touchPt.y - beginPt.y;
    // Slide the view by the distance the finger has moved since the last event.
    CGPoint toPt = yourMovedView.center;
    toPt.x += deltaX;
    toPt.y += deltaY;
    yourMovedView.center = toPt;
    beginPt = touchPt;
}
I know there are better ways to achieve drawing tools using CoreGraphics and what-not, I'm using this UIView method because I want to achieve a pixelated effect and I have some other custom stuff in mind that would be difficult to do with just the built in CG code.
I have a grid of 400 UIViews to create my pixelated canvas...
The user selects a paint color and begins to drag their finger over the canvas of 400 UIViews, as their finger hits a UIView the color of that UIView (aka pixel) will change to the color the user has selected...
Right now I have coded the whole thing and everything is working but it is laggy (because I am using touchesMoved) and many points are skipped...
I am currently using touchesMoved and am moving a UIView to the finger's location (I am calling this the fingerLocationRect).
When I update a pixel's color (1 of the 400 miniature UIViews), I do so by checking whether the fingerLocationRect intersects any one of my UIView rects. If it does, I update the color.
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint currentPosition = [touch locationInView:self.view];
fingerLocationRect.center = CGPointMake(currentPosition.x, currentPosition.y);
int tagID = 1;
while (tagID <= 400) {
if(CGRectIntersectsRect(fingerLocationRect.frame, [drawingView viewWithTag:tagID].frame)) {
[drawingView viewWithTag:tagID].backgroundColor = [UIColor colorWithRed:redVal/255.f green:greenVal/255.f blue:blueVal/255.f alpha:1.f];
}
tagID ++;
}
}
The problem is that touchesMoved is not called very often... so if I drag my finger across the canvas, only about 5 of the pixels change instead of the 40 I actually touched.
I'm wondering if using CGRectIntersectsRect with touchesMoved is the best method to detect when the user slides over a UIView... Right now it is not updating nearly fast enough and I am getting the result on the left instead of the result on the right. (I only achieve the effect on the right if I move my finger VERY slowly...)
All that said, is there a method for detecting when a user slides over a UIView other than using touchesMoved? If not, any ideas on how to grab all of the UIViews in between the touch locations returned by touchesMoved? They all have tags of 1-400 (the first row is 1-10, the second row is 11-20, etc.)
P.S. Would a panGestureRecognizer update more frequently than touchesMoved?
To get an idea of what the 400 UIViews look like, here they all are with arc4random-based colors:
You are looping through 400 views every time touchesMoved: is called. You just need to do a little math to determine which view in the grid is being touched. Assuming that drawingView is the "canvas" view that contains all 400 "pixels" and the tags are ordered from the top-left to the bottom-right then:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint point = [touch locationInView:drawingView];
NSInteger column = floorf(20 * point.x / drawingView.bounds.size.width); //20 is the number of column in the canvas
NSInteger row = floorf(20 * point.y / drawingView.bounds.size.height); //20 is the number of rows in the canvas
[drawingView viewWithTag:(column + 20*row + 1)].backgroundColor = [UIColor colorWithRed:redVal/255.f green:greenVal/255.f blue:blueVal/255.f alpha:1.f]; // tags run 1-400, hence the +1
}
This performance improvement will allow Cocoa to call touchesMoved: more frequently.