I needed to convert my code to ARC. I have a CCArray that I use to draw a path; I fill it with values from a different class.
The problem is that after converting to ARC, the CCArray always returns null.
I cannot see what I am doing wrong.
Ladybug.h
@interface Ladybug : CCSprite <CCTargetedTouchDelegate> {
CCArray *linePathPosition;
}
@property (nonatomic, strong) CCArray *linePathPosition;
@end
Ladybug.m
@synthesize linePathPosition;
-(id) init
{
if( (self=[super init] )) {
self.linePathPosition = [[CCArray alloc] init];
}
return self;
}
-(void) updatePosition:(CGPoint) position
{
[self.linePathPosition addObject:[NSValue valueWithCGPoint:position]];
NSLog(@"line path %@", linePathPosition);
}
-(void) breakMoveLadyBug
{
[self.linePathPosition removeAllObjects];
}
In the main .m:
- (void)ccTouchMoved:(UITouch *)touch withEvent:(UIEvent *)event
{
Ladybug *ladybug1 = (Ladybug *)[self getChildByTag:99];
CCMotionStreak* streak = (CCMotionStreak *)[self getChildByTag:999];
CGPoint touchLocation = [touch locationInView: [touch view]];
CGPoint curPosition = [[CCDirector sharedDirector] convertToGL:touchLocation];
if (ladybug1.isSelected) {
streak.position = curPosition;
[ladybug1 updatePosition:curPosition];
NSLog(@"Cur position %@", NSStringFromCGPoint(curPosition));
if (!ladybug1.isMoving) {
[ladybug1 startMoveLadyBug];
}
}
}
Log:
Cur position {331, 110}
line path (null)
What am I doing wrong? What is the proper way to define and init CCArray with ARC?
This isn't a problem with ARC or CCArray. The problem lies in your understanding of Objective-C.
To solve your problem, wherever you are doing the line [self addChild:ladybug1 z:999 tag:99], do this instead:
Ladybug *ladybug1 = [[Ladybug alloc] init];
[self addChild:ladybug1 z:0 tag:99];
Note: this is done outside of the ccTouchMoved function, in whatever function adds the ladybug objects to the scene/layer.
Then your ccTouchMoved function will have an allocated ladybug object to work with.
The problem is that if you never call the init function on a ladybug object, the CCArray will never get allocated, so everything will be null. You will then try to add points to a null array, and that will do nothing. You can do all the tag handling you want (I personally have no idea why you are doing that), but you need to specify the tag for the ladybug object. So add it to the CCLayer/CCScene (whatever you are using) and set its tag; then you can use the getChildByTag function, as in the sketch below.
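For reference, a minimal sketch of what that setup might look like in the layer's init (the class names and tag values come from the question; everything else is an assumption, not the asker's actual code):

-(id) init
{
    if ((self = [super init]))
    {
        // Ladybug's init runs here, so linePathPosition gets allocated.
        Ladybug *ladybug1 = [[Ladybug alloc] init];
        [self addChild:ladybug1 z:0 tag:99];   // tag 99 matches getChildByTag:99 in ccTouchMoved

        // Create and add the CCMotionStreak here too, with tag 999,
        // so getChildByTag:999 in ccTouchMoved returns it.
    }
    return self;
}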
Related
I have a transparent UIScrollView on top of another view.
The scroll view has content - text and images - used to display info.
The view behind it has some images that the user should be able to tap on, and the content over them is scrollable using the mentioned scroll view.
I want to be able to use the scroll view normally (no zoom though), but when the scroll view is not actually scrolling, to let the tap events through to the view behind it.
Using a combination of the touch and scroll events I can determine when to let the taps through, but the view behind it still does not receive them.
I have tried using something like this for all touch events:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
NSLog(@"touchesBegan %@", (_isScrolling ? @"YES" : @"NO"));
if(!_isScrolling)
{
NSLog(@"sending");
[self.viewBehind touchesBegan:touches withEvent:event];
}
}
But it does not work.
Also, in my case I cannot really apply the hitTest: and pointInside: solutions, given the use case that I have.
First off, UIScrollView only inherently recognizes UIPanGestureRecognizer and UIPinchGestureRecognizer, so you need to add a UITapGestureRecognizer to the UIScrollView so it can recognize tapping gestures as well:
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
// To prevent the pan gesture of the UIScrollView from swallowing up the
// touch event
tap.cancelsTouchesInView = NO;
[scrollView addGestureRecognizer:tap];
Then once you receive that tap gesture and the handleTap: action is triggered, you can use locationInView: to detect whether the tap gesture's position is in fact within the frame of one of the images below your scroll view, for example:
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
// First get the tap gesture recognizers's location in the entire
// view's window
CGPoint tapPoint = [recognizer locationInView:self.view];
// Then see if it falls within one of your below images' frames
for (UIImageView* image in relevantImages) {
// If the image's coordinate system isn't already equivalent to
// self.view, convert it so it has the same coordinate system
// as the tap.
CGRect imageFrameInSuperview = [image.superview convertRect:image.frame toView:self.view];
// If the tap in fact lies inside the image bounds,
// perform the appropriate action.
if (CGRectContainsPoint(imageFrameInSuperview, tapPoint)) {
// Perhaps call a method here to react to the image tap
[self reactToImageTap:image];
break;
}
}
}
This way, the above code is only performed if a tap gesture is recognized, and your program only reacts to a tap on the scroll view if the tap location falls within an image; otherwise, you can just scroll your UIScrollView as usual.
Here I present my complete solution, which:
Forwards touches directly to views instead of firing a control event.
Lets you specify which classes touches should be forwarded to.
Lets you specify which views to check when deciding whether a forward is needed.
Here is the interface:
/**
* This subclass of UIScrollView allows views at a deeper Z index to react when touched, even if the scroll view instance is in front of them.
**/
@interface MJForwardingTouchesScrollView : UIScrollView
/**
* Set of Class objects. The scroll view will let events pass through if the initial tap is not over a view of the specified classes.
**/
@property (nonatomic, strong) NSSet <Class> *forwardsTouchesToClasses;
/**
* Optional array of underlying views against which to test whether touches should be forwarded. Default is nil.
* @discussion By default the scroll view will attempt to forward to views located in the same self.superview.subviews array. However, by providing specific views in this property, the scroll view subclass will also check among them.
**/
@property (nonatomic, strong) NSArray <__kindof UIView*> *underlyingViews;
@end
And the Implementation:
#import "MJForwardingTouchesScrollView.h"
#import "UIView+Additions.h"
@implementation MJForwardingTouchesScrollView
- (instancetype)initWithCoder:(NSCoder *)aDecoder
{
self = [super initWithCoder:aDecoder];
if (self != nil)
{
_forwardsTouchesToClasses = [NSSet setWithArray:@[UIControl.class]];
}
return self;
}
- (instancetype)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self != nil)
{
_forwardsTouchesToClasses = [NSSet setWithArray:@[UIControl.class]];
}
return self;
}
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
BOOL pointInside = [self mjz_mustCapturePoint:point withEvent:event];
if (!pointInside)
return NO;
return [super pointInside:point withEvent:event];
}
#pragma mark Private Methods
- (BOOL)mjz_mustCapturePoint:(CGPoint)point withEvent:(UIEvent*)event
{
if (![self mjz_mustCapturePoint:point withEvent:event view:self.superview])
return NO;
__block BOOL mustCapturePoint = YES;
[_underlyingViews enumerateObjectsUsingBlock:^(__kindof UIView * _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
if (![self mjz_mustCapturePoint:point withEvent:event view:obj])
{
mustCapturePoint = NO;
*stop = YES;
}
}];
return mustCapturePoint;
}
- (BOOL)mjz_mustCapturePoint:(CGPoint)point withEvent:(UIEvent *)event view:(UIView*)view
{
CGPoint tapPoint = [self convertPoint:point toView:view];
__block BOOL mustCapturePoint = YES;
[view add_enumerateSubviewsPassingTest:^BOOL(UIView * _Nonnull testView) {
BOOL forwardTouches = [self mjz_forwardTouchesToClass:testView.class];
return forwardTouches;
} objects:^(UIView * _Nonnull testView, BOOL * _Nullable stop) {
CGRect imageFrameInSuperview = [testView.superview convertRect:testView.frame toView:view];
if (CGRectContainsPoint(imageFrameInSuperview, tapPoint))
{
mustCapturePoint = NO;
*stop = YES;
}
}];
return mustCapturePoint;
}
- (BOOL)mjz_forwardTouchesToClass:(Class)class
{
while ([class isSubclassOfClass:NSObject.class])
{
if ([_forwardsTouchesToClasses containsObject:class])
return YES;
class = [class superclass];
}
return NO;
}
@end
The only extra code used is inside the UIView+Additions.h category, which contains the following method:
- (void)add_enumerateSubviewsPassingTest:(BOOL (^_Nonnull)(UIView * _Nonnull view))testBlock
objects:(void (^)(id _Nonnull obj, BOOL * _Nullable stop))block
{
if (!block)
return;
NSMutableArray *array = [NSMutableArray array];
[array addObject:self];
while (array.count > 0)
{
UIView *view = [array firstObject];
[array removeObjectAtIndex:0];
if (view != self && testBlock(view))
{
BOOL stop = NO;
block(view, &stop);
if (stop)
return;
}
[array addObjectsFromArray:view.subviews];
}
}
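A hypothetical usage sketch (backgroundView and the UIButton filter are assumptions, not part of the original code):

// Let taps over UIButtons that sit behind the scroll view fall through to them.
MJForwardingTouchesScrollView *scrollView =
    [[MJForwardingTouchesScrollView alloc] initWithFrame:self.view.bounds];
scrollView.forwardsTouchesToClasses = [NSSet setWithArray:@[UIButton.class]];
scrollView.underlyingViews = @[self.backgroundView];   // assumed property for the view behind
[self.view addSubview:scrollView];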
Thanks
The problem is that your UIScrollView is consuming the event. To pass it through, you would have to disable user interaction on it, but then it wouldn't scroll. If you have the touch's location, however, you can calculate where it would fall on the underlying view using the convertPoint:toView: method, and call a method on that view, passing on the CGPoint. From there, you can calculate which image was tapped.
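A rough sketch of that idea (underlyingView, handleTapAtPoint: and the _isScrolling flag are assumed names following the question's code, not an existing API):

// Inside a UIScrollView subclass: forward the tap location to the view behind
// only when the scroll view is not actually scrolling.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];
    if (_isScrolling)
        return;                                          // a scroll, not a tap: keep it

    UITouch *touch = [touches anyObject];
    CGPoint pointInSelf = [touch locationInView:self];
    CGPoint pointBehind = [self convertPoint:pointInSelf toView:self.underlyingView];
    [self.underlyingView handleTapAtPoint:pointBehind];  // the view behind decides which image was hit
}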
I have a CCNode that contains multiple CCSprite children.
I would like to receive touch events in my parent CCNode if any of the children have been touched.
This behaviour seems like it should be supported; I may be missing something.
My solution is to set userInteractionEnabled = YES on all children and bubble the event up to the parent.
I do this by subclassing CCSprite and overriding this method:
- (void) touchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
[super touchBegan:touch withEvent:event];
}
I am wondering if there is a more elegant, simple and generic way of accomplishing the same behaviour?
You could override hitTestWithWorldPos: of your 'containing' node, either calling hitTestWithWorldPos: on specific children or iterating through all children as you see fit. Perhaps something like this:
-(BOOL) hitTestWithWorldPos:(CGPoint)pos
{
BOOL hit = NO;
hit = [super hitTestWithWorldPos:pos];
for(CCNode *child in self.children)
{
hit |= [child hitTestWithWorldPos:pos];
}
return hit;
}
Edit: just to be clear, you would then only need to set userInteractionEnabled on the container, and process the touch only in the touch events of the containing node, as in the sketch below.
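For example, a minimal sketch of the containing node (cocos2d 3.x assumed; locationInNode: comes from cocos2d's UITouch category):

// Only the container enables touch; the overridden hitTestWithWorldPos: above
// makes a touch on any child reach this node's touchBegan:.
- (id)init
{
    if ((self = [super init]))
    {
        self.userInteractionEnabled = YES;   // no need to enable it on every child
    }
    return self;
}

- (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    CGPoint location = [touch locationInNode:self];
    NSLog(@"container (or one of its children) touched at %@", NSStringFromCGPoint(location));
}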
Edit 2:
I thought about it a bit more, and here's a quick category you can add that enables a quick hit test for all children of a node recursively.
CCNode+CCNode_RecursiveTouch.h
#import "CCNode.h"
@interface CCNode (CCNode_RecursiveTouch)
-(BOOL) hitTestWithWorldPos:(CGPoint)worldPos forNodeTree:(id)parentNode shouldIncludeParentNode:(BOOL)includeParent;
@end
CCNode+CCNode_RecursiveTouch.m
#import "CCNode+CCNode_RecursiveTouch.h"
@implementation CCNode (CCNode_RecursiveTouch)
-(BOOL) hitTestWithWorldPos:(CGPoint)worldPos forNodeTree:(id)parentNode shouldIncludeParentNode:(BOOL)includeParent
{
BOOL hit = NO;
if(includeParent) {hit |= [parentNode hitTestWithWorldPos:worldPos];}
for( CCNode *cnode in [parentNode children] )
{
hit |= [cnode hitTestWithWorldPos:worldPos];
if (cnode.children.count)
{
hit |= [self hitTestWithWorldPos:worldPos forNodeTree:cnode shouldIncludeParentNode:NO]; // on recurse, don't process parent again
}
}
return hit;
}
@end
Usage would just be: in the containing class, override hitTestWithWorldPos: like this:
-(BOOL) hitTestWithWorldPos:(CGPoint)pos
{
BOOL hit = NO;
hit = [self hitTestWithWorldPos:pos forNodeTree:self shouldIncludeParentNode:NO];
return hit;
}
and of course, don't forget to include the category header.
-(void) touchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
//Do whatever you like...
//Bubble the event up to the next responder...
[[[CCDirector sharedDirector] responderManager] discardCurrentEvent];
}
I have the regular OpenGL / EAGL setup going on:
@interface EAGLView : UIView {
@public
EAGLContext* context;
}
@property (nonatomic, retain) EAGLContext* context;
@end
@implementation EAGLView
@synthesize context;
+ (Class)layerClass {
return [CAEAGLLayer class];
}
@end
@interface EAGLViewController : UIViewController {
@public
EAGLView* glView;
}
@property (nonatomic, retain) EAGLView* glView;
@end
@implementation EAGLViewController
@synthesize glView;
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
for (UITouch* touch in touches) {
CGPoint location = [touch locationInView:glView];
int index;
for (index = 0; index < gCONST_CURSOR_COUNT; ++index) {
if (sCursor[index] == NULL) {
sCursor[index] = touch;
break;
}
}
}
[super touchesBegan:touches withEvent:event];
}
That implementation includes corresponding touchesEnded/Canceled/Moved as well. The code fully works and tracks well.
I also make sure that I'm giving proper values for everything:
sViewController = [EAGLViewController alloc];
CGRect rect = [[UIScreen mainScreen] applicationFrame];
sViewController.glView = [[EAGLView alloc] initWithFrame:CGRectMake(rect.origin.x, rect.origin.y, rect.size.width, rect.size.height)];
Assert(sViewController.glView);
sViewController.glView.userInteractionEnabled = YES;
sViewController.glView.multipleTouchEnabled = YES;
sViewController.glView.exclusiveTouch = YES;
It all compiles just fine, but I'm never receiving more than one UITouch. I don't mean in a single touchesBegan, but the index never goes past 0. I also set a breakpoint for the second time it enters that function, and putting two fingers on doesn't make it trigger.
If you want to detect multiple touches (and/or distinguish between a one finger, two finger etc. touch), try using a UIPanGestureRecognizer. When you set it up, you can specify the minimum and maximum number of touches. Then attach it to the view where you want to detect the touches. When you receive events from it, you can ask it how many touches it received and branch accordingly.
Here's the apple documentation:
http://developer.apple.com/library/ios/#documentation/uikit/reference/UIPanGestureRecognizer_Class/Reference/Reference.html
If you do this, you might not need to use the touchesBegan/Moved/Ended methods at all and, depending on how you set up the gesture recognizer, touchesBegan/Moved/Ended may never get called.
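For example, a sketch along those lines (handlePan: and setUpPanRecognizer are assumed names, not from the question):

// Attach a pan recognizer that accepts one or two fingers, then branch on the count.
- (void)setUpPanRecognizer {
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
                                   initWithTarget:self action:@selector(handlePan:)];
    pan.minimumNumberOfTouches = 1;
    pan.maximumNumberOfTouches = 2;
    [self.glView addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
    NSUInteger touchCount = recognizer.numberOfTouches;   // fingers currently down
    CGPoint location = [recognizer locationInView:recognizer.view];
    NSLog(@"%lu-finger pan at %@", (unsigned long)touchCount, NSStringFromCGPoint(location));
}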
Use [event allTouches] in place of touches. touches represents only the touches that have 'changed'. From the Apple docs:
If you are interested in touches that have not changed since the last phase or that are in a different phase than the touches in the passed-in set, you can find those in the event object. Figure 3-2 depicts an event object that contains touch objects. To get all of these touch objects, call the allTouches method on the event object.
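Applied to the question's touchesBegan:, that would look something like this sketch:

// Iterate every active finger on screen, not just the touches that changed in this event.
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
    for (UITouch* touch in [event allTouches]) {
        CGPoint location = [touch locationInView:glView];
        NSLog(@"touch at %@", NSStringFromCGPoint(location));
    }
    [super touchesBegan:touches withEvent:event];
}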
It seems all I was missing was this:
sViewController.view = sViewController.glView;
The application responds to touches with the following method - invoking movePlayer:
- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
[self.player stopAllActions];
UITouch *touch = [touches anyObject];
CGPoint touchLocation = [touch locationInView:touch.view];
touchLocation = [[CCDirector sharedDirector] convertToGL:touchLocation];
touchLocation = [self convertToNodeSpace:touchLocation];
CGPoint diff = ccpSub(touchLocation, self.player.position);
self.distanceToMovePlayer = sqrtf((diff.x*diff.x)+(diff.y*diff.y));
self.playerDestination = touchLocation;
[self movePlayer];
}
movePlayer is defined here. It runs the CCAction that moves the sprite to the touch.
- (void)movePlayer{
CCAction *movePlayer = [CCMoveTo actionWithDuration:self.distanceToMovePlayer/100 position:self.playerDestination];
self.playerMovement = movePlayer;
[self.player runAction:self.playerMovement];
}
I have an invisible TMX layer called meta on the TMX tile map that indicates a wall or boundary. The following method runs every frame to check it:
- (void)checkCollisions:(CGPoint)position{
CGPoint tileCoordinate = [self tileCoordForPosition:position];
int tileGID = [self.meta tileGIDAt:tileCoordinate];
if(tileGID == 49){
NSDictionary *properties = [self.meta properties];
if(properties){
NSString *collision = properties[@"Collidable"];
if(collision && [collision isEqualToString:@"True"]){
[self.player stopAction:self.playerMovement];
}
}
}
}
Whenever the sprite touches a boundary, the action stops and the sprite is simply stuck there, because the action is stopped again as soon as it restarts, since the sprite is still inside the boundary tile.
I have tried having the collision method return a boolean which is then tested before running the CCMoveTo. Is there a way to call a selector on each iteration of a CCAction - something like CCCallBlockN, but running on each frame of the action?
Well, I would probably schedule a selector for the CCAction's duration to get a per-frame callback; see the sketch below. Also, assuming you run some kind of animation on the player sprite: new in cocos2d 2.0+, you could use a CCAnimation, where you can register for a notification that is served with some user data for each frame.
From the CCAnimation.h file:
/** A CCAnimationFrameDisplayedNotification notification will be broadcasted
* when the frame is displayed with this dictionary as UserInfo. If UserInfo is nil,
* then no notification will be broadcasted. */
@property (nonatomic, readwrite, retain) NSDictionary *userInfo;
Op. cit. Not tried, YMMV.
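A sketch of the scheduled-selector variant (cocos2d 2.x assumed; checkCollisionsTick: is a hypothetical name, while checkCollisions: and movePlayer come from the question):

// Schedule a per-frame selector while the move action runs, and unschedule it
// once the movement has finished or been stopped by the boundary check.
- (void)movePlayer {
    CCAction *movePlayer = [CCMoveTo actionWithDuration:self.distanceToMovePlayer/100
                                               position:self.playerDestination];
    self.playerMovement = movePlayer;
    [self.player runAction:self.playerMovement];
    [self schedule:@selector(checkCollisionsTick:)];      // called every frame while scheduled
}

- (void)checkCollisionsTick:(ccTime)dt {
    [self checkCollisions:self.player.position];          // the question's existing boundary check
    if ([self.player numberOfRunningActions] == 0) {
        [self unschedule:_cmd];                           // movement finished or was stopped
    }
}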
I'm developing a game for iPhone using cocos2d.
In it, I need to draw a line when the user drags a finger from one point to another. As far as I know, I need to do this in the touchesMoved method, from where I can get the points.
But I don't know how to do this. Can anybody help me with this?
Kia ora. Boredom compels me to provide an answer on this topic.
Layer part (i.e. @interface GetMyTouches : CCLayer):
-(void) ccTouchesMoved:(NSSet *)inappropriateTouches withEvent:(UIEvent *)event
{
UITouch *touchMyMinge = [inappropriateTouches anyObject];
CGPoint currentTouchArea = [touchMyMinge locationInView:[touchMyMinge view]];
CGPoint lastTouchArea = [touchMyMinge previousLocationInView:[touchMyMinge view]];
// flip belly up. no one likes being entered from behind.
currentTouchArea = [[CCDirector sharedDirector] convertToGL:currentTouchArea];
lastTouchArea = [[CCDirector sharedDirector] convertToGL:lastTouchArea];
// throw to console my inappropriate touches
NSLog(@"current x=%2f,y=%2f",currentTouchArea.x, currentTouchArea.y);
NSLog(@"last x=%2f,y=%2f",lastTouchArea.x, lastTouchArea.y);
// add my touches to the naughty touch array
// (naughtyTouchArray is assumed to be an NSMutableArray ivar initialised elsewhere)
[naughtyTouchArray addObject:NSStringFromCGPoint(currentTouchArea)];
[naughtyTouchArray addObject:NSStringFromCGPoint(lastTouchArea)];
}
Node part (i.e. @interface DrawMyTouch: CCNode):
@implementation DrawMyTouch
-(id) init
{
if( (self=[super init]))
{ }
return self;
}
-(void)draw
{
glEnable(GL_LINE_SMOOTH);
for(int i = 0; i < [naughtyTouchArray count]; i+=2)
{
CGPoint start = CGPointFromString([naughtyTouchArray objectAtIndex:i]);
CGPoint end = CGPointFromString([naughtyTouchArray objectAtIndex:i+1]);
ccDrawLine(start, end);
}
}
@end
Layer part II (i.e. @interface GetMyTouches : CCLayer):
-(void) ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
DrawMyTouch *line = [DrawMyTouch node];
[self addChild: line];
}
Remember touching is easy. Knowing what you're doing while touching isn't rocket science.
Finally, if you don't understand anything I've posted ... take up baking. The world needs more chocolate cake producers.
Clarification:
No one learns from cut 'n paste ~ this code was never meant to work without caressing
If you fail to see the humour, you're in the wrong profession
Of note, I love a good chocolate cake. The world really does need more fantastic bakers. That's not an insult, that's encouragement.
"Look outside the square, to find the circle filled with the knowledge that makes life worth living" ~ Aenesidemus.
- (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *theTouch = [touches anyObject];
CGPoint touchLocation = [theTouch locationInView:[theTouch view] ];
CGFloat x = touchLocation.x;
CGFloat y = touchLocation.y;
printf("move x=%f,y=%f",x,y);
}
Try the above code. It will give you the coordinate points as the touch moves on the iPhone.
To draw the line, use something like this in a CCNode subclass (startPoint and endPoint are assumed to be CGPoint ivars updated from the touch handler):
-(void) draw
{
// draw a segment between the stored start and end points
ccDrawLine(startPoint, endPoint);
}
cocos2d calls draw every frame, so updating the points as the touch moves is enough to keep the line redrawn.