I'm trying to add a "whiteboard", so that people can draw lines on it.
The only problem is that if I draw very fast, the sprites are spaced pretty far apart, so it's barely legible if someone is trying to draw letters or numbers. There's a ton of space between the individual sprites.
Here's the method where most of the drawing happens, I think.
-(void) update:(ccTime)delta
{
    CCDirector* director = [CCDirector sharedDirector];
    CCRenderTexture* rtx = (CCRenderTexture*)[self getChildByTag:1];

    // explicitly don't clear the rendertexture
    [rtx begin];

    for (UITouch* touch in touches)
    {
        CGPoint touchLocation = [director convertToGL:[touch locationInView:director.openGLView]];
        touchLocation = [rtx.sprite convertToNodeSpace:touchLocation];

        // because the rendertexture sprite is flipped along its Y axis the Y coordinate must be flipped:
        touchLocation.y = rtx.sprite.contentSize.height - touchLocation.y;

        CCSprite* sprite = [[CCSprite alloc] initWithFile:@"Cube_Ones.png"];
        sprite.position = touchLocation;
        sprite.scale = 0.1f;
        [self addChild:sprite];
        [placedSprites addObject:sprite];
    }

    [rtx end];
}
Maybe this is the cause?
[self scheduleUpdate];
I'm not entirely sure how to decrease the time between updates though.
Thanks in advance
The problem is simply that the user can move the touch location (i.e. his/her finger) a great distance between two touch events. You may receive one event at 100x100 and by the next event the finger is already at 300x300. There's nothing you can do about that.
You can, however, assume that the movement between two touch locations is linear. That means you can take any two touch locations that are farther apart than, say, 10 pixels and split the segment between them into 10-pixel intervals, so you generate the in-between touch locations yourself.
If you do that, it's also a good idea to enforce a minimum distance between two touches, otherwise the user could draw lots and lots of sprites in a very small area, which is not what you want. So you would only draw a new sprite if the new touch location is, say, at least 5 pixels away from the previous one.
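A minimal sketch of that idea, assuming a hypothetical helper addBrushSpriteAt: that does what the loop body in the question does for a single point, and that the caller passes the previous and current touch locations (for example from previousLocationInView: / locationInView:):

// Sketch: split one long touch move into evenly spaced stamps.
// addBrushSpriteAt: is assumed to place a single sprite, like the loop body above.
-(void) stampFrom:(CGPoint)previous to:(CGPoint)current
{
    const CGFloat kMinDistance  = 5.0f;   // don't stamp if the finger barely moved
    const CGFloat kStepDistance = 10.0f;  // spacing of the generated in-between stamps

    CGFloat distance = ccpDistance(previous, current);
    if (distance < kMinDistance)
        return;

    int steps = MAX(1, (int)(distance / kStepDistance));
    for (int i = 1; i <= steps; i++)
    {
        // linear interpolation between the two touch locations
        CGPoint p = ccpLerp(previous, current, (CGFloat)i / steps);
        [self addBrushSpriteAt:p];
    }
}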
Related
I am creating a simple Sprite Kit game, but when I add the physics body to one of my sprites it seems to end up in the wrong position. I know that it is in the wrong position because I have set
skView.showsPhysics = YES;
and it is showing up in the wrong position.
The square in the bottom corner is the physics body for the first semicircle. I am using a square at the moment just for testing purposes.
My app includes view following and follows my main sprite when it moves. I implemented this by following Apple's documentation, creating a 'myWorld' node and adding all other nodes as children of that node.
myWorld = [SKNode node];
[self addChild:myWorld];

semicircle = [SKSpriteNode spriteNodeWithImageNamed:@"SEMICRICLE.png"];
semicircle.size = CGSizeMake(semicircle.frame.size.width/10, semicircle.frame.size.height/10);
semicircle.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:semicircle.frame.size];
semicircle.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2);
semicircle.physicsBody.dynamic = YES;
semicircle.physicsBody.collisionBitMask = 0;
semicircle.name = @"semicircle";
[myWorld addChild:semicircle];
To centre on the node I call these methods
- (void)didSimulatePhysics
{
    [self centerOnNode:[self childNodeWithName:@"//mainball"]];
}

- (void)centerOnNode:(SKNode *)node
{
    CGPoint cameraPositionInScene = [node.scene convertPoint:node.position fromNode:node.parent];
    node.parent.position = CGPointMake(node.parent.position.x - cameraPositionInScene.x,
                                       node.parent.position.y - cameraPositionInScene.y);
}
I don't know if the myWorld node makes any difference to the SKPhysicsBody...
SKPhysicsBody starts at coordinates (0,0), which is the bottom-left corner. If you make the area smaller, as you did with width/10 and height/10, you decrease the size, but from the bottom left.
I think what you are looking for is bodyWithRectangleOfSize:center:, which allows you to manually set the center on which the physics body area is based.
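A rough sketch of what that could look like for the semicircle; the size and offset here are only placeholders, the point being that the center is given in the node's own coordinate space (if I remember correctly, bodyWithRectangleOfSize:center: is available from iOS 7.1 on):

// Sketch: cover only the lower half of the sprite with the physics rectangle.
// The exact size/offset depend on where the semicircle sits in the texture.
CGSize bodySize = CGSizeMake(semicircle.size.width, semicircle.size.height / 2);
CGPoint bodyCenter = CGPointMake(0, -semicircle.size.height / 4);
semicircle.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:bodySize center:bodyCenter];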
Update:
Based on what I understand, your smallest semicircle image is the same size as the screen. I would suggest you modify the image size to something like the example I have. You can then set the sprite's position as required and set the physics body to the half of the image containing your semicircle.
Your centerOnNode call should be put in the didEvaluateActions method instead of didSimulatePhysics. This is because you need to move the world before the physics are drawn, so that they stay in sync. A similar question can be found here: https://stackoverflow.com/a/24804793/5062806
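In other words, something like this (a minimal sketch, just moving the existing call):

- (void)didEvaluateActions
{
    // same centering call as before, but run before the physics pass
    [self centerOnNode:[self childNodeWithName:@"//mainball"]];
}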
I am trying to create a conveyor belt effect using SpriteKit like so
My first reflex would be to create a conveyor belt image bigger than the screen and then move it repeatedly forever with actions. But this does not seem right, because it depends on the screen size.
Is there any better way to do this?
Also, obviously I want to put things (which would move independently) on the conveyor belt, so the node is an SKNode with a child sprite node that is moving.
Update: I would like the conveyor belt to move "visually", so the lines move in a direction, giving the impression of movement.
Apply a physicsBody to all the sprites which you need to move on the conveyor belt and set their affectedByGravity property to NO.
In this example, I am assuming that the sprite node representing your conveyor belt is called conveyor. Also, all the sprite nodes which need to be moved have the string "moveable" as their name property.
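For completeness, the one-time setup of a sprite that should be carried by the belt could look roughly like this (the image name and position are placeholders):

SKSpriteNode *crate = [SKSpriteNode spriteNodeWithImageNamed:@"crate.png"]; // placeholder asset
crate.name = @"moveable";                                                   // picked up by the update loop below
crate.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:crate.size];
crate.physicsBody.affectedByGravity = NO;                                   // as suggested above
crate.position = CGPointMake(CGRectGetMidX(self.frame),
                             CGRectGetMaxY(conveyor.frame) + crate.size.height / 2);
[self addChild:crate];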
Then, in your -update: method,
-(void)update:(CFTimeInterval)currentTime
{
    [self enumerateChildNodesWithName:@"moveable" usingBlock:^(SKNode *node, BOOL *stop) {
        if ([node intersectsNode:conveyor])
        {
            [node.physicsBody applyForce:CGVectorMake(-1, 0)];
            // edit the vector to get the right force. This will be too fast.
        }
    }];
}
After this, just add the desired sprites at the correct positions and you will see them move along by themselves.
For the animation of the belt itself, it would be better to use an array of textures which you can loop over on the sprite.
Alternatively, you can add and remove a series of small sprites with a sectional image and move them like you do the sprites which are travelling on the conveyor.
@akashg has pointed out a solution for moving objects across the conveyor belt; I am giving my answer on how to make the conveyor belt itself look as if it is moving.
One suggestion, and my initial intuition, was to place a rectangle larger than the screen on the scene and move it repeatedly. On reflection I think this is not a nice solution, because if we wanted to place a conveyor belt in the middle of the scene, so that both of its ends are visible, this would not be possible without an extra clipping mask.
The ideal solution would be to tile the SKTexture on the SKSpriteNode and just offset that texture, but this does not seem to be possible with Sprite Kit (there is no tiling mechanism).
So basically what I'm doing is creating subtextures from a texture that is laid out as [tile][tile] (a repeatable tile, twice) and showing these subtextures one after the other to create the animation.
Here is the code:
- (SKSpriteNode *) newConveyor
{
    SKTexture *conveyorTexture = [SKTexture textureWithImageNamed:@"testTextureDouble"];
    SKTexture *halfConveyorTexture = [SKTexture textureWithRect:CGRectMake(0.5, 0.0, 0.5, 1.0) inTexture:conveyorTexture];
    SKSpriteNode *conveyor = [SKSpriteNode spriteNodeWithTexture:halfConveyorTexture
                                                             size:CGSizeMake(conveyorTexture.size.width/2, conveyorTexture.size.height)];
    NSArray *textureArray = [self horizontalTextureArrayForTxture:conveyorTexture];
    SKAction *moveAction = [SKAction animateWithTextures:textureArray timePerFrame:0.01 resize:NO restore:YES];
    [conveyor runAction:[SKAction repeatActionForever:moveAction]];
    return conveyor;
}

- (NSArray *) horizontalTextureArrayForTxture:(SKTexture *)texture
{
    CGFloat deltaOnePixel = 1.0 / texture.size.width;
    int countSubtextures = texture.size.width / 2;
    NSMutableArray *textureArray = [[NSMutableArray alloc] initWithCapacity:countSubtextures];
    CGFloat offset = 0;
    for (int i = 0; i < countSubtextures; i++)
    {
        offset = i * deltaOnePixel;
        SKTexture *subTexture = [SKTexture textureWithRect:CGRectMake(offset, 0.0, 0.5, 1.0) inTexture:texture];
        [textureArray addObject:subTexture];
    }
    return [NSArray arrayWithArray:textureArray];
}
Now this is still not ideal, because it is necessary to make the two-tile image manually. We could also transform an SKTexture with a CIFilter, which could potentially be used to create this two-tile texture.
Apart from that, I think this solution is better because it does not depend on the size of the screen and is memory-efficient; but to cover the whole screen I would have to create more SKSpriteNode objects that run the same moveAction, since tiling is not possible with Sprite Kit according to this source:
How do you set a texture to tile in Sprite Kit.
I will try to update the code to make it possible to tile by using multiple SKSpriteNode objects.
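As a sketch of that last idea: several segments created through newConveyor can simply be laid out side by side; because each one runs the same texture animation and they are added on the same frame, they stay visually in step (the width handling and positions here are illustrative):

- (void) addConveyorCoveringWidth:(CGFloat)width atY:(CGFloat)y
{
    SKSpriteNode *first = [self newConveyor];
    int count = (int)ceilf(width / first.size.width) + 1;
    for (int i = 0; i < count; i++)
    {
        SKSpriteNode *segment = (i == 0) ? first : [self newConveyor];
        segment.anchorPoint = CGPointMake(0, 0.5);                 // lay segments out from left to right
        segment.position = CGPointMake(i * segment.size.width, y);
        [self addChild:segment];
    }
}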
I have a node which is scaled and moved every frame.
The node has a custom draw function, so that only the visible part of that node is drawn each frame.
To determine which part is visible, I need to call:
CGPoint start = [MyNode convertToNodeSpace:_adjustedStart];
CGPoint finish = [MyNode convertToNodeSpace:_adjustedFinish];
where:
_adjustedStart = CGPointZero;
_adjustedFinish = CGPointMake(_winSize.width, 0);
start.x and finish.x are used by my draw method, to determine the width to draw.
Prior to using these methods, I had 60fps, even though sometimes I would draw much more than necessary. After using these methods, I draw exactly the region necessary, but the framerate sometimes drops to 50 (for an instant), making the graphics choppy.
How can I perform the same calculation as convertToNodeSpace / convertToWorldSpace but faster?
I don't know if I'm over-thinking this or not, but I'm trying to adjust the sprite frames shown when my character is moving in different directions.
For example, if the player's finger is above the character I want it to switch to the 'moveUp' frame. If the player's finger is below the character I want it to switch to the 'moveDown' frame, otherwise stay at the 'normalState' frame.
Can someone show me an example of this? Or direct me to a good general tutorial about implementing Sprite sheets/Sprites in this sort of way.
I have gone through and used sprite sheets in demo projects but am looking to release this and want to approach it the proper and most successful way.
Thanks!!
I assume you have already made your sprite sheet and/or packed it (I like TexturePacker). The code would be something like:
...init...
// Place all sprite frames from the sprite sheet into the cache
// (addSpriteFramesWithFile: expects the .plist that TexturePacker exports alongside the texture)
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"spriteSheet.plist"];
CCSpriteBatchNode *gameBatchNode = [CCSpriteBatchNode batchNodeWithFile:@"spriteSheet.png"];
CCSprite *player = [CCSprite spriteWithSpriteFrameName:@"moveUp"];
[gameBatchNode addChild:player];
....
- (void) ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *aTouch = [touches anyObject];
    CGPoint touchPosition = [player.parent convertTouchToNodeSpace:aTouch];
    CGPoint touchPositionRelativeToPlayer = ccpSub(touchPosition, player.position);

    if (touchPositionRelativeToPlayer.y > 0)
        [player setDisplayFrame:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"moveUp"]];
    else
        [player setDisplayFrame:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"moveDown"]];
}
If you want other directions (W, E, NW, etc.), I would suggest converting touchPositionRelativeToPlayer to an angle using atan2 and determining the frame from that.
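A rough sketch of that, with hypothetical frame names for the eight directions (only moveUp and moveDown exist in the question):

// Map the touch offset to one of eight 45-degree sectors and pick the matching frame.
CGFloat angle = atan2f(touchPositionRelativeToPlayer.y, touchPositionRelativeToPlayer.x); // -pi .. pi
int sector = (int)roundf(angle / M_PI_4);   // -4 .. 4
if (sector < 0) sector += 8;                // 0 = E, 2 = N, 4 = W, 6 = S
NSArray *frameNames = @[@"moveE", @"moveNE", @"moveUp", @"moveNW",
                        @"moveW", @"moveSW", @"moveDown", @"moveSE"]; // hypothetical frame names
[player setDisplayFrame:[[CCSpriteFrameCache sharedSpriteFrameCache]
    spriteFrameByName:frameNames[sector]]];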
I am creating a project in which I have random Box2D bodies. I am drawing a line based on the user's touchesMoved in the draw method, and I need to use Box2D's ray casting to check for intersections between that line and the Box2D bodies.
I am using the following code for it in my draw method:
for (int i = 0; i < [pointTouches count]; i += 2)
{
    CGPoint startPoint = CGPointFromString([pointTouches objectAtIndex:i]);
    CGPoint endPoint = CGPointFromString([pointTouches objectAtIndex:i+1]);
    ccDrawLine(startPoint, endPoint);
    b2Vec2 start = [self toMeters:startPoint];
    b2Vec2 end = [self toMeters:endPoint];
    [self checkIntersectionbtw:start :end];
}
-(void)checkIntersectionbtw:(b2Vec2)point1 :(b2Vec2)point2
{
    RaysCastCallback callback;
    world->RayCast(&callback, point1, point2);
    if (callback.m_fixture)
    {
        NSLog(@"intersected");
        checkPoint = true;
    }
}
-(void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *myTouch = [touches anyObject];
    CGPoint currentTouchArea = [myTouch locationInView:[myTouch view]];
    CGPoint lastTouchArea = [myTouch previousLocationInView:[myTouch view]];
    currentTouchArea = [[CCDirector sharedDirector] convertToGL:currentTouchArea];
    lastTouchArea = [[CCDirector sharedDirector] convertToGL:lastTouchArea];
    [pointTouches addObject:NSStringFromCGPoint(currentTouchArea)];
    [pointTouches addObject:NSStringFromCGPoint(lastTouchArea)];
}
But the callback only reports an intersection when the drawn line passes completely through a body. When the user starts from a point outside and ends at a point inside a Box2D body, the callback doesn't report an intersection. What am I possibly doing wrong?
Are you drawing a line or a curve? For a line, I assume that you only use the first point and the last point detected. For a curve, you use all the points detected to form the curve that you are drawing.
If you are drawing a curve then I think I understand the problem. You are calculating the intersection line using the consecutive points detected by the touch system. These consecutive points are very close to each other, which makes for very small ray casts, and you may end up casting a segment whose start and end points are both inside a ball, which will report no collision.
If the objective is to detect whether you are touching the balls, I suggest using a sensor body kept under the touch, and then checking for a collision with this code in your update method:
for (b2ContactEdge* ce = sensorbody->GetContactList(); ce; ce = ce->next)
{
    b2Contact* c = ce->contact;
    if (c->IsTouching())
    {
        const b2Body* bodyA = c->GetFixtureA()->GetBody();
        const b2Body* bodyB = c->GetFixtureB()->GetBody();
        const b2Body* ballBody = (bodyA == sensorbody) ? bodyB : bodyA;
        ...
    }
}
If you really want to use a ray cast, then I suggest saving a few consecutive points and building a single, longer segment from them, to avoid the very short ray casts.
Edit: Sorry, the example I wrote is in C++, but you should be able to find an equivalent for Objective-C.
Box2D's ray casts ignore any 'back'-facing edges they hit, so if the ray starts inside a fixture, that fixture will be ignored. The simplest thing to do is to cast the same ray in both directions.
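A hedged sketch of that, reusing the question's callback type:

// Cast the same segment in both directions so that a start point inside a
// fixture is still reported (guard against zero-length rays, which Box2D rejects).
-(void)checkIntersectionbtw:(b2Vec2)point1 :(b2Vec2)point2
{
    if ((point2 - point1).LengthSquared() < b2_epsilon)
        return;

    RaysCastCallback forward;
    world->RayCast(&forward, point1, point2);

    RaysCastCallback backward;
    world->RayCast(&backward, point2, point1);

    if (forward.m_fixture || backward.m_fixture)
    {
        NSLog(@"intersected");
        checkPoint = true;
    }
}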