I'm creating a space-themed game in SpriteKit, and to simulate a starfield I'm using a GLSL fragment shader from ShaderToy, converted to work in SpriteKit.
To use it, I simply init a clear SKSpriteNode of the same size as my scene (2048 x 1536) and apply the shader to this node.
let starField = SKSpriteNode(color: UIColor.clear, size: CGSize(width: 2048, height: 1536))
let shader = SKShader(fileNamed: "Starfield.fsh")
starField.shader = shader
The shader renders just fine and shows a nice starfield. So far, so good.
The problem occurs when I transition from a different scene using SKTransition. During the transition, the shader appears to be rasterising, and as soon as the transition is complete the whole thing flips upside down.
My transition code (doesn't appear to matter what transition or duration I use):
let gameScene = GameScene(level: level)
gameScene.scaleMode = self.scaleMode
let transition = SKTransition.fade(withDuration: 3.0)
transition.pausesIncomingScene = true
self.view?.presentScene(gameScene, transition:transition)
I've tried with a different shader and the same thing occurs - the starfield is one way up during the transition, and instantly 'flips' as soon as the previous scene has been cleaned up. Has anyone experienced the same and knows what is going on?
I have a video of the problem, which you can see occurring at around 7 seconds:
https://youtu.be/l1lLv6MwKYU
The shader code is as follows:
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co);
float rand(vec2 co)
{
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
void main()
{
float size = 50.0;
float prob = 0.95;
vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
float color = 0.0;
float starValue = rand(pos);
if (starValue > prob)
{
vec2 center = size * pos + vec2(size, size) * 0.5;
float t = 0.9 + 0.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0);
color = 1.0 - distance(gl_FragCoord.xy, center) / (0.9 * size);
color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
}
else if (rand(v_tex_coord) > 0.996)
{
float r = rand(gl_FragCoord.xy);
color = r * (0.25 * sin(u_time * (r * 5.0) + 720.0 * r) + 0.75);
}
gl_FragColor = vec4(vec3(color), 1.0);
}
Related
I wrote an HLSL shader for my MonoGame project that uses ambient lighting to create a day/night cycle.
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif
sampler s0;
struct VertexShaderOutput
{
float4 Position : SV_POSITION;
float4 Color : COLOR0;
float2 TextureCoordinates : TEXCOORD0;
};
float ambient = 1.0f;
float percentThroughDay = 0.0f;
float4 MainPS(VertexShaderOutput input) : COLOR
{
float4 pixelColor = tex2D(s0, input.TextureCoordinates);
float4 outputColor = pixelColor;
// lighting intensity is gradient of pixel position
float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.3;
outputColor.r = outputColor.r / ambient * Intensity;
outputColor.g = outputColor.g / ambient * Intensity;
outputColor.b = outputColor.b / ambient * Intensity;
// sun set/rise blending
float exposeRed = (1 + (.39 - input.TextureCoordinates.y) * 8); // overexpose red
float exposeGreen = (1 + (.39 - input.TextureCoordinates.y) * 2); // some extra green for the blue pixels
float exposeBlue = (1 + (.39 - input.TextureCoordinates.y) * 6); // some extra blue
// happens over full screen
if (input.TextureCoordinates.y < 1.0f) {
float redAdder = max(1, (exposeRed * (percentThroughDay/0.25f))); // be at full exposure at 25% of day gone
float greenAdder = max(1, (exposeGreen * (percentThroughDay/0.25f))); // be at full exposure at 25% of day gone
float blueAdder = max(1, (exposeBlue * (percentThroughDay/0.25f))); // be at full exposure at 25% of day gone
// begin reducing adders
if (percentThroughDay >= 0.25f && percentThroughDay < 0.50f) {
redAdder = max(1, (exposeRed * (1-(percentThroughDay - 0.25f)/0.25f)));
greenAdder = max(1, (exposeGreen * (1-(percentThroughDay - 0.25f)/0.25f)));
blueAdder = max(1, (exposeBlue * (1-(percentThroughDay - 0.25f)/0.25f)));
}
//mid day
else if (percentThroughDay >= 0.50f && percentThroughDay < 0.75f) {
redAdder = 1;
greenAdder = 1;
blueAdder = 1;
}
// add adders back for sunset
else if (percentThroughDay >= 0.75f && percentThroughDay < 0.85f) {
redAdder = max(1, (exposeRed * ((percentThroughDay - 0.75f)/0.10f)));
greenAdder = max(1, (exposeGreen * ((percentThroughDay - 0.75f)/0.10f)));
blueAdder = max(1, (exposeBlue * ((percentThroughDay - 0.75f)/0.10f)));
}
// begin reducing adders
else if (percentThroughDay >= 0.85f) {
redAdder = max(1, (exposeRed * (1-(percentThroughDay - 0.85f)/0.15f)));
greenAdder = max(1, (exposeGreen * (1-(percentThroughDay - 0.85f)/0.15f)));
blueAdder = max(1, (exposeBlue * (1-(percentThroughDay - 0.85f)/0.15f)));
}
outputColor.r = outputColor.r * redAdder;
outputColor.g = outputColor.g * greenAdder;
outputColor.b = outputColor.b * blueAdder;
}
return outputColor;
}
technique ambientLightDayNight
{
pass P0
{
PixelShader = compile PS_SHADERMODEL MainPS();
}
};
This works how I want it to for the most part (it could definitely use some calculation optimization though).
However, I am now looking at adding spotlights in my game for the player to use. I followed along with this method which I got working independently of the ambientLight shader. It is a pretty simple shader that uses a lightMask.
sampler s0;
texture lightMask;
sampler lightSampler = sampler_state{Texture = lightMask;};
float4 PixelShaderLight(float2 coords: TEXCOORD0) : COLOR0
{
float4 color = tex2D(s0, coords);
float4 lightColor = tex2D(lightSampler, coords);
return color * lightColor;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderLight();
}
}
My problem is now using both of these shaders together. My current method is to draw my game scene to a render target, apply the ambient light shader, and then finish by drawing the gamescene (with the ambient light now) to the client screen while applying the spotlight shader.
This brings up multiple issues:
Applying the spotlight shader after the ambient light completely blacks out anything around the light, when in reality the area surrounding the light should be the ambient light.
The light intensity (how bright the light is) calculated in the spotlight shader is too dull when it is "night" because it is calculating the light color based on the ambient light shader's output.
I've tried to apply the ambient light shader after the spotlight shader instead, but this just renders most of everything black because the ambient light calculates against a mostly black background.
I've tried adding some code to the spotlight shader to color black pixels to white in order to reveal the ambient light background, however the light intensity is still being calculated against the darker ambient light - resulting in a very dull light.
Another thought was to just modify my ambient light shader to take the lightMask as a param and simply not apply the ambient light to areas marked on the light mask. Then I could just use the spotlight shader to apply the gradient of the light and modify the color. But I was unsure if I should be cramming these two seemingly separate light effects into one pixel shader. When I tried this, my shader also didn't compile because there were too many arithmetic ops.
So my questions for everyone are:
Should I avoid cramming multiple effects into one pixel shader?
Generally, how would I apply spot lighting over an ambient light effect that can be "dark"?
EDIT
My solution: I did not end up using the spotlight shader, but I still draw the light mask with the texture given in the article, then pass that light mask to this ambient light shader and offset the texture gradient.
float4 MainPS(VertexShaderOutput input) : COLOR
{
float4 constant = 1.5f;
float4 pixelColor = tex2D(s0, input.TextureCoordinates);
float4 outputColor = pixelColor;
// lighting intensity is gradient of pixel position
float Intensity = 1 + (1 - input.TextureCoordinates.y) * 1.05;
outputColor.r = outputColor.r / ambient * Intensity;
outputColor.g = outputColor.g / ambient * Intensity;
outputColor.b = outputColor.b / ambient * Intensity;
// sun set/rise blending
float gval = (1 - input.TextureCoordinates.y); // replace 1 with .39 to lock to 39 percent of screen (this is how it was before)
float exposeRed = (1 + gval * 8); // overexpose red
float exposeGreen = (1 + gval * 2); // some extra green
float exposeBlue = (1 + gval * 4); // some extra blue
float quarterDayPercent = (percentThroughDay/0.25f);
float redAdder = max(1, (exposeRed * quarterDayPercent)); // be at full exposure at 25% of day gone
float greenAdder = max(1, (exposeGreen * quarterDayPercent)); // be at full exposure at 25% of day gone
float blueAdder = max(1, (exposeBlue * quarterDayPercent)); // be at full exposure at 25% of day gone
// begin reducing adders
if (percentThroughDay >= 0.25f ) {
float gradientVal1 = (1-(percentThroughDay - 0.25f)/0.25f);
redAdder = max(1, (exposeRed * gradientVal1));
greenAdder = max(1, (exposeGreen * gradientVal1));
blueAdder = max(1, (exposeBlue * gradientVal1));
}
//mid day
if (percentThroughDay >= 0.50f) {
redAdder = 1;
greenAdder = 1;
blueAdder = 1;
}
// add adders back for sunset
if (percentThroughDay >= 0.75f) {
float gradientVal2 = ((percentThroughDay - 0.75f)/0.10f);
redAdder = max(1, (exposeRed * gradientVal2));
greenAdder = max(1, (exposeGreen * gradientVal2));
blueAdder = max(1, (exposeBlue * gradientVal2));
}
// begin reducing adders
if (percentThroughDay >= 0.85f) {
float gradientVal3 = (1-(percentThroughDay - 0.85f)/0.15f);
redAdder = max(1, (exposeRed * gradientVal3));
greenAdder = max(1, (exposeGreen * gradientVal3));
blueAdder = max(1, (exposeBlue * gradientVal3));
}
outputColor.r = outputColor.r * redAdder;
outputColor.g = outputColor.g * greenAdder;
outputColor.b = outputColor.b * blueAdder;
// first check if we are in a lightMask light
float4 lightMaskColor = tex2D(lightSampler, input.TextureCoordinates);
if (lightMaskColor.r != 0.0f || lightMaskColor.g != 0.0f || lightMaskColor.b != 0.0f)
{
// we are in the light so don't apply ambient light
return pixelColor * (lightMaskColor + outputColor) * constant; // have to offset by outputColor here because the lightMask is pure black
}
return outputColor * pixelColor * constant; // must multiply by pixelColor here to offset the lightMask bounds. TODO: could try to restore original color by removing this multiplication and factoring in more of an offset on ln 91
}
To chain lights the way you want, you need a different approach. As you have already encountered, chaining lights solely through the color won't work: once the color has become black, it can't be highlighted anymore. To deal with multiple lights there are two typical approaches: forward shading and deferred shading. Each has its advantages and disadvantages, so you need to see which fits your situation better.
Forward Shading
This approach is the one you tested, stuffing all lighting computations into a single shading pass. You add all light intensities together into a final light intensity and then multiply it with the color.
Pros are performance and simplicity; cons are the limit on the number of lights and the more complex shader code.
Deferred Shading
This approach decouples individual lights from each other and can be used to draw scenes with a large number of lights. Each light needs the original scene color (albedo) to compute its part of the final image. Therefore you first render your scene without any lighting onto a texture (usually called the color buffer or albedo buffer). Then you can render each light separately, multiplying it with the albedo and adding it to the final image; even in the dark parts, the original color comes back wherever a light touches it.
Pros are the cleaner structure and the possibility to use a large number of lights, even with different shapes. Cons are the extra buffers and draw calls that have to be made.
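To make the combine order concrete, here is a minimal sketch of the idea shared by both approaches (plain Swift used purely for illustration; the RGB type and function names are mine, not MonoGame API). The essential move is to sum the light contributions first and only then multiply the summed lighting with the albedo, so a spotlight can still brighten a pixel that the dark ambient pass would otherwise black out.
struct RGB { var r: Float; var g: Float; var b: Float }
// Sum every light contribution into one intensity (forward style:
// this happens inside a single shader pass).
func combinedLight(ambient: RGB, spotlights: [RGB]) -> RGB {
    var total = ambient
    for light in spotlights {
        total.r += light.r
        total.g += light.g
        total.b += light.b
    }
    return total
}
// Apply the summed lighting to the original scene color exactly once.
func shade(albedo: RGB, lighting: RGB) -> RGB {
    return RGB(r: min(albedo.r * lighting.r, 1.0),
               g: min(albedo.g * lighting.g, 1.0),
               b: min(albedo.b * lighting.b, 1.0))
}
In deferred shading the same sum is built on the GPU instead: each light's pass is multiplied with the albedo buffer and additively blended into the final image, which is why the original color reappears wherever any light touches it.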
I'm writing an iOS app that renders a pointcloud in SceneKit using a custom geometry. This post was super helpful in getting me there (though I translated this to Objective-C), as was David Rönnqvist's book 3D Graphics with SceneKit (see chapter on custom geometries). The code works fine, but I'd like to make the points render at a larger point size - at the moment the points are super tiny.
According to the OpenGL docs, you can do this by calling glPointSize(). From what I understand, SceneKit is built on top of OpenGL so I'm hoping there is a way to access this function or do the equivalent using SceneKit. Any suggestions would be much appreciated!
My code is below. I've also posted a small example app on bitbucket accessible here.
// set the number of points
NSUInteger numPoints = 10000;
// set the max distance points
int randomPosUL = 2;
int scaleFactor = 10000; // because I want decimal points
// but am getting random values using arc4random_uniform
PointcloudVertex pointcloudVertices[numPoints];
for (NSUInteger i = 0; i < numPoints; i++) {
PointcloudVertex vertex;
float x = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
float y = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
float z = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
vertex.x = (x - randomPosUL * scaleFactor) / scaleFactor;
vertex.y = (y - randomPosUL * scaleFactor) / scaleFactor;
vertex.z = (z - randomPosUL * scaleFactor) / scaleFactor;
vertex.r = arc4random_uniform(255) / 255.0;
vertex.g = arc4random_uniform(255) / 255.0;
vertex.b = arc4random_uniform(255) / 255.0;
pointcloudVertices[i] = vertex;
// NSLog(#"adding vertex #%lu with position - x: %.3f y: %.3f z: %.3f | color - r:%.3f g: %.3f b: %.3f",
// (long unsigned)i,
// vertex.x,
// vertex.y,
// vertex.z,
// vertex.r,
// vertex.g,
// vertex.b);
}
// convert array to point cloud data (position and color)
NSData *pointcloudData = [NSData dataWithBytes:&pointcloudVertices length:sizeof(pointcloudVertices)];
// create vertex source
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
semantic:SCNGeometrySourceSemanticVertex
vectorCount:numPoints
floatComponents:YES
componentsPerVector:3
bytesPerComponent:sizeof(float)
dataOffset:0
dataStride:sizeof(PointcloudVertex)];
// create color source
SCNGeometrySource *colorSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
semantic:SCNGeometrySourceSemanticColor
vectorCount:numPoints
floatComponents:YES
componentsPerVector:3
bytesPerComponent:sizeof(float)
dataOffset:sizeof(float) * 3
dataStride:sizeof(PointcloudVertex)];
// create element
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:nil
primitiveType:SCNGeometryPrimitiveTypePoint
primitiveCount:numPoints
bytesPerIndex:sizeof(int)];
// create geometry
SCNGeometry *pointcloudGeometry = [SCNGeometry geometryWithSources:@[ vertexSource, colorSource ] elements:@[ element ]];
// add pointcloud to scene
SCNNode *pointcloudNode = [SCNNode nodeWithGeometry:pointcloudGeometry];
[self.myView.scene.rootNode addChildNode:pointcloudNode];
I was looking into rendering point clouds in iOS myself and found a solution on Twitter by a "vade", and figured I'd post it here for others:
ProTip: SceneKit shader modifiers are useful:
mat.shaderModifiers = @{SCNShaderModifierEntryPointGeometry : @"gl_PointSize = 16.0;"};
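For reference, the same tip in Swift (assuming mat is an SCNMaterial, as in the line above):
mat.shaderModifiers = [.geometry: "gl_PointSize = 16.0;"]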
I'm attempting to see what shaders look like in Interface Builder using SpriteKit, and would like to use some of the shaders at ShaderToy. To do it, I created a "shader.fsh" file, a scene file, and added a color sprite to the scene, giving it a custom shader (shader.fsh).
While very basic shaders seem to work:
void main() {
gl_FragColor = vec4(0.0,1.0,0.0,1.0);
}
Any attempt I make to convert shaders from ShaderToy causes Xcode to freeze up (spinning color ball) as soon as the attempt is made to render them.
The shader I am working with for example, is this one:
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co)
{
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
float size = 30.0;
float prob = 0.95;
vec2 pos = floor(1.0 / size * fragCoord.xy);
float color = 0.0;
float starValue = rand(pos);
if (starValue > prob)
{
vec2 center = size * pos + vec2(size, size) * 0.5;
float t = 0.9 + 0.2 * sin(iGlobalTime + (starValue - prob) / (1.0 - prob) * 45.0);
color = 1.0 - distance(fragCoord.xy, center) / (0.5 * size);
color = color * t / (abs(fragCoord.y - center.y)) * t / (abs(fragCoord.x - center.x));
}
else if (rand(fragCoord.xy / iResolution.xy) > 0.996)
{
float r = rand(fragCoord.xy);
color = r * (0.25 * sin(iGlobalTime * (r * 5.0) + 720.0 * r) + 0.75);
}
fragColor = vec4(vec3(color), 1.0);
}
I've tried:
Replacing mainImage() with main(void) (so that it will be called)
Replacing the iXxxxx variables (iGlobalTime, iResolution) and fragCoord variables with their related variables (based on the suggestions here)
Replacing some of the variables (iGlobalTime)...
While changing mainImage to main() and swapping out the variables got it to work without error in the TinyShading realtime tester app, the outcome is always the same in Xcode (spinning ball, freeze). Any advice here would be helpful, as there is a surprisingly small amount of information currently available on the topic.
I managed to get this working in SpriteKit using SKShader. I've been able to render every shader from ShaderToy that I've attempted so far. The only exception is that you must remove any code using iMouse, since there is no mouse in iOS. I did the following...
1) Change the mainImage function declaration in the ShaderToy to...
void main(void) {
...
}
The ShaderToy mainImage function has an input named fragCoord. In iOS, this is globally available as gl_FragCoord, so your main function no longer needs any inputs.
2) Do a replace all to change the following from their ShaderToy names to their iOS names...
fragCoord becomes gl_FragCoord
fragColor becomes gl_FragColor
iGlobalTime becomes u_time
Note: There are more that I haven't encountered yet. I'll update as I do
3) Providing iResolution is slightly more involved...
iResolution is the viewport size (in pixels), which translates to the sprite size in SpriteKit. This used to be available as u_sprite_size in iOS, but has been removed. Luckily, Apple provides a nice example of how to inject it into your shader using uniforms in their SKShader documentation.
However, as stated in the Shader Inputs section of ShaderToy, the type of iResolution is vec3 (x, y and z), as opposed to u_sprite_size, which is vec2 (x and y). I have yet to see a single ShaderToy shader that uses the z value of iResolution, so we can simply use a z value of zero. I modified the example in the Apple documentation to provide my shader an iResolution of type vec3, like so...
let uniformBasedShader = SKShader(fileNamed: "YourShader.fsh")
let sprite = SKSpriteNode()
sprite.shader = uniformBasedShader
let spriteSize = vector_float3(
Float(sprite.frame.size.width), // x
Float(sprite.frame.size.height), // y
Float(0.0) // z - never used
)
uniformBasedShader.uniforms = [
SKUniform(name: "iResolution", vectorFloat3: spriteSize)
]
That's it :)
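One caveat with the snippet above: SKSpriteNode() is created with a zero size, so make sure the sprite has its final size before reading sprite.frame, otherwise iResolution will be (0, 0, 0). If the node is resized later, the uniform can be refreshed through SKShader's uniformNamed(_:):
// Refresh iResolution after the sprite's size has changed.
uniformBasedShader.uniformNamed("iResolution")?.vectorFloat3Value = vector_float3(
    Float(sprite.frame.size.width),
    Float(sprite.frame.size.height),
    0.0 // z - never used
)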
Here is the change to the shader that works when loaded as a shader with Swift:
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co);
float rand(vec2 co)
{
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
void main()
{
float size = 50.0; //Item 1:
float prob = 0.95; //Item 2:
vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
float color = 0.0;
float starValue = rand(pos);
if (starValue > prob)
{
vec2 center = size * pos + vec2(size, size) * 0.5;
float t = 0.9 + 0.2 * sin(u_time + (starValue - prob) / (1.0 - prob) * 45.0); //Item 3:
color = 1.0 - distance(gl_FragCoord.xy, center) / (0.9 * size);
color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
}
else if (rand(v_tex_coord) > 0.996)
{
float r = rand(gl_FragCoord.xy);
color = r * (0.25 * sin(u_time * (r * 5.0) + 720.0 * r) + 0.75);
}
gl_FragColor = vec4(vec3(color), 1.0);
}
Play with Item 1 to change the number of stars in the sky: the smaller the number, the more stars. I like the number to be around 50, not too dense.
Item 2 changes the randomness, or how close together the stars will appear: 1 = none, 0.1 = side by side. Around 0.75 gives a nice feel.
Item 3 is where most of the magic happens: this is the size and pulse of the stars.
float t = 0.9
Changing 0.9 will scale the initial star size up or down; a nice value is 1.4, not too big, not too small.
float t = 0.9 + 0.2
Changing the second value in this equation, 0.2, will increase the pulse-effect width of the stars proportionally to the original size. With 1.4 I like a value of 1.2.
To add the shader to your Swift project, add a sprite the size of the screen to the scene, then attach the shader like this:
let backgroundImage = SKSpriteNode()
backgroundImage.texture = textureAtlas.textureNamed("any")
backgroundImage.size = screenSize
let shader = SKShader(fileNamed: "nightSky.fsh")
backgroundImage.shader = shader
I am creating a game that has 3 layers of background. They are added to a CCParallaxNode and it's moved by tilting the device to the right, left, up and down. I am using this code to move the CCParallaxNode (accelerometer delegate method - didAccelerate):
void SelectScreen::didAccelerate(cocos2d::CCAcceleration *pAccelerationValue)
{
float deceleration = 0.1f, sensitivity = 30.0f, maxVelocity = 200;
accelX = pAccelerationValue->x * sensitivity;
accelY = pAccelerationValue->z * sensitivity;
parallaxMovementX = parallaxMovementX * deceleration + pAccelerationValue->x * sensitivity;
parallaxMovementX = fmaxf(fminf(parallaxMovementX, maxVelocity), -maxVelocity);
float offset = -calibration * sensitivity;
parallaxMovementY = (parallaxMovementY * deceleration + pAccelerationValue->z * sensitivity) + offset;
}
Then, in the update method:
void SelectScreen::update(float dt)
{
CCNode* node = getChildByTag(100);
float maxX = (Data::getInstance()->getWinSize().width * 2) + 100;
float minX = node->getContentSize().width - 100;
float maxY = Data::getInstance()->getWinSize().height * 0.1f;
float minY = -200;
float diffX = parallaxMovementX;
float diffY = parallaxMovementY;
float newX = node->getPositionX() + diffX;
float newY = node->getPositionY() + diffY;
newX = MIN(MAX(newX, minX), maxX);
newY = MIN(MAX(newY, minY), maxY);
if(isUpdating)
node->setPositionX(newX);
if(isUpdatingY)
node->setPositionY(newY);
}
The movement is nicely done. However, when reaching any of the 4 edges it stops abruptly. Also, when changing direction (e.g. moving to the right, then moving to the left) it changes abruptly.
Question: How can I get a smooth stop and a smooth direction change (maybe with a little bouncing effect)? I think this is also related to the accelerometer data (when moving fast it should bounce longer than when moving slow).
Thanks in advance.
You need some math to smooth the movements.
Try checking the code here:
http://www.nscodecenter.com/preguntas/10768/3d-parallax-con-accelerometer
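The usual trick is a damped spring: ease toward the clamped target instead of jumping to it, and let the velocity overshoot provide the bounce. A minimal sketch, shown in Swift purely for illustration (the same few lines translate directly into the C++ update method above; the stiffness and damping constants are tuning guesses):
// Damped spring toward a clamped target: the easing gives a smooth stop
// at the edges, and overshoot from the velocity term gives a small bounce.
func springStep(position: Float, target: Float, velocity: inout Float,
                dt: Float, stiffness: Float = 40.0, damping: Float = 8.0) -> Float {
    let acceleration = (target - position) * stiffness - velocity * damping
    velocity += acceleration * dt
    return position + velocity * dt
}
// Per frame: clamp the desired position first, then ease toward it, e.g.
// newX = springStep(position: currentX, target: clampedX, velocity: &velocityX, dt: dt)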
I am creating new sprites/bodies when touching the screen:
-(void) addNewSpriteAtPosition:(CGPoint)pos
{
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.position=[Helper toMeters:pos];
b2Body* body = world->CreateBody(&bodyDef);
b2CircleShape circle;
circle.m_radius = 30/PTM_RATIO;
// Define the dynamic body fixture.
b2FixtureDef fixtureDef;
fixtureDef.shape=&circle;
fixtureDef.density=0.7f;
fixtureDef.friction=0.3f;
fixtureDef.restitution = 0.5;
body-> CreateFixture(&fixtureDef);
PhysicsSprite* sprite = [PhysicsSprite spriteWithFile:@"circle.png"];
[self addChild:sprite];
[sprite setPhysicsBody:body];
body->SetUserData((__bridge void*)sprite);
}
Here is my positioning helper:
+(b2Vec2) toMeters:(CGPoint)point
{
return b2Vec2(point.x / PTM_RATIO, point.y / PTM_RATIO);
}
PhysicsSprite is the typical one used with Box2D, but I'll include the relevant method:
-(CGAffineTransform) nodeToParentTransform
{
b2Vec2 pos = physicsBody->GetPosition();
float x = pos.x * PTM_RATIO;
float y = pos.y * PTM_RATIO;
if (ignoreAnchorPointForPosition_)
{
x += anchorPointInPoints_.x;
y += anchorPointInPoints_.y;
}
float radians = physicsBody->GetAngle();
float c = cosf(radians);
float s = sinf(radians);
if (!CGPointEqualToPoint(anchorPointInPoints_, CGPointZero))
{
x += c * -anchorPointInPoints_.x + -s * -anchorPointInPoints_.y;
y += s * -anchorPointInPoints_.x + c * -anchorPointInPoints_.y;
}
self.position = CGPointMake(x, y);
// Rot, Translate Matrix
transform_ = CGAffineTransformMake(c, s, -s, c, x, y);
return transform_;
}
Now, I have two issues illustrated by the following two images, which show the debug draw with the sprites. Retina and non-retina versions:
Issue #1 - As you can see in both images, the further the objects are from (0,0), the more offset the sprite becomes from the physics body.
Issue #2 - The red circle image files are 60x60 (retina), and the white circles are 30x30 (non-retina). Why are they sized differently on screen? Cocos2d should use points, not pixels, so shouldn't they be the same size on screen?
Instead of a hardcoded size, use contentSize:
circle.m_radius = sprite.contentSize.width*0.5f/PTM_RATIO;
This works in both SD and Retina modes.
You can use this style to sync Box2D and cocos2d positions:
body->SetTransform([self toB2Meters:sprite.position], 0.0f); //box2d<---cocos2d
//OR
sprite.position = ccp(body->GetPosition().x * PTM_RATIO,
body->GetPosition().y * PTM_RATIO); //box2d--->cocos2d