I'm working in Swift 3.1. I want to create a shadow for an SKSpriteNode. I'm thinking of creating an SKShapeNode from my SKSpriteNode and giving it some glow and alpha.
But how can I convert SKSpriteNode into SKShapeNode?
If your plan is to make a shadow that looks like your sprite, then using another sprite node instead would be advisable. SKShapeNodes have poor performance, and converting the sprite into a vector shape would be processor-heavy.
You should look into SKEffectNode: you can create a sprite node with your current texture and apply effects to it to turn the texture black and blurred, and maybe even use an affine transformation to distort and rotate the texture so it looks like a shadow.
To make things more GPU/CPU efficient when you have many shadows, you can simply put all the shadow sprites as children of a single SKEffectNode that applies the effects to every shadow sprite at once. I assume that positioning the shadow under the main sprite will not be a challenge for you.
Having the shadow as a child of the main sprite would force you to create an effect node for every sprite that casts a shadow, and would cause the effects to be processed over and over. By putting every shadow under the same node, they all get rendered at once and the effect is processed only once per frame.
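For illustration, here is a rough Swift 3 sketch of that idea, assuming it runs inside your SKScene (e.g. in didMove(to:)) and that a "plane" texture exists; the blur radius, alpha, scale and offset are just guesses to tune:

import SpriteKit
import CoreImage

// One effect node shared by all shadow sprites, so the blur is processed once per frame.
let shadowContainer = SKEffectNode()
shadowContainer.shouldEnableEffects = true
shadowContainer.filter = CIFilter(name: "CIGaussianBlur",
                                  withInputParameters: ["inputRadius": 6.0])
shadowContainer.zPosition = -1                   // keep shadows under the sprites

// One shadow sprite per casting sprite, reusing the same texture.
let shadow = SKSpriteNode(texture: SKTexture(imageNamed: "plane"))
shadow.color = .black                            // tint the texture...
shadow.colorBlendFactor = 1.0                    // ...into a solid silhouette
shadow.alpha = 0.4
shadow.yScale = 0.6                              // crude squash to suggest perspective
shadow.position = CGPoint(x: 10, y: -10)         // offset relative to the casting sprite
shadowContainer.addChild(shadow)

addChild(shadowContainer)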
I hope it helps!
SKShapeNodes are known to have poor performance in SpriteKit. Instead, you should add another SKSpriteNode for the shadow, and add it as a child of the main SKSpriteNode.
If you want the shadow to appear below its parent, you may have to adjust its relative zPosition as well.
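For example, a minimal sketch (texture name, alpha and offset are placeholders), assuming this runs inside your SKScene:

import SpriteKit

let sprite = SKSpriteNode(imageNamed: "character")

let shadow = SKSpriteNode(imageNamed: "character")  // same texture as the parent
shadow.color = .black
shadow.colorBlendFactor = 1.0                       // solid black silhouette
shadow.alpha = 0.3
shadow.zPosition = -1                               // negative zPosition draws it behind its parent
shadow.position = CGPoint(x: 6, y: -6)              // small offset so the shadow peeks out
sprite.addChild(shadow)

addChild(sprite)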
The Issue
I've set up a minimal SceneKit project with a scene that contains the default airplane with a transparent plane that acts as a shadow receiver. I've duplicated this setup so there are two airplanes and two transparent shadow planes.
There is a directional light that casts shadows and has its shadowMode property set to .deferred. When the two shadow planes overlap, the plane that is closer to the camera 'cuts out' the shadow on the plane that is further away from the camera.
I know this is due to the fact that the plane's material has its .writesToDepthBuffer property set to true. However, without this the deferred shadows don't work.
The Question
Is there a way to show shadows on multiple overlapping planes? I know I can use SCNFloor to show multiple shadows but I specifically want shadows on multiple planes with a different Y position. Think of a scenario in ARKit where multiple planes are detected.
The Code
I've set up a minimal project on GitHub here.
Making the Y values of both shadow planes close enough to each other will solve the cutoff issue.
In SceneKit this is the regular behaviour of two different planes that receive shadow projections. To get robust shadows, use just one 3D object (a plane, or a custom-shape geometry if you need different floor levels) as a shadow catcher.
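For reference, a minimal sketch of a single shadow-catcher plane, assuming iOS 11+ APIs (colorBufferWriteMask) and a deferred-shadow directional light; sizes, angles and the shadow colour are placeholders:

import SceneKit
import UIKit

// Invisible plane that still writes depth, so deferred shadows can land on it.
let catcherGeometry = SCNPlane(width: 10, height: 10)
let catcherMaterial = SCNMaterial()
catcherMaterial.colorBufferWriteMask = []      // draw no colour...
catcherMaterial.writesToDepthBuffer = true     // ...but keep writing depth
catcherGeometry.materials = [catcherMaterial]

let catcherNode = SCNNode(geometry: catcherGeometry)
catcherNode.eulerAngles.x = -.pi / 2           // lay the plane flat

// Directional light projecting deferred shadows onto the catcher.
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowMode = .deferred
light.shadowColor = UIColor.black.withAlphaComponent(0.5)
let lightNode = SCNNode()
lightNode.light = light
lightNode.eulerAngles.x = -.pi / 3             // tilt so the shadow falls onto the plane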
If you have several 3D objects with the Writes depth option turned on, use the Rendering order property for each object. Nodes with greater rendering order values are rendered last. The default rendering order is zero.
For instance:
geoNodeOne.renderingOrder = -1 /* Rendered first */
geoNodeTwo.renderingOrder = 50 /* Rendered last */
But in your case the rendering order property is useless, because one shadow-projected plane blocks the other.
To model a custom-shape geometry, use an extrude tool in a 3D modelling app (such as Maya or 3ds Max).
I want to make a drawing app by using SKEmitterNode effect.
I know that SKShapeNode is used to stroke or fill a CGPath, and I could use an SKShapeNode to do it,
but what I want is to use an SKEmitterNode effect, like in the following images, to do the same work.
Is it possible ?
Yes, and this is a very good use of SKEmitterNodes.
First, you'll need to use Additive blend mode on your particles to get this effect.
You should then set the speed of the particles to 0 in all directions and not have them affected by gravity. When they're emitted, they should be stationary.
The particle emitter should move with the touch of the artist/user, and emit at a rate slightly inverse to the rate of change in position of the emitter.
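A rough Swift sketch of those emitter settings, assuming it runs inside your SKScene and that a soft round "spark" texture exists; the numeric values are guesses to tune:

import SpriteKit

let emitter = SKEmitterNode()
emitter.particleTexture = SKTexture(imageNamed: "spark")
emitter.particleBlendMode = .add      // additive blending so strokes build up like the images
emitter.particleBirthRate = 200
emitter.particleLifetime = 60         // long-lived so the drawing persists
emitter.particleSpeed = 0             // stationary once emitted
emitter.particleSpeedRange = 0
emitter.xAcceleration = 0             // no gravity or drift
emitter.yAcceleration = 0
emitter.particleAlpha = 0.2
emitter.particleScale = 0.3
emitter.targetNode = self             // particles stay behind while the emitter moves
addChild(emitter)

// In touchesMoved(_:with:), move `emitter` to the touch location and, if you like,
// lower particleBirthRate as the touch moves faster (the inverse relation mentioned above).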
I am using SpriteKit. Can SKLightNode only work with SKSpriteNode? Because there is no shadowCastBitMask property on SKShapeNode, I cannot make SKLightNode work with SKShapeNode. Is there a way to make them work together? Thanks!
SKShapeNode does not have lighting properties the way SKSpriteNode does. If you create an SKShapeNode, you can make it disappear into a shadow, but you cannot make it cast a shadow or set any other lighting properties.
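If you need the shape's appearance to participate in lighting anyway, one possible workaround (my assumption, not a hidden SKShapeNode feature) is to rasterise the shape into a texture and light the resulting SKSpriteNode instead. A minimal sketch, assuming this runs inside your SKScene with a live SKView:

import SpriteKit

let shape = SKShapeNode(circleOfRadius: 40)
shape.fillColor = .red

// Render the shape into a texture, then use a sprite, which does support
// lightingBitMask / shadowCastBitMask.
if let view = view, let texture = view.texture(from: shape) {
    let sprite = SKSpriteNode(texture: texture)
    sprite.lightingBitMask = 1
    sprite.shadowCastBitMask = 1
    addChild(sprite)
}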
I am using Apple's Sprite Kit to create a game, and currently I have drawn an SKShapeNode object that looks similar to the following:
My actual sprite has rounded edges, but for simplicity, I am trying to get everything working with this shape first. I am able to draw the sprite perfectly fine. However, I am having trouble creating a physics body to cover the sprite. I want the sprite to recognize collisions anywhere along its surface, so I want a physics body that covers only the dark gray area. Also, I want the sprite to respond to forces and impulses, so I need a volume-based physics body.
My first idea was to simply use the same CGPathRef that I used to create the image for the sprite, and pass it to the bodyWithPolygonFromPath method to create a corresponding physics body. I have come to find this will not work, as my shape is not convex.
So, now I am thinking I should be able to use bodyWithBodies to create a physics body shaped appropriately. I want to create three rectangular physics bodies (one to cover the left of the gray area, one to cover the top, and one to cover the right), and use those to create one physics body whose area covers the gray area of my sprite by using this:
mySprite.physicsBody = [SKPhysicsBody bodyWithBodies:@[body1, body2, body3]];
However, the main problem I am having is correctly positioning the individual bodies that go into the array, which is passed to the bodyWithBodies method. By using the above method, I obviously have no way of controlling where these physics bodies are being placed in relation to each other. The result is all three physics bodies stacking on top of each other in the center of mySprite.
How exactly are you supposed to position the physics bodies in the array relative to each other before passing them to the bodyWithBodies method?
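For what it's worth, the component bodies can be offset when they are created, via the center parameter of the rectangle initializer, before combining them with bodyWithBodies. A minimal sketch (in Swift; mySprite and the sizes/offsets are placeholders for your gray area):

import SpriteKit

let left  = SKPhysicsBody(rectangleOf: CGSize(width: 20, height: 80),
                          center: CGPoint(x: -40, y: 0))
let top   = SKPhysicsBody(rectangleOf: CGSize(width: 100, height: 20),
                          center: CGPoint(x: 0, y: 30))
let right = SKPhysicsBody(rectangleOf: CGSize(width: 20, height: 80),
                          center: CGPoint(x: 40, y: 0))

mySprite.physicsBody = SKPhysicsBody(bodies: [left, top, right])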
I have a game object that manages several sprite objects. Each of the sprites overlap each other a bit, and drawing them looks just fine when they are at 100% opacity. If I set their opacity to say, 50% that is when it all goes to pot because any overlapping area is not 50% opaque due to the multiple layers.
EDIT: Oops! For some reason I thought that I couldn't upload images. Anyway...
http://postimage.org/image/2fhcmn6s/ --> Here it is. Guess I need more rep for proper inclusion.
From left to right:
1. Multiple sprites, 100% opacity. Great!
2. Both are 50%, but notice how the overlap region distinguishes them as two sprites.
3. This is the desired behavior. They are 50% opaque, but in terms of the composite image.
What is the best way to mitigate this problem? Is a render target a good idea? What if I have hundreds of these 'multi-sprites'?
Hope this makes sense. Thanks!
Method 1:
If you care about the individual opacity of each sprite, render each sprite over the background into a render-target texture of the same size, at 50% (or whatever opacity you want that sprite to have against the background). Then draw this render target with 100% opacity.
In this way, all sprites will be blended against the background only, and other sprites will be ignored.
Method 2:
If you don't care about setting the individual opacity of each sprite, then you can just draw all sprites with 100% opacity to a rendertarget. Then draw that render target over your background at 50% opacity.
Performance concerns:
I mentioned two examples of drawing to rendertargets, each for a different effect.
Method 1:
You want to be able to specify a different opacity for each sprite.
If so, you need to render every sprite to a rendertarget and then draw that rendertarget texture to the final texture. Effectively, this is the same cost as drawing twice as many sprites as you need. In this case, that's 400 draw calls, which can be very expensive.
If you batch the calls though, and use a single large rendertarget for all of the sprites, you might get away with just 2 draw calls (depending on how big your sprites are, and the max size of a texture).
Method 2:
You don't need different opacity per each sprite.
In this case you can almost certainly get away with just 2 draw calls, regardless of sprite size.
Just batch all draw calls of the sprites (with 100% opacity) to draw to a rendertarget. That's one draw call.
Now draw that rendertarget on top of your background image with the desired opacity (e.g. 50% opacity), and all sprites will have this opacity.
This case is easier to implement.
The first thing your example images reminded me of is the "depth-buffer and translucent surfaces" problem.
In a 3D game you must sort your translucent surfaces from back-to-front and draw them only after you have rendered the rest of your scene - all with depth reading and writing turned on. If you don't do this you end up with your 3rd image, when you normally want your 2nd image with the glass being translucent over the top of what is behind it.
But you want the 3rd image - with some transparent surfaces obscuring other ones - so you could just deliberately cause this depth problem!
To do this you need to turn on depth reads and writes and set your depth function so that a second sprite drawn at the same depth as a previously drawn sprite does not render.
To achieve this in XNA 4.0 you need to pass, to SpriteBatch.Begin, a DepthStencilState with its DepthBufferFunction set to CompareFunction.Less (by default it is less-than-or-equal-to) and DepthBufferEnable and DepthBufferWriteEnable set to true.
There may be interactions with the sprite's layerDepth parameter (I cannot remember how it maps to depth by default).
You may also need to use BasicEffect as your shader for your sprite batch - specifically so you can set a projection matrix with appropriate near and far planes. This article explains how to do that. And you may also need to clear your depth buffer before hand.
Finally, you need to draw your sprites in the correct order - with the unobscured sprite first.
I am not entirely sure if this will work and if it will work reliably (perhaps you will get some kind of depth fighting issue, I am not sure). But I think it's worth a try, given that you can leave your rendering code essentially normal and just adjust your render state.
You should try the stuff in Andrew's answer first, but if that doesn't work, you could still render all of the sprites (assuming they all have the same opacity) onto a RenderTarget(2D) with 100% opacity, and then render that RenderTarget to the screen with 50%.
Something like this in XNA 4.0:
RenderTarget2D rt = new RenderTarget2D(GraphicsDevice,
    GraphicsDevice.PresentationParameters.BackBufferWidth,
    GraphicsDevice.PresentationParameters.BackBufferHeight);
GraphicsDevice.SetRenderTarget(rt);
//Draw sprites
GraphicsDevice.SetRenderTarget(null);
//Then draw rt (also a Texture2D) with 50% opacity. For example:
spriteBatch.Begin();
spriteBatch.Draw(rt, Vector2.Zero, Color.White * 0.5f); // 50% opacity (XNA's Color has no FromArgb)
spriteBatch.End();