In the SKCropNode class reference, some examples of how to specify a mask are given. Here they are:
This means a crop node can use simple masks derived from a piece of artwork, but it can also use more sophisticated masks. For example, here are some ways you might specify a mask:
An untextured sprite that limits content to a rectangular portion of the scene.
A textured sprite is a precise per-pixel mask. But consider also the benefits of a nonuniformly scaled texture. You could use a nonuniformly scaled texture to create a mask for a resizable user-interface element (such as a health bar) and then fill the masked area with dynamic content.
A collection of nodes can dynamically generate a complex mask that changes each time the frame is rendered.
The second example introduces a nonuniformly scaled texture: what does this mean?
The documentation does not help me understand this second example!
A nonuniformly scaled texture is a texture that is applied to a sprite whose xScale differs from its yScale.
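To make this concrete, here is a minimal Swift sketch of the health-bar idea from the quoted documentation (the mask image name, scales, and sizes are illustrative, not from the reference):

import SpriteKit

// A resizable health bar: the mask sprite's texture is scaled
// nonuniformly, i.e. xScale != yScale.
let crop = SKCropNode()

let mask = SKSpriteNode(imageNamed: "barMask") // hypothetical mask artwork
mask.xScale = 6.0 // stretched horizontally...
mask.yScale = 1.0 // ...but not vertically
crop.maskNode = mask

// Dynamic content is visible only where the mask has opaque pixels.
let fill = SKSpriteNode(color: .red, size: CGSize(width: 200, height: 32))
crop.addChild(fill)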
The Issue
I've set up a minimal SceneKit project with a scene that contains the default airplane with a transparent plane that acts as a shadow receiver. I've duplicated this setup so there are two airplanes and two transparent shadow planes.
There is a directional light that casts shadows and has its shadowMode property set to .deferred. When the two shadow planes overlap, the plane that is closer to the camera 'cuts out' the shadow on the plane that is further away from the camera.
I know this is due to the fact that the plane's material has its .writesToDepthBuffer property set to true. However, without this the deferred shadows don't work.
The Question
Is there a way to show shadows on multiple overlapping planes? I know I can use SCNFloor to show multiple shadows but I specifically want shadows on multiple planes with a different Y position. Think of a scenario in ARKit where multiple planes are detected.
The Code
I've set up a minimal project on GitHub here.
Making the Y values of the two shadow planes close enough to each other will resolve the cutoff issue.
In SceneKit this is the regular behaviour of two different planes that receive shadow projections. To get robust shadows, use just one 3D object (a plane, or a custom-shaped geometry if you need different floor levels) as a shadow catcher.
If you have several 3D objects with the writes-to-depth-buffer option turned on, set the rendering order property on each object. Nodes with greater rendering orders are rendered last. The default rendering order is zero.
For instance:
geoNodeOne.renderingOrder = -1 /* Rendered first */
geoNodeTwo.renderingOrder = 50 /* Rendered last */
But in your case the rendering order property is useless, because one shadow-projected plane blocks the other.
To model a custom-shaped geometry, use an extrude tool in a 3D modelling app (such as Maya or 3ds Max).
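For the single-shadow-catcher approach, a minimal Swift sketch (assuming iOS 11+ for colorBufferWriteMask; the light angle, sizes, and names are illustrative):

import SceneKit
import UIKit

// One directional light with deferred shadows.
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowMode = .deferred
light.shadowColor = UIColor(white: 0.0, alpha: 0.5)

let lightNode = SCNNode()
lightNode.light = light
lightNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)

// A single large plane catches the shadows for all objects above it.
let catcher = SCNPlane(width: 20, height: 20)
let material = SCNMaterial()
material.writesToDepthBuffer = true
material.colorBufferWriteMask = [] // depth only: invisible except for shadows
catcher.materials = [material]

let catcherNode = SCNNode(geometry: catcher)
catcherNode.eulerAngles.x = -.pi / 2 // lay the plane flat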
I am trying to write a little script to apply a texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it and put my image file in that SoTexture2 node. The problem I am facing is that the texture is applied correctly to two of the faces (say face 1 and face 2) in the Y-Z plane, but on the other four faces it just stretches the texture from the boundaries of those two faces.
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want, is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only more convenient for the application because OpenGL doesn't support this and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see that the texture is mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
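The question is about Open Inventor, but the principle is engine-agnostic. As an illustrative sketch, here is the same idea in Swift/SceneKit: one quad face with explicitly assigned texture coordinates, so the full texture maps onto that face instead of being auto-stretched across the bounding box (the image name is hypothetical):

import SceneKit
import UIKit

// Four shared vertices for one quad face...
let vertices: [SCNVector3] = [
    SCNVector3(0, 0, 0), SCNVector3(1, 0, 0),
    SCNVector3(1, 1, 0), SCNVector3(0, 1, 0)
]
// ...and explicit per-vertex texture coordinates covering the full texture.
let uvs: [CGPoint] = [
    CGPoint(x: 0, y: 0), CGPoint(x: 1, y: 0),
    CGPoint(x: 1, y: 1), CGPoint(x: 0, y: 1)
]
let indices: [Int32] = [0, 1, 2, 0, 2, 3] // two triangles

let quad = SCNGeometry(
    sources: [SCNGeometrySource(vertices: vertices),
              SCNGeometrySource(textureCoordinates: uvs)],
    elements: [SCNGeometryElement(indices: indices, primitiveType: .triangles)]
)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "texture.png") // hypothetical image
quad.materials = [material]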
I am developing a 2D match-3 game in XNA. The core logic and animations are done. I use a RenderTarget2D to draw the entire board. The board has 8 rows and 8 columns of 64x64 textures (the tiles), which can be clicked and moved. To capture mouse intersections, I use a SourceRectangle for each tile. Of course, the SourceRectangles have the same size as the textures: 64x64.
I would like to scale down the entire board, using the RenderTarget2D, to support different monitor resolutions and aspect ratios. First I draw all tiles into the RenderTarget2D. Then I scale down the RenderTarget2D with a float coefficient. Finally I draw the RenderTarget2D on the screen. As a result the entire board is scaled down properly (all textures are scaled down from 64x64 to 50x50, for example), but the SourceRectangles are not scaled; they remain 64x64, and mouse intersections are no longer captured for the proper tiles.
Why doesn't scaling the RenderTarget2D handle this? How can I solve this problem?
You should approach this problem differently. Your source rectangles for textures are just that — don't try to use them as button rectangles, or you will get in trouble like this.
Instead, use a different Rectangle hitboxRectangle, which will be the same size as your source rectangle initially, but will scale with your game window, and check intersections against it.
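The thread is about XNA, but the arithmetic is language-agnostic. Here is a sketch in Swift, using the 50/64 scale from the question as an example (names are illustrative):

import CoreGraphics

// Draw with the unscaled 64x64 source rectangle, but test input
// against a separate hit box scaled by the board's render scale.
let tileSize: CGFloat = 64
let boardScale: CGFloat = 50.0 / 64.0 // tiles rendered at 50x50

func hitbox(column: Int, row: Int) -> CGRect {
    // Tile bounds in unscaled board space...
    let source = CGRect(x: CGFloat(column) * tileSize,
                        y: CGFloat(row) * tileSize,
                        width: tileSize, height: tileSize)
    // ...mapped into screen space by scaling origin and size alike.
    return CGRect(x: source.minX * boardScale,
                  y: source.minY * boardScale,
                  width: source.width * boardScale,
                  height: source.height * boardScale)
}

// Usage: a screen-space mouse point now hits the correct tile.
let mouse = CGPoint(x: 120, y: 75)
let clickedTile = hitbox(column: 2, row: 1).contains(mouse)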
I have a texture 250px wide and 2000px high. A 250x250 part of it is drawn on screen according to various conditions (some kind of sprite sheet, yes). All I want is to draw it within a fixed destination rectangle with some rotation. Is it possible?
Yes. Here's how to effectively rotate your destination rectangle:
Take a look at the overloads for SpriteBatch.Draw.
Notice that none of the overloads that take a Rectangle as a destination take a rotation parameter. It's because such a thing does not make much sense. It's ambiguous as to how you want the destination rotated.
But you can achieve the same effect as setting a destination rectangle by careful use of the position and scale parameters. Combine these with the origin (centroid of scaling and rotation, specified in pixels in relation to your sourceRectangle) and rotation parameters to achieve the effect you want.
(If, on the other hand, you want to "fit" to a rectangle - effectively scaling after rotating - you would have to also use the transformMatrix parameter to Begin.)
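To make the position/scale/origin approach concrete, here is the arithmetic sketched in Swift (the rectangles are made-up values; in XNA you would pass the results to SpriteBatch.Draw):

import CoreGraphics

// Given a source rectangle and the destination rectangle you want,
// derive the equivalent position/origin/scale/rotation parameters.
let sourceRect = CGRect(x: 0, y: 250, width: 250, height: 250) // one frame
let destRect = CGRect(x: 100, y: 100, width: 125, height: 125)

// Scale that makes the source fill the destination.
let scale = CGSize(width: destRect.width / sourceRect.width,
                   height: destRect.height / sourceRect.height)

// Origin at the centre of the source rect (in source pixels),
// so rotation pivots around the sprite's centre.
let origin = CGPoint(x: sourceRect.width / 2, y: sourceRect.height / 2)

// Place that origin at the centre of the destination rectangle.
let position = CGPoint(x: destRect.midX, y: destRect.midY)

let rotation = CGFloat.pi / 4 // 45 degrees, about `origin`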
Now, your question isn't quite clear on this point: but if the effect you are after is more like rotating your source rectangle, that is not something you can achieve with plain ol' SpriteBatch.
The quick-and-dirty way to achieve this is to set a viewport that acts as your destination rectangle. Then draw your rotated sprite within it. Note that SpriteBatch's coordinate system is based on the viewport, not the screen.
The "nicer" (but much harder to implement) way to do it would be to not use SpriteBatch at all, but implement your own sprite drawing that will allow you to rotate the texture coordinates.
What is the purpose of the source rectangle parameter in the SpriteBatch.Draw() method?
MSDN says: A rectangle that specifies (in texels) the source texels from a texture. Use null to draw the entire texture.
What does that mean?
The idea of the sourceRectangle is to allow you to implement what is both a performance optimisation and an artist convenience by arranging multiple sprites into a single texture. This is known as a "Texture Atlas" or a "Sprite Sheet".
I explain why it is a performance optimisation in this answer. Basically it lets you reduce the number of texture-swaps. (So in the case of my illustration, if you're only drawing an animated character once, using a sprite-sheet will not improve performance.)
It also lets you implement tacky 2D special effects, like having a sprite "wipe" in.
A texel is more-or-less the same thing as a pixel in the texture (a "texture pixel", if you will). So, when you draw your sprite, you specify the top-left corner of your sprite within the texture, along with its width and height. (The same as if you selected it in an image editor.)
If you pass in null for your source rectangle, XNA will assume a source rectangle that covers the entire texture.
The origin you specify to Draw is also measured in texels from the upper-left corner of the source rectangle.
In a situation where you have a single texture that contains different frames (animated textures), you will want to specify the source rectangle, so that you can draw a single frame from a texture.
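For illustration, here is the same idea in Swift/SpriteKit, where SKTexture(rect:in:) plays the role of the source rectangle. Note that its rect is given in unit coordinates rather than texels; the sheet layout and asset name are made up:

import SpriteKit

// Extract one frame from a 4x4 sprite sheet. Unlike XNA's texel-based
// source rectangle, SKTexture(rect:in:) takes a unit-coordinate rect.
let sheet = SKTexture(imageNamed: "spritesheet") // hypothetical asset
let cols: CGFloat = 4
let rows: CGFloat = 4

// Frame at column 2, row 1 (counted from the bottom-left in SpriteKit).
let frameRect = CGRect(x: 2 / cols, y: 1 / rows,
                       width: 1 / cols, height: 1 / rows)
let frame = SKTexture(rect: frameRect, in: sheet)

let sprite = SKSpriteNode(texture: frame)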
The source rectangle defines the area of the texture that will be displayed. So if you have a 40x40 texture and your rectangle is (0, 0, 20, 20), only the top-left 20x20 quadrant of the texture will be displayed. If you specify null for the rectangle, you will draw the entire texture.
This can be helpful when drawing from a spritesheet (a collection of textures that are all put into one bigger texture), and also in image manipulation programs.