Background
I want to move an SCNNode and a UIView containing an image in sync. The UIView with its UIImageView is positioned over the node, so that they look like the texture of the SCNNode (a cube).
Code
let move: Float = 15
let moveCube = SCNAction.move(to: SCNVector3(x: cube.position.x - move, y: cube.position.y, z: cube.position.z), duration: 1)
cube.runAction(moveCube) // run the action on the cube node
What I tried / How I do it right now
I animate the UIView using:
let move: Float = 15
// projectPoint converts a point from the scene's 3D world space into the view's 2D space
let projMove = scnView.projectPoint(SCNVector3(x: move, y: 0, z: 0))
UIView.animate(withDuration: 1, delay: 0, options: .curveEaseOut, animations: {
    self.myView.center.x = CGFloat(-projMove.x)
}, completion: nil)
This works: the cube moves to the left and the UIView follows.
But I don't think this code is the best solution.
Question(s)
1. Is there a better way to move the cube left together with the UIView? I want to move both at the same time, ideally with one code segment.
2. Can I set one surface's texture (the front, for example) to the image instead? I want to set the image as only one side's texture.
3. Could I even set the overall texture to an image and then put the image above it using its alpha channel? Following up on #2, I would like to set the cube's texture to a color and project the image above that color (it has alpha layers, so the color should still be visible).
Thanks in advance :)
Is there a better way to move the cube left together with the UIView?
Yes. You can unproject the coordinates of the view and use the result as a reference for the movement of the cube. See unprojectPoint.
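A minimal sketch of that idea, assuming the scnView and myView names from the question (the z value picks a depth between the near and far clipping planes):
let screenPoint = SCNVector3(x: Float(myView.center.x), y: Float(myView.center.y), z: 0.5)
let worldPoint = scnView.unprojectPoint(screenPoint) // the view's 2D center, mapped back into 3D world space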
Can I set one surface's texture (the front, for example) to the image instead?
Yes, simply set the diffuse channel of the material of the cube to your UIImage.
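For example, assuming cube is the cube's node and image is your UIImage:
cube.geometry?.firstMaterial?.diffuse.contents = image // the image becomes the cube's texture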
Could I even set the overall texture to an image and then put the image above it using its alpha channel?
Maybe; I am not quite sure what you are talking about. Would you mind expanding on that? If I understand correctly, SpriteKit would be your best bet.
Here are updated answers for the comments:
I do use the unprojected points before projecting them in the SCNAction. And I meant more like moving both at once instead of a separate animation for each.
I don't think there is. You could animate a proxy property and have its setter update both the view and the node. You can also use blocks, but in the end you cannot link the two directly.
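You can, however, fire both animations from one place so they at least stay in sync. A sketch using the names from the question (the timing curves still have to be matched by hand):
SCNTransaction.begin()
SCNTransaction.animationDuration = 1
cube.position.x -= move // implicitly animated by the transaction
SCNTransaction.commit()

UIView.animate(withDuration: 1) {
    self.myView.center.x -= CGFloat(projMove.x) // mirror the move in 2D
}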
Well, I want to set the image only to one side of the cube.
You can simply provide an array of six materials: one with your image and five with a second, filler material. You'll need to play with the order to find where the image needs to be in the array.
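A sketch of that idea, assuming box is the SCNBox geometry and image the UIImage (for SCNBox the face order is commonly front, right, back, left, top, bottom, but verify against your scene as noted above):
let imageMaterial = SCNMaterial()
imageMaterial.diffuse.contents = image // the one side showing the image

let fillMaterial = SCNMaterial()
fillMaterial.diffuse.contents = UIColor.lightGray // filler for the other five sides

box.materials = [imageMaterial, fillMaterial, fillMaterial, fillMaterial, fillMaterial, fillMaterial]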
That relates to #2: I want to set the texture to a color and then set one side's texture to the image I want to use.
There are two ways to do this. You can use a shader that adds your image on top of a solid color, or you can make a second cube that is slightly smaller (by less than 1%) and give that cube the background color you want. Then use a transparency image on the larger one.
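A minimal sketch of the second (nested cube) approach; the sizes and color here are assumptions:
let innerBox = SCNBox(width: 0.99, height: 0.99, length: 0.99, chamferRadius: 0) // just under the outer cube's size
innerBox.firstMaterial?.diffuse.contents = UIColor.red // the backing color
cubeNode.addChildNode(SCNNode(geometry: innerBox)) // the outer cube keeps the transparency image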
Related
I'm currently working on an ARKit project where I would like to darken the actual camera feed so the objects in my 3D scene stand out more.
Two solutions I have found so far:
A) Manually applying CIFilter to the camera frames and setting those as background image to the SceneKit scene as answered in this SO post
The problem here is that fps tanks significantly.
B) Set a background color like so:
sceneView.scene.background.contents = UIColor(white: 0.0, alpha: 0.2)
Sadly, colors with alpha < 1 are still rendered as opaque, so no matter what alpha I set, I can't see any of the camera feed.
Can anyone think of a different trick to darken the camera feed?
Your option B doesn't work for two reasons:
the scene view is opaque, so there's nothing behind it for a partially transparent background color to blend with.
sceneView.scene.background is what actually displays the camera image, so if you set it to a color you're not displaying the camera image at all anymore.
Some other options (mostly untested) you might look into:
As referenced from the answer you linked, use SCNTechnique to set up multipass rendering. On the first pass, render the whole scene with the excludeCategoryMask (and your scene contents) set up to render nothing but the background, using a shader that dims (or blurs or whatever) all pixels. On the second pass, render only the node(s) you want to appear without that shader (use a simple pass-through shader).
Keep your option B, but make the scene view non-opaque. Set the view's backgroundColor (not the scene's background) to a solid color, and set the transparency of the scene background to fade out the camera feed against that color.
Use geometry to create a "physical" backdrop for your scene — e.g. an SCNPlane of large size, placed as a child of the camera at some fixed distance that's much farther than any other scene content. Set the plane's background color and transparency.
Use multiple views: the ARSCNView, made non-opaque and with a clear background, plus another view (not necessarily a SceneKit view) that just shows the camera feed. Mess with the other view (or drop in other fun things like UIVisualEffectView) to obscure the camera feed.
File a bug with Apple about sceneView.background not getting the full set of shading customization options that nodes and materials get (filters, shader modifiers, full shader programs) etc, without which customizing the background is much more difficult than customizing other aspects of the scene.
I achieved this effect by creating a SCNNode with a SCNSphere geometry and keeping it attached to the camera using ARSCNView.pointOfView.
override func viewDidLoad() {
    super.viewDidLoad()
    let sphereFogNode = Self.makeSphereNode()
    arView.pointOfView!.addChildNode(sphereFogNode) // attach to the camera so it follows the view
    view.addSubview(arView)
}

private static func makeSphereGeom() -> SCNSphere {
    let sphere = SCNSphere(radius: 5)
    let material = SCNMaterial()
    material.diffuse.contents = UIColor(white: 1.0, alpha: 0.7)
    material.isDoubleSided = true // render the inside of the sphere, where the camera sits
    sphere.materials = [material]
    return sphere
}

private static func makeSphereNode() -> SCNNode {
    SCNNode(geometry: makeSphereGeom())
}
Clipping Outside Sphere
This darkens the camera feed along with anything outside the bounds of the sphere. Note that hit testing (ARFrame.hitTest) does not respect the sphere boundary: you can still receive results from outside it.
Things outside your sphere are seen through the sphere's opacity, so even non-transparent content appears semi-transparent beyond it.
In the screenshot, the white part is the plane inside the sphere and the grey part is the plane outside it; the plane itself is solid white and non-transparent. I tried using SCNScene.fog* to clip SceneKit graphics outside the sphere, but fog doesn't occlude rendered content, it only affects its appearance. SCNCamera.zFar doesn't work either, as it clips based on Z-distance rather than the straight-line distance between the camera and the target.
Just make your sphere big enough and everything will look fine.
For those who want to implement rickster's option "Use geometry to create a 'physical' backdrop for your scene", here is my code (the plane and its transparency are set up in .scnassets):
guard let backgroundPlane = sceneView.scene.rootNode.childNode(withName: "background", recursively: false) else { return }
// Re-parent the plane from the scene root to the camera so it always follows the view
backgroundPlane.removeFromParentNode()
backgroundPlane.position = SCNVector3Make(0, 0, -2)
sceneView.pointOfView?.addChildNode(backgroundPlane)
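If you would rather build the plane in code than in .scnassets, a rough sketch (the size, color, transparency, and distance are all assumptions):
let plane = SCNPlane(width: 100, height: 100) // large enough to fill the view
let material = SCNMaterial()
material.diffuse.contents = UIColor.black
material.transparency = 0.3 // how strongly to darken the camera feed
plane.materials = [material]

let backgroundPlane = SCNNode(geometry: plane)
backgroundPlane.position = SCNVector3(x: 0, y: 0, z: -2) // fixed distance in front of the camera
sceneView.pointOfView?.addChildNode(backgroundPlane)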
Related
ARKit hide objects behind walls
SceneKit Culling Plane
So, to be clear on my goals, since I don't have any code to share: let's say I have an SCNNode positioned between the camera and another SCNNode. The first node is an SCNBox with no texture, so the second node can be seen behind it. I want to give the first node a transparent material, but have it also occlude all nodes behind it, as though it were opaque. In a regular scene this would mean you could see the scene background color (black, perhaps) through it, but I'm planning on doing this in ARKit, where it makes more sense, since you'd simply see the real world behind it.
You can use a material with a clear color:
extension SCNMaterial {
    convenience init(color: UIColor) {
        self.init()
        diffuse.contents = color
    }
    convenience init(image: UIImage) {
        self.init()
        diffuse.contents = image
    }
}

let clearMaterial = SCNMaterial(color: .clear)
boxNode.materials = [clearMaterial]
I've tested my idea from the comments and it seems to work, though not perfectly; I'll expand on that later.
To support the rendering process, SceneKit uses a depth buffer and renders a point only if it would land in front of what is already stored in that buffer. So we have to tell SceneKit to render your see-through cube first, then all the other nodes: leave your cube node's renderingOrder property at 0 (the default value), then set every other node's renderingOrder to a higher value, e.g. 1, 10... Normally you don't want transparent objects to write to the depth buffer, so that objects behind them stay visible, but that's not the case here, so leave your cube material's writesToDepthBuffer property at true (the default value). The last thing to do is make your cube transparent; you can use the default material and then add
cube.geometry?.firstMaterial?.transparency = 0.00000001
As I said before, this method is not perfect and feels more like a workaround, but it works. The reason we don't set the transparency to exactly 0 is that then it's as if the cube weren't there at all: fully transparent pixels are not written to the depth buffer.
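Putting the pieces together, a small sketch of the setup described above (occludedNode stands in for any node behind the cube):
cube.renderingOrder = 0 // the default: drawn first, filling the depth buffer
cube.geometry?.firstMaterial?.transparency = 0.00000001 // nearly invisible, but still writes depth
occludedNode.renderingOrder = 10 // drawn after the cube, so it is hidden where the cube is in front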
So I know how to change the color of an entire sprite node, but is there a way to change the color of only part of it, say the arms of a character, or is the only way to do this to assign the sprite a new image with the appropriate color? Thanks for the help in advance.
The only way to change the color of just part of an SKSpriteNode involves other nodes, and I only recommend it if the area you are recoloring is simple, such as a rect or circle. To do this, I also recommend using an SKNode to wrap its child nodes.
For example:
let character = SKNode()
let characterImage = SKSpriteNode(imageNamed: "yourImage.png")
// Set sizes, anchor points, and so on
character.addChild(characterImage)

let colorChanger = SKSpriteNode(color: .red, size: CGSize(width: 20, height: 60)) // color and size of the arms
colorChanger.position = CGPoint(x: 0, y: 0) // position it over the arms
colorChanger.name = "colorizer"
character.addChild(colorChanger)
With this, you can also change the color of the arms whenever you want, using childNode(withName:) to access colorChanger.
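For example:
if let arms = character.childNode(withName: "colorizer") as? SKSpriteNode {
    arms.color = .blue // recolor just the overlay
}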
But if the arms are too complicated, you do have to use separate images. Using an SKNode to wrap those images is recommended as well.
PS: If you're new to Swift and SpriteKit, the reason it's better to use SKNodes as wrappers is that they make it much easier to move, rotate, and so on, because you only need to change the position or rotation of one node instead of many.
I'm not aware of any way to change the color property of a specific part of the same SKSpriteNode. Something you could do is make textures in the colors you need and change the sprite's texture property instead of its color.
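A quick sketch of that approach (the sprite and texture names are assumptions):
let redArms = SKTexture(imageNamed: "character_redArms")
sprite.texture = redArms // swap in the pre-colored artwork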
So I am trying to get a very basic "flashlight"-style thing going in one of my games.
The way I was getting it to work was to have a layer on top of my game screen that draws a black rectangle with ~80% opacity, creating the look of darkness over my game scene.
ccDrawSolidRect(ccp(0,0), ccp(480,320), ccc4f(0, 0, 0, 0.8));
What I want to do is draw this rectangle EVERYWHERE on the screen, except for around a cone of vision that will represent the "light source".
What this would create would be a dark overlay on top of everything except for the light, giving it the illusion of a torch/light/flashlight.
The only way I can foresee this happening is by using ccDrawSolidPoly(), but since the position of the light source changes, so would the vertices for the poly.
Any suggestions on how to achieve this would be great.
You can use ccDrawSolidPoly() and avoid having to manually update vertices. For this you can create a new subclass of CCNode representing your light object, and do your custom shape drawing in its -(void)draw method.
The ccDraw...() functions will draw relative to the local sprite coordinates, so you can then move and rotate your new sprite to suit your needs and cocos2d will do the vertices transformations for you.
Update: I found out that you might be better off subclassing CCDrawNode instead of CCNode, as it has some facilities for raw OpenGL drawing (OpenGL's vertexArrayBuffer and vertexBufferObject internal variables and a buffer for vertices, their colors and their texCoords). If your stuff is very simple, maybe subclassing the plain CCNode is enough.
Could a PNG be used instead, as a mask on the layer above?
Like that binocular vision you sometimes see in cartoons?
Or a filter similar to a Photoshop mask that darkens as it grows outward towards the edge of the screen.
Just a thought, anyway...
A picture showing more of what you're trying to explain might be good, too.
I have written a 2D Jump&Run engine that produces a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3, or 4, according to the user's resolution.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading, and if you're on or moving to XNA 4.0, this might be worth reading too.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
You could use a pixelation effect. Draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image" but if you mean your end result is a texture then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to anti-aliasing the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's either the Point or the None TextureFilter. I'm at work so I'm trying to remember off the top of my head. I'll confirm one way or the other later today.