I'm developing an application that uses the SceneKit API, and I've run into a problem: I basically cannot apply a texture to a sphere and keep the texture's pre-defined size. I can either stretch the texture over the object's whole surface (SceneKit's default behavior) or repeat it. But what I want to achieve is something like a billiard ball:
Let's say I have a .png image of a white circle with the number "13" at its center. I want to place it on the sphere like the one in the picture. In other words, I want it scaled to a fixed size, not stretched over the whole surface.
I use the material.diffuse.contents property of SCNGeometry to set the texture, and I found the contentsTransform property in the documentation, which could probably help me sort this out, but I couldn't find an explanation of how to use it with a sphere.
Is this possible with pure SceneKit? Any help would be much appreciated.
You need a pre-modelled geometry (a polygonal sphere in your case) and a UV-mapped texture for it, made in 3D modelling software (Autodesk Maya, for instance).
Watch this short video to find out how to create a UV-mapped texture.
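Alternatively, if you want to stay in pure SceneKit, the contentsTransform property mentioned in the question can scale and offset the texture over the sphere's built-in UV mapping. A rough sketch, where the asset name and the scale/translation values are assumptions you would tune for your own image:

import SceneKit
import UIKit

let sphere = SCNSphere(radius: 1.0)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "number13")   // hypothetical asset name

// Scaling the texture coordinates up makes the image cover a smaller patch of the
// sphere; translating then moves that patch to where the "label" should sit.
// The values below are guesses to experiment with.
var transform = SCNMatrix4MakeScale(8, 8, 1)
transform = SCNMatrix4Translate(transform, -0.45, -0.45, 0)
material.diffuse.contentsTransform = transform

// Clamp instead of repeat so the image appears only once; note that the image's
// edge pixels get stretched over the rest of the sphere, so use a texture whose
// border matches the ball's base color.
material.diffuse.wrapS = .clamp
material.diffuse.wrapT = .clamp

sphere.firstMaterial = material
let ballNode = SCNNode(geometry: sphere)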
I'm building a 3D app that uses SceneKit. My scene will have various 3D objects and a moveable perspective camera.
The user can load a 2D image into the scene, which I will display on a 3D plane using the image as the material.
What I need is to initially show the image as if it were actually 2D, where its on-screen width and height in pixels match the image and it is not distorted by the camera perspective. So basically I need to know how to position and size that plane in relation to the camera to make it look 2D.
Thanks in advance for any tips :)
It's not clear where you are getting stuck.
If you're just looking for where to start, look at SCNPlane and SCNBillboardConstraint.
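A minimal sketch of that starting point, assuming a perspective camera; the helper name, the view-size parameter, and the default distance are assumptions to adapt:

import SceneKit
import UIKit

// Hypothetical helper: puts an image on an SCNPlane parented to the camera so that,
// at `distance`, it covers roughly the same number of screen points as the image's
// point size, i.e. it initially looks like an undistorted 2D image.
func addImagePlane(image: UIImage,
                   cameraNode: SCNNode,
                   viewSize: CGSize,
                   distance: Double = 5) {
    guard let camera = cameraNode.camera else { return }

    // Frustum size at `distance` for a perspective camera
    // (fieldOfView is vertical by default and is given in degrees).
    let fovRadians = Double(camera.fieldOfView) * .pi / 180
    let visibleHeight = 2 * distance * tan(fovRadians / 2)
    let visibleWidth = visibleHeight * Double(viewSize.width / viewSize.height)

    // Size the plane so its projection matches the image's size in points.
    let planeWidth  = CGFloat(visibleWidth)  * image.size.width  / viewSize.width
    let planeHeight = CGFloat(visibleHeight) * image.size.height / viewSize.height

    let plane = SCNPlane(width: planeWidth, height: planeHeight)
    plane.firstMaterial?.diffuse.contents = image

    let node = SCNNode(geometry: plane)
    node.position = SCNVector3(0, 0, -Float(distance))
    // Redundant while the node is parented to the camera, but keeps the plane facing
    // the camera if you later move it into world space instead.
    node.constraints = [SCNBillboardConstraint()]
    cameraNode.addChildNode(node)
}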
I don't know SceneKit, so what I suggest might not be doable there, but could you perhaps use an orthographic camera projection? It removes a lot of the visual depth from a scene; combining that with some flat lighting might accomplish the look you are going for.
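For reference, switching a SceneKit camera to an orthographic projection is straightforward; a minimal sketch, where scene is assumed to be your existing SCNScene and the orthographicScale value is a guess to tune:

import SceneKit

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.usesOrthographicProjection = true
// Half of the visible height in world units; tune so the plane fills the area you want.
cameraNode.camera?.orthographicScale = 5
cameraNode.position = SCNVector3(0, 0, 10)
scene.rootNode.addChildNode(cameraNode)   // scene: your existing SCNScene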
I've been trying without success to extract face features, for instance the mouth, from ARSCNFaceGeometry in order to change their color or add a different material.
I understand I need to create an SCNGeometry, for which I have the SCNGeometrySource, but I haven't been able to create the SCNGeometryElement.
I've tried creating it from the ARFaceAnchor in update(from faceGeometry: ARFaceGeometry), but so far without success.
I would really appreciate someone's help.
ARSCNFaceGeometry is a single mesh. If you want different areas of it to be different colors, your best bet is to apply a texture map (which you do in SceneKit by providing images for material property contents).
There’s no semantic information associated with the vertices in the mesh — that is, there’s nothing that says “this point is the tip of the nose, these points are the edge of the upper lip, etc”. But the mesh is topologically stable, so if you create a texture image that adds a bit of color around the lips or a lightning bolt over the eye or whatever, it’ll stay there as the face moves around.
If you need help getting started on painting a texture, there are a couple of things you could try:
Create a dummy texture first
Make a square image and fill it with a double gradient, such that the red and blue components of each pixel are based on that pixel's x and y coordinates. Or some other distinctive pattern. Apply that texture to the model, and see how it looks — the landmarks in the texture will guide you where to paint.
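A rough sketch of generating such a debug texture in code; the helper name and size are arbitrary, and you could just as well paint the image in an editor:

import UIKit

// Builds a small square image where red encodes the x coordinate and blue the y
// coordinate. Wherever a color shows up on the face, you know which part of the
// texture maps there, which tells you where to paint.
func makeUVDebugTexture(size: Int = 256) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: size, height: size))
    return renderer.image { context in
        for y in 0..<size {
            for x in 0..<size {
                UIColor(red: CGFloat(x) / CGFloat(size),
                        green: 0.2,
                        blue: CGFloat(y) / CGFloat(size),
                        alpha: 1).setFill()
                context.fill(CGRect(x: x, y: y, width: 1, height: 1))
            }
        }
    }
}

// faceGeometry is assumed to be your ARSCNFaceGeometry instance:
// faceGeometry.firstMaterial?.diffuse.contents = makeUVDebugTexture()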
Export the model
Create a dummy ARSCNFaceGeometry using the init(blendShapes:) initializer and an empty blendShapes dictionary (you don’t need an active ARFaceTracking session for this, but you do need an iPhone X). Use SceneKit’s scene export APIs (or Model I/O) to write that model out to a 3D file of some sort (.scn, which you can process further on the Mac, or something like .obj).
Import that file into your favorite 3D modeling tool (Blender, Maya, etc) and use that tool to paint a texture. Then use that texture in your app with real faces.
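A rough sketch of that export step, assuming you run it once on a face-tracking-capable device; the output file name and location are placeholders:

import ARKit
import Metal
import SceneKit

// Face geometry with a neutral expression (no running AR session needed, but this
// only succeeds on devices that support face tracking).
guard let device = MTLCreateSystemDefaultDevice(),
      let scnFaceGeometry = ARSCNFaceGeometry(device: device),
      let neutralFace = ARFaceGeometry(blendShapes: [:]) else {
    fatalError("Face geometry is unavailable on this device")
}
scnFaceGeometry.update(from: neutralFace)

// Wrap it in a scene and write it out; the .scn (or .obj/.usdz) file can then be
// opened in Blender/Maya to paint a texture against the real UV layout.
let exportScene = SCNScene()
exportScene.rootNode.addChildNode(SCNNode(geometry: scnFaceGeometry))

let url = FileManager.default.temporaryDirectory.appendingPathComponent("faceTemplate.scn")
_ = exportScene.write(to: url, options: nil, delegate: nil, progressHandler: nil)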
Actually, the above is sort of an oversimplification, even though it’s the simple answer for common cases. ARSCNFaceGeometry can actually contain up to four submeshes if you create it with the init(device:fillMesh:) initializer. But even then, those parts aren’t semantically labeled areas of the face — they’re the holes in the regular face model, flat fill-ins for the places where eyes and mouth show through.
I am creating a model of Saturn and I'm having problems when creating the rings. I found this asset
but when I try to set it as a diffuse, it projects like this
How can I control the way a texture projects over a geometry?
I found a solution. After I replaced the cylinder with a torus and rotated the image 90 degrees, Xcode did the mapping itself.
But there must be a better way.
This isn't specifically a SceneKit or iOS issue; the same would apply in any 3D package.
You can control the way a texture projects onto a geometry by using UV mapping. In practice that means you map the vertices and faces of the model onto the texture in software such as Blender. The texture you're using now is meant to be tiled, but because the lines on it are perfectly straight it will never look optimal.
To save yourself some trouble, use a texture that shows the entire ring from the top/above.
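For example, a square top-down image of the whole ring (transparent outside the ring and in the central gap) maps 1:1 onto a flat plane with no UV editing at all. A sketch, where the asset name, size, and parent node are placeholders:

import SceneKit
import UIKit

let ringSize: CGFloat = 4.6                      // outer ring diameter in scene units
let rings = SCNPlane(width: ringSize, height: ringSize)
rings.firstMaterial?.diffuse.contents = UIImage(named: "saturn_rings_top")  // hypothetical asset
rings.firstMaterial?.isDoubleSided = true        // visible from below as well

let ringsNode = SCNNode(geometry: rings)
ringsNode.eulerAngles.x = -.pi / 2               // lay the plane flat around the equator
saturnNode.addChildNode(ringsNode)               // saturnNode: your existing Saturn sphere node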
I think the best way is to use SCNTube.
I created an SCNPlane in my scene and set its diffuse contents to an image. Now I need to display other details on the plane (e.g. its name, and other simple drawings like an arrow).
This seems like a simple task but I could not find any relevant information. Should I create other geometries and attach them to the plane? If so, how do I do that without z-fighting occurring?
Materials in SceneKit are very flexible. When you assign something to the diffuse or ambient property, you are setting the contents of an SCNMaterialProperty. From Apple's docs, you can assign:
An image object, or a path or URL to an image file
A specially formatted image or array of six images to be used as a cube map
A Core Animation layer or layer hierarchy, which itself may contain animated content
A SpriteKit texture providing a static image, or an entire SpriteKit scene rendering animated 2D content.
I haven't gotten Core Animation layers to work properly (maybe someone else can give more information on that), but I have been using SpriteKit and it is really easy to set up; a sketch follows the steps below.
Create an SKScene and animate it as much as you want
Set the size of your scene; SceneKit will scale it up if your plane is bigger than that size
Assign it to your material's diffuse property
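A sketch of those three steps, with assumed sizes and content; the label text and positions are placeholders:

import SceneKit
import SpriteKit

// 1. Create an SKScene and put the 2D content in it. Because the image and the text
//    live in one material, there is no extra geometry and no z-fighting.
let skScene = SKScene(size: CGSize(width: 400, height: 200))
skScene.backgroundColor = .clear

let label = SKLabelNode(text: "Plane name")
label.fontSize = 40
label.position = CGPoint(x: 200, y: 120)
skScene.addChild(label)

// 2. The scene size above is what SceneKit stretches over the plane.
// 3. Assign the SKScene as the diffuse contents of the plane's material.
let plane = SCNPlane(width: 2, height: 1)        // same aspect ratio as the SKScene
plane.firstMaterial?.diffuse.contents = skScene
plane.firstMaterial?.isDoubleSided = true

let planeNode = SCNNode(geometry: plane)
// scene.rootNode.addChildNode(planeNode)        // add to your existing SCNScene

If the SpriteKit content comes out flipped vertically, adjusting the material's contentsTransform is a common workaround.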
Hello, I can't for the life of me work out how to add a 2D image to the scene using SceneKit. Is it even possible? What I'm trying to accomplish is to have a 3D plane flying over a 2D background image, but the background image can't cover the whole screen. Thanks to anyone who can help.
Others have given plenty of good suggestions above.
Let me summarize each solution with different scenarios:
Using scnScene.background: Easiest, but fullscreen and static. You cannot display it in a smaller area of the view, nor can you customize its transform.
Using scnView.overlaySKScene to display a 2D SKScene and adding an image sprite to it (see the sketch after this list). This is probably what fits your scenario. If you want static content but with configurable transforms, this is a good choice. Notably, overlaySKScene is the approach Apple recommends for building a HUD over your 3D scene.
However, if you want the 2D content to track 3D movement, you need to do a coordinate conversion from 3D space to 2D space every frame. The bad news is that the job is done on the CPU, which adds a performance cost. (Believe me, this frustrates me too!)
Using an SCNNode with SCNPlane geometry, setting diffuse.contents to the image, and adding an SCNBillboardConstraint to the node to ensure it's always facing the camera.
Most flexible. However, if your camera is not orthographic, the picture will zoom as the camera's field of view changes, which doesn't look 2D but rather like a hanging 3D billboard.
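A minimal sketch of the overlaySKScene mechanism from option 2, assuming an existing scnView and a placeholder image name:

import SceneKit
import SpriteKit

// scnView is your existing SCNView hosting the 3D scene.
let overlay = SKScene(size: scnView.bounds.size)
overlay.scaleMode = .resizeFill

// A 2D sprite covering only part of the screen; size and position are placeholders.
let sprite = SKSpriteNode(imageNamed: "background")
sprite.size = CGSize(width: 300, height: 200)
sprite.position = CGPoint(x: overlay.size.width / 2, y: overlay.size.height / 2)
overlay.addChild(sprite)

scnView.overlaySKScene = overlay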
SceneKit and SpriteKit play very nicely together.
Implement the background 2D image as a SceneKit node with SCNPlane geometry, using your image as the plane's material. If you want to get fancier, use a full, live SpriteKit scene as the SCNPlane's material. Place that node at the far end of your camera's frustum, with your 3D aircraft in front of it.
You might also consider providing a cube map (skybox) as your scene's background. See SCNScene.background.
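For the cube-map variant, a sketch with placeholder asset names, assuming an existing scene (check the SCNMaterialProperty documentation for the expected face order):

import SceneKit
import UIKit

// Six square images, one per face of the cube map.
let faceImages = ["sky_right", "sky_left", "sky_up", "sky_down", "sky_back", "sky_front"]
scene.background.contents = faceImages.compactMap { UIImage(named: $0) }   // scene: your existing SCNScene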
You can use the background property on SCNScene to set a background image.
scnScene.background.contents = UIImage(named: "myImage")
Contents
You can set a value for this property using any of the following types:
A color (NSColor/UIColor or CGColorRef), specifying a constant color across the material’s surface
An image (NSImage/UIImage or CGImageRef), specifying a texture to be mapped across the material’s surface
An NSString or NSURL object specifying the location of an image file
An array of six images (NSArray), specifying the faces of a cube map
A Core Animation layer (CALayer)
A texture (SKTexture, MDLTexture, MTLTexture, or GLKTextureInfo)
A Sprite Kit scene (SKScene)