I am creating a model of Saturn and I'm having problems creating the rings. I found this asset
but when I try to set it as a diffuse, it projects like this
How can I control the way a texture projects over a geometry?
I found a workaround. After I replaced the cylinder with a torus and rotated the image 90 degrees, Xcode did the mapping itself.
But there must be a better way.
This isn't specifically a SceneKit or iOS issue; the same would apply in any 3D package.
You can control the way a texture projects onto a geometry by using UV mapping. In practice, that means you map the vertices and faces of the model onto the texture in software such as Blender. The texture you're using now is meant to be tiled, but because its lines are perfectly straight it will never look optimal.
To save yourself some trouble, use a texture that shows the entire ring from the top/above.
I think the best way is to use SCNTube.
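For example, here's a minimal SceneKit sketch of that idea. The radii, the "saturn_rings" asset name, and the parent saturnNode are all illustrative assumptions, and you may still need to adjust how the texture wraps around the tube's walls and caps:

```swift
import UIKit
import SceneKit

// A short, wide tube standing in for the ring disc around the planet.
let rings = SCNTube(innerRadius: 1.2, outerRadius: 2.3, height: 0.01)

let ringMaterial = SCNMaterial()
ringMaterial.diffuse.contents = UIImage(named: "saturn_rings") // hypothetical asset
ringMaterial.isDoubleSided = true  // keep the rings visible from above and below

rings.materials = [ringMaterial]

let ringsNode = SCNNode(geometry: rings)
saturnNode.addChildNode(ringsNode)  // `saturnNode` is your existing planet node
```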
I'm developing an application that uses the SceneKit API, and I've run into the problem that I basically cannot apply a texture to a sphere object while keeping the texture's pre-defined size. I'm able to either scale the texture up to the object's surface (SceneKit's default behavior) or repeat it. But what I want to achieve is similar to a billiard ball:
Let's say I have a .png image of a white circle with the number "13" at the center of it. I want to place it like the one in the picture. Generally, I want it scaled to a fixed size, not stretched over the whole surface.
I use the material.diffuse.contents property of the geometry's material to set the texture, and I found the contentsTransform property in the documentation, which can probably help me sort it out, but I didn't find an explanation of how to use it with a sphere object.
Is this something that's possible with pure SceneKit? Any help would be much appreciated.
You need pre-modelled geometry (a polygonal sphere in your case) and a UV-mapped texture for it, made in 3D modelling software (Autodesk Maya, for instance).
Watch this short movie to find out how to get a UV-mapped texture.
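For what it's worth, the contentsTransform property the question mentions can also approximate a fixed-size decal without external tooling. A rough sketch, where the scale and translation values and the "ball13" asset name are illustrative assumptions:

```swift
import UIKit
import SceneKit

let sphere = SCNSphere(radius: 1.0)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "ball13") // hypothetical decal image

// Clamping stops the image from tiling across the rest of the sphere. Note
// that .clamp smears the image's border pixels over the remaining surface,
// so this works best when the border is a solid background color.
material.diffuse.wrapS = .clamp
material.diffuse.wrapT = .clamp

// Scaling texture-coordinate space by 4 shrinks the image to about a quarter
// of the UV range per axis; the translation re-centers it at uv (0.5, 0.5).
var transform = SCNMatrix4MakeScale(4, 4, 1)
transform.m41 = -1.5  // translation in u
transform.m42 = -1.5  // translation in v
material.diffuse.contentsTransform = transform

sphere.materials = [material]
```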
I'm interested in processing data from the TrueDepth camera. I need to obtain the data of a person's face, build a 3D model of the face, and save this model as an .obj file.
Since the 3D model needs to include the person's eyes and teeth, ARKit / SceneKit is not suitable, because ARKit / SceneKit do not fill these areas with data.
But with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in the .obj format.
I tried to take this project as a basis:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
In this project, the work with the TrueDepth camera is done using Metal, but if I'm not mistaken, an MTKView rendered using Metal is not a 3D model and cannot be exported as .obj.
Is there a way to export an MTKView to an SCNScene, or directly to .obj?
If there is no such method, how can I make a 3D model from AVDepthData?
Thanks.
It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that — a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
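To make the height-map point concrete, here's a rough sketch of unprojecting one AVDepthData frame into camera-space points. The fx/fy/cx/cy intrinsics are illustrative placeholders; the real values come from depthData.cameraCalibrationData:

```swift
import AVFoundation
import simd

// Rough sketch: turn one AVDepthData frame into camera-space points.
func heightMapPoints(from depthData: AVDepthData) -> [SIMD3<Float>] {
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = depth.depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }

    let fx: Float = 500, fy: Float = 500               // placeholder focal lengths
    let cx = Float(width) / 2, cy = Float(height) / 2  // placeholder principal point

    var points: [SIMD3<Float>] = []
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float.self)
        for x in 0..<width {
            let z = row[x]                             // distance from camera, in meters
            guard z.isFinite, z > 0 else { continue }
            // Unproject through a pinhole model: one point per pixel, so the
            // result is a relief / height map, not a full 3D model.
            points.append(SIMD3((Float(x) - cx) * z / fx,
                                (Float(y) - cy) * z / fy,
                                z))
        }
    }
    return points
}
```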
If you want more of a truly-3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does — using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...
It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this (sketched in full after the list):
Get ARFaceGeometry from a face tracking session.
Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes the texture coordinate and triangle index arrays never change, so you only need to create those once — vertices you have to update every time you get a new frame.)
Create a MDLSubmesh from the index buffer, and a MDLMesh from the submesh plus vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)
Create an empty MDLAsset and add the mesh to it.
Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
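Here's a minimal sketch of those five steps, assuming you already have an ARFaceGeometry from a running session; error handling and the per-frame buffer reuse noted above are omitted:

```swift
import ARKit
import ModelIO

func exportFaceModel(_ face: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Steps 1-2: pack the face geometry's arrays into Model I/O mesh buffers.
    let vertexData = Data(bytes: face.vertices,
                          count: face.vertices.count * MemoryLayout<SIMD3<Float>>.stride)
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

    let texData = Data(bytes: face.textureCoordinates,
                       count: face.textureCoordinates.count * MemoryLayout<SIMD2<Float>>.stride)
    let texBuffer = allocator.newBuffer(with: texData, type: .vertex)

    let indexData = Data(bytes: face.triangleIndices,
                         count: face.triangleIndices.count * MemoryLayout<Int16>.stride)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Describe the two vertex streams so Model I/O knows their layout.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD2<Float>>.stride)

    // Step 3: submesh from the index buffer, mesh from submesh plus vertex buffers.
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: face.triangleIndices.count,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)
    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, texBuffer],
                       vertexCount: face.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    // Optional: have Model I/O generate vertex normals.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)

    // Steps 4-5: wrap in an asset and export; the .obj extension selects the format.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```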
That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your needs. If you do want to involve SceneKit and Metal you can probably skip a few steps, as in the second sketch below:
Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.
Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
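Sketched under the same assumptions (a face tracking session supplying ARFaceGeometry, plus a Metal device at hand):

```swift
import ARKit
import ModelIO
import SceneKit.ModelIO

func exportFaceModel(_ face: ARFaceGeometry, device: MTLDevice, to url: URL) throws {
    guard let scnFace = ARSCNFaceGeometry(device: device) else { return }
    scnFace.update(from: face)                // SceneKit geometry from the session data

    let mesh = MDLMesh(scnGeometry: scnFace)  // Model I/O view of that geometry
    let asset = MDLAsset()                    // then steps 4-5 from the earlier list
    asset.add(mesh)
    try asset.export(to: url)
}
```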
Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you because none of them do that. So, some food for thought (the last two ideas are sketched after the list):
Consider whether that's a strong requirement?
Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit?
Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face.)
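If you go the last two routes, the per-frame update might look roughly like this; the node names are hypothetical and the jaw angle is illustrative:

```swift
import ARKit
import SceneKit

// Called each time the face anchor updates. `leftEyeNode` and `rightEyeNode`
// are sphere nodes, `jawNode` carries the pre-made teeth model, all children
// of the ARKit-provided face node (hypothetical names).
func updateCheats(for faceAnchor: ARFaceAnchor,
                  leftEyeNode: SCNNode, rightEyeNode: SCNNode, jawNode: SCNNode) {
    // Eyes: park the spheres at the transforms ARKit reports.
    leftEyeNode.simdTransform = faceAnchor.leftEyeTransform
    rightEyeNode.simdTransform = faceAnchor.rightEyeTransform

    // Teeth: drive a single open-shut joint from the jawOpen coefficient (0...1).
    if let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue {
        jawNode.eulerAngles.x = -jawOpen * (.pi / 8)  // illustrative maximum angle
    }
}
```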
I would like to be able to write pixels to the image of an SKSprite in iOS7. How do I do this?
Applications? Graphing, for example. Random images. Perhaps applying damage effects to a sprite.
You can't directly write to an SKSpriteNode's pixel data in iOS 7. (This is called out explicitly in Apple's WWDC 2013 videos about Sprite Kit, which I highly recommend.) The only thing you can do is change its texture property. The Apple docs on sprites give a variety of techniques for doing that.
If you really need to programmatically create an image, you can always do so with a pixel buffer and then make it into an SKTexture with textureWithData:size: and related methods. For explosions and damage effects, though, there are probably better ways to do this, such as particle systems or masking out or combining the underlying sprite with other sprites.
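For instance, a sketch of the pixel-buffer route, written here in Swift (the question's iOS 7 context would use the equivalent Objective-C textureWithData:size:); the noise pattern is just for illustration:

```swift
import SpriteKit

// Build an SKTexture from a raw RGBA8 pixel buffer, then swap it onto a sprite.
func randomNoiseTexture(width: Int, height: Int) -> SKTexture {
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    for i in stride(from: 0, to: pixels.count, by: 4) {
        pixels[i]     = UInt8.random(in: 0...255) // red
        pixels[i + 1] = UInt8.random(in: 0...255) // green
        pixels[i + 2] = UInt8.random(in: 0...255) // blue
        pixels[i + 3] = 255                       // opaque alpha
    }
    return SKTexture(data: Data(pixels), size: CGSize(width: width, height: height))
}

// Usage: replace the sprite's texture rather than writing to it in place.
// sprite.texture = randomNoiseTexture(width: 64, height: 64)
```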
How to draw a line in Sprite-kit
I didn't know you could draw lines and such directly onto nodes or scenes. This works for my purposes.
In the past day of my foray into OpenGL ES 2.0, while attempting to apply two projective textures -- one sprite animation and one video file texture -- to a skybox, I started simply pounding my hands on the keyboard like stubs, and miraculously it all started working.
However, the texture created from the video file is flipped upside-down. In other words, the texture coordinates for (0,0) seem to be mapping to (0,1), and vice-versa.
The function which creates the video file texture from a CVImageBufferRef, CVOpenGLESTextureCacheCreateTextureFromImage(), includes a parameter "CFDictionaryRef textureAttributes."
CVOpenGLESTextureCache.h helpfully explains: "A CFDictionaryRef containing attributes to be used for creating the CVOpenGLESTexture objects. May be NULL."
I immediately thought of GLKTextureLoader, which allows you to pass in an options dictionary, with one of the available options being used to flip the texture around.
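For reference, that GLKTextureLoader route looks like this; imageURL is a placeholder:

```swift
import GLKit

// GLKTextureLoader's origin-bottom-left option flips the loaded image vertically.
let options: [String: NSNumber] = [GLKTextureLoaderOriginBottomLeft: true]
let texture = try GLKTextureLoader.texture(withContentsOf: imageURL, options: options)
```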
So, I'm a bit confused on two points:
Will passing in a CFDictionaryRef of attributes allow me to easily change things about the texture, like rotation? Or does it somehow mean 'attribute' in the shader sense? (I don't think it's likely to mean the shader sense, but I also think it's odd that it calls them attributes and not options.)
Is there a list somewhere of the key/value pairs that will tell it to do useful things?
I wanted to look into this before finding some other way to flip it around, since if it's possible to do it here, it seems like it would be the most straightforward way, if the procedure is indeed parallel to GLKTextureLoader's options.
After reading through Apple's RosyWriter sample code again, I've realized that after creating the texture with CVOpenGLESTextureCacheCreateTextureFromImage(), they flip the texture around by modifying the texture coordinates of the vertices.
Since I'm projecting the texture and computing the texture coordinates in the vertex shader, I think the simplest solution for me will be to flip the actual movie file asset around before I drop it into Xcode. So that's probably what I'll do for each movie; I just realized what a simple solution that is. That way I don't need to fork my vertex shader code into versions for projections that do and don't need to be rotated.
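In other words, the fix is v' = 1 - v on each vertex's texture coordinate rather than touching the image data. A sketch, with an illustrative triangle-strip quad layout:

```swift
// Texture coordinates for a two-triangle-strip quad, interleaved as (u, v).
let normalTexCoords: [Float]  = [0, 0,   1, 0,   0, 1,   1, 1]
// The same quad sampling a CVOpenGLESTexture that arrives upside-down:
let flippedTexCoords: [Float] = [0, 1,   1, 1,   0, 0,   1, 0]  // v' = 1 - v
```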
I'd still really appreciate clarification on the attributes argument though, if anybody has info on that.
I want to simulate stroking a carpet: you'd have a graphic of a furry carpet, and with your finger you could move around and stroke it. I need to shift pixels and create some fake distortion around where I am touching.
Anyone have any tips?
Firstly, I guess: do I have enough to work with, assuming I have one JPEG of the material? There's no skeleton or 3D file, just a flat image.
This can also be improved with 'fur rendering'.
I have some examples:
http://www.ozone3d.net/benchmarks/fur/
http://www.xbdev.net/directx3dx/specialX/Fur/index.php
or new demo from NVidia:
http://www.youtube.com/watch?v=2Fp5N-pOxKA - around 35sec
Sounds like a typical task to be solved with OpenGL shaders.
As MrTJ says: shaders are your key here.
Apart from your diffuse texture, use a second texture as your "carpet" map that you modify. Maybe use it like a normal map, storing a directional vector per texel.
Use your "carpet" map in your shader and distort the diffuse lookup however you like to create your desired carpet effect.
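A rough sketch of that idea as a GLSL ES fragment shader, held in a Swift string for glShaderSource-style loading; the uniform names and the 0.02 strength are illustrative assumptions:

```swift
let carpetFragmentShader = """
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uDiffuse;   // the flat carpet photo
uniform sampler2D uCarpetMap; // per-texel stroke direction, RG packed into 0..1

void main() {
    // Decode the stored direction from 0..1 back to -1..1, then offset the
    // diffuse lookup with it to fake fibres pushed around by the finger.
    vec2 dir = texture2D(uCarpetMap, vTexCoord).rg * 2.0 - 1.0;
    gl_FragColor = texture2D(uDiffuse, vTexCoord + dir * 0.02);
}
"""
```

Touches would then write into the "carpet" map (for example, stamping the stroke direction under the finger each frame) rather than into the photo itself.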