I have used cage transform in the past, and it worked fine.
I am now in a situation where I cannot cage-transform an object.
Instead of transforming, I can only create a cage, and when I finish the cage, it automatically transforms.
I have uploaded a video here.
What am I missing?
I don't see it transforming. When the cage is closed, something happens, but the image is not changed and the cage remains. From that point on you can alter the cage, and this will alter the image. This is indicated/controlled by this widget in the tool options:
By the way, if you are mapping a rectangle to a 4-point polygon (or vice versa), using the Perspective tool can be more accurate and possibly faster.
What I ACTUALLY wanted to do was to use the Universal transform tool:
Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit we do have a method for that particular functionality—nodesInsideFrustum(pointOfView).
Our internal solution is to create a big fake bounding box in front of the camera. We then check for intersections between this "frustum" bounding box and each entity's bounding box. That is, of course, a bit cumbersome and inaccurate. I wonder if someone has a better solution and is willing to share it.
You could combine two ARView methods:
- ARView.project(position) to get the 2D point in screen space
- ARView.bounds.contains(point) to know if it's visible on screen

But that's not enough; you also have to check whether the object is behind you:

- Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to get the entity's position in camera space
- the sign of localPosition.z tells you whether it's in front of or behind the camera
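Putting those checks together, here is a minimal sketch. It assumes arView is your ARView and cameraAnchor is an AnchorEntity(.camera) that you have already added to the scene:

```swift
import RealityKit
import UIKit

/// Rough visibility test for a single entity, following the steps above.
func isVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // 1. Reject entities behind the camera. In camera space the view
    //    direction is -Z, so anything visible has a negative local z.
    let localPosition = entity.position(relativeTo: cameraAnchor)
    guard localPosition.z < 0 else { return false }

    // 2. Project the entity's world position into screen space.
    //    project(_:) returns nil when the point cannot be projected.
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
        return false
    }

    // 3. Check whether the projected point lies inside the view's bounds.
    return arView.bounds.contains(screenPoint)
}
```

Note that this only tests the entity's origin, not its full bounding box, so a large object that is partially on screen can still be reported as invisible.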
I'm trying to create polygons with an inner border in Konva.
I found this example of doing it with globalCompositeOperation, which works well in Konva as long as there is only one shape. As soon as I try to add a second shape, this no longer works and the first shape disappears.
It would work if I were to use a different layer for every shape, but of course that's not a solution that scales well.
I tried using a temporary layer as is done in the example but couldn't get it to work.
So I found this example of using group.cache(), which works fine ... until I try to scale the stage, at which point I would have to refresh the cache; otherwise I only get the scaled-up cache, which looks bad.
This codesandbox illustrates the problem. (Please note that this uses simple triangles; in reality I work with arbitrary polygons.)
So is there a way to use cache with scaling? Or alternatively a better way to use globalCompositeOperation with multiple shapes in the same layer? Or some alternative solution?
I found a solution: calling group.cache({pixelRatio: scaleFactor}). I updated the sandbox.
No idea if this is the best solution, but it works.
The only doc I can find doesn't say much:
A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
I did some experimenting, and I can see that it shifts some "center". But I want to know what exactly it controls in the underlying pixelation algorithm, so that I can use it knowledgeably instead of blindly fumbling.
I think it shifts where the initial sampling point is taken, so that, combined with the scale parameter, it adjusts which pixels are pulled out for the pixelation.
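For experimenting in code, here is a small sketch using the CIFilterBuiltins API; the center and scale values passed in are arbitrary, and inputImage is assumed to be a CIImage you already have:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Pixellate an image, exposing the two parameters in question.
// Per the hypothesis above, shifting `center` moves the origin of the
// sampling grid, so block boundaries slide across the image, while
// `scale` sets the size of each block.
func pixellate(_ inputImage: CIImage, center: CGPoint, scale: Float) -> CIImage? {
    let filter = CIFilter.pixellate()
    filter.inputImage = inputImage
    filter.center = center
    filter.scale = scale
    return filter.outputImage
}
```

Rendering the result with a couple of different center values (e.g. shifting x by half the scale) makes the grid shift easy to see.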
You can try it out in Quartz Composer (or even Acorn, an app I wrote) and futz around with the parameters to get a good idea of what it does.
Although I am quite experienced with most iOS frameworks, I have no clue when it comes to 3D modelling. I have worked with SpriteKit, but never with something like SceneKit.
Now a customer wants a very ambitious menu involving a 3D object, an 'icosahedron' to be exact. I want it to look something like this:
So I just want to draw the lines, and grey out the 'see-through' lines on the back. Eventually I want the user to be able to freely rotate the object in 3D.
I already found this question with an example project attached, but this just draws a simple cube: Stroke Width with a SceneKit line primitive type
I have no clue how to approach a more complex shape.
Any help in the right direction would be appreciated! I don't even need to use SceneKit, but it seemed like the best approach to me. Any other suggestions are welcome.
To build an icosahedron you can use SCNSphere and set its geodesic property to YES.
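In Swift that could look like the sketch below; note that the segmentCount value needed to keep the unsubdivided icosahedron is an assumption you may need to tune:

```swift
import SceneKit

let sphere = SCNSphere(radius: 1.0)
sphere.isGeodesic = true   // tessellate from an icosahedron instead of latitude/longitude bands
sphere.segmentCount = 1    // keep subdivision low to stay close to the raw icosahedron (assumption)

let node = SCNNode(geometry: sphere)
// Add `node` to your scene and attach a pan/rotation gesture so the
// user can spin it freely.
```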
Using shader modifiers to draw the wireframe (as described in Stroke Width with a SceneKit line primitive type) is a good idea.
But in your case a given line is not always plain or always dotted; it depends on the orientation of the icosahedron. To solve that you can rely on gl_FrontFacing to determine whether the edge belongs to a front-facing or a back-facing triangle.
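As a rough illustration of the gl_FrontFacing part (this assumes the GLSL renderer, since gl_FrontFacing is a GLSL built-in, and the 0.3 dimming factor is arbitrary):

```swift
// Fragment shader modifier that greys out fragments of back-facing
// triangles, so edges seen "through" the shape render dimmer.
let dimBackFaces = """
if (!gl_FrontFacing) {
    _output.color.rgb *= 0.3;
}
"""
sphere.shaderModifiers = [.fragment: dimBackFaces]
```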
I have a single snowflake image that I would like to replicate to create snowfall on the screen. We can set the instanceCount to X to create a large number of snowflakes, and we can set the instanceTransform to put each instance some distance from the next.

However, I am not clear on how to make them fall down. Does anyone understand whether this class was intended to be used for something like this, and if so, how it should properly be done?
You apply the animation to the original sublayer; all the replicated layers follow the same animation, offset in time by instanceDelay and in space by instanceTransform.
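A minimal sketch of that approach, assuming snowflakeImage and containerView are yours (the instance count, sizes, and timings are arbitrary):

```swift
import UIKit

func makeSnowfall(in containerView: UIView, snowflakeImage: UIImage) {
    let replicator = CAReplicatorLayer()
    replicator.frame = containerView.bounds

    // Spread 20 instances across the width, each starting slightly later.
    replicator.instanceCount = 20
    replicator.instanceTransform = CATransform3DMakeTranslation(
        containerView.bounds.width / 20, 0, 0)
    replicator.instanceDelay = 0.15

    // The one "real" snowflake; every replica copies it and its animations.
    let flake = CALayer()
    flake.contents = snowflakeImage.cgImage
    flake.frame = CGRect(x: 0, y: -30, width: 24, height: 24)
    replicator.addSublayer(flake)

    // Animate only the original layer falling; the replicas follow,
    // staggered by instanceDelay and shifted by instanceTransform.
    let fall = CABasicAnimation(keyPath: "position.y")
    fall.fromValue = -30
    fall.toValue = containerView.bounds.height + 30
    fall.duration = 4
    fall.repeatCount = .infinity
    flake.add(fall, forKey: "fall")

    containerView.layer.addSublayer(replicator)
}
```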
Hope this helps!