Applying the texture the right way - iOS

As you can see, the texture of the photoFrame is a square image, but when I set it as the diffuse contents the result looks terrible. How can I display the square image in the rectangular frame without stretching the image?

A lot of what you see depends on what geometry the texture is mapped onto. Assuming those picture frames are SCNPlane or SCNBox geometries, the face of the frame has texture coordinates ranging from (0,0) in the upper left to (1,1) in the lower right, regardless of the geometry's dimensions or aspect ratio.
SceneKit texture-maps images such that the top left of the image is at texture coordinate (0,0) and the lower right is at (1,1), regardless of the pixel dimensions of the image. So, unless you have a geometry whose aspect ratio matches that of the texture image, you're going to see cases like this where the image gets stretched.
There are a couple of things you can do to "fix" your texture:
Know (or calculate) the aspect ratios of your image and the geometry (face) you want to put it on, then use the material's contentsTransform to correct the image.
For example, if you have an SCNPlane whose width is 2 and height is 1, and you assign a square image to it, the image will get stretched horizontally. If you set the contentsTransform to a matrix created with SCNMatrix4MakeScale(2,1,1), it'll double the texture coordinates in the horizontal direction, effectively shrinking the image to half its width and "fixing" the aspect ratio for your 2:1 plane. Note that you might also need a translation, depending on where you want your half-width image to appear on the face of the geometry.
If you're doing this in the scene editor in Xcode, contentsTransform is the "offset", "scale", and "rotation" controls in the material editor, down below where you assigned an image in your screenshot.
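In code, that first approach might look like the following sketch (the plane dimensions and the "photo" asset name are placeholders):
import SceneKit
import UIKit

// Hypothetical setup: a 2-wide by 1-high plane showing a square image
let plane = SCNPlane(width: 2, height: 1)
let material = plane.firstMaterial!
material.diffuse.contents = UIImage(named: "photo")   // placeholder asset name

// Doubling the horizontal texture coordinates shrinks the displayed image to
// half the face's width, restoring the square aspect ratio on the 2:1 face
material.diffuse.contentsTransform = SCNMatrix4MakeScale(2, 1, 1)
// Add a translation as well if you want to reposition the half-width image
// on the face (see the note above)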
Know (or calculate) the aspect ratio of your geometry, and at least some information about the size of your image, and create a modified texture image to fit.
For example, if you have a 2:1 plane as above, and you want to put a 320x480 image on it, create a new texture image with dimensions of 960x480, matching the aspect ratio of the plane. You can use this image to create whatever style of background you want, with your 320x480 image composited on top of that background at whatever position you want.
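A sketch of that second approach using UIKit; the asset name, background color, and centering are arbitrary choices:
import UIKit

// Pad a 320x480 image out to 960x480 so its aspect ratio matches a 2:1 plane
let source = UIImage(named: "photo")!                   // placeholder asset name
let targetSize = CGSize(width: 960, height: 480)
let padded = UIGraphicsImageRenderer(size: targetSize).image { context in
    // Arbitrary background fill for the area the photo does not cover
    UIColor.darkGray.setFill()
    context.fill(CGRect(origin: .zero, size: targetSize))
    // Composite the original image, centered horizontally
    let x = (targetSize.width - source.size.width) / 2
    source.draw(in: CGRect(origin: CGPoint(x: x, y: 0), size: source.size))
}
// Assign `padded` to the material's diffuse contents as usual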

I changed the scale, offset, and WrapT properties in the material editor, and the result looks good. But when I run the app I can't get the same result, so I tried setting the contentsTransform property in code. The scale and offset both affect the contentsTransform, though. So if the offset is (0, -4.03) and the scale is (1, 1.714), what is the contentsTransform?
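One way to try reproducing those editor values in code is sketched below; the composition order of scale and offset is an assumption, so swap the operands of SCNMatrix4Mult if the on-device result doesn't match the editor preview:
import SceneKit

// Stand-in for the photo frame's material
let material = SCNMaterial()

// Values taken from the material editor
let scale = SCNMatrix4MakeScale(1, 1.714, 1)
let offset = SCNMatrix4MakeTranslation(0, -4.03, 0)

// Assumed order: scale applied first, then offset
material.diffuse.contentsTransform = SCNMatrix4Mult(scale, offset)
// Match whatever WrapT mode you chose in the editor (repeat is a guess here)
material.diffuse.wrapT = .repeat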

Related

Mapping textures to 2 triangles in Roblox

I am currently trying to map textures onto 2 different triangles using image labels (because I'm using right-angle wedges, I need 2 of them to make scalene triangles). The problem is that I can only set position, size, and rotation data, so I need to figure out how to use this information to correctly map the texture onto the triangle.
The position is based on the top-left corner and the size of the triangle (<1,1> places the corner at the bottom right and <0,0> at the top left), the size is also relative to the triangle's size (<1,1> is the same size as the triangle and <0,0> is infinitely tiny), and the rotation is about the center.
I have the UV coordinates (given 0-1) and face vertices, all from an obj file. The triangles in 3D are made up of 2 wedges which are split at a right angle from the longest surface and from the opposite angle.
I don't quite understand this, but it may help to change the canvas properties on the SurfaceGui.

SceneKit / ARKit: map screen shape into 3D object

I have a UIView in the middle of the screen, 100x100 in this case, and it should function like a "target" guideline for the user.
When the user presses the Add button, an SCNBox should be added to the world at the exact same spot, with width / height / scale / distance / rotation corresponding to the UIView's size and position on the screen.
This image may help understanding what I mean.
The UIView's size may vary, but it will always be rectangular and centered on the screen. The corresponding 3D model also may vary. In this case, a square UIView will map to a box, but later the same square UIView may be mapped into a cylinder with corresponding diameter (width) and height.
Any ideas how I can achieve something like this?
I've tried scrapping the UIView and placing the box as the placeholder, as a child of sceneView.pointOfView, and later converting its position / parent to the rootNode, with no luck.
Thank you!
Get the center position of the UIView in its parent view, and use the scene renderer's unprojectPoint method to convert it to 3D coordinates in the scene, giving you the position where the 3D object should be placed. You will have to implement some means of determining the z value.
Obviously the distance of the object will determine its size on screen, but if you also want to scale the object, you could use the inverse of your camera's zoom level as the scale. By zoom level I mean your own means of zooming (e.g. moving the camera closer than a default distance would create smaller-scale models). If the UIView changes in size, you could, in addition to or instead of the center point, unproject all its corner points individually into 3D space, but it may be easier to convert the scale of the UIView to the scale of the 3D node. For example, if you know the maximum size the UIView will be, you can express a smaller view as a percentage of its maximum size and use the same percentage to scale the object.
You also mentioned the rotation of the object should correspond to the UIView. I assume that means you want the object to face the camera. For this you can apply a rotation to the object based on the .transform property of the pointOfView node.
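A minimal sketch of the unprojection step, assuming `sceneView` is your ARSCNView, `targetView` is the centered guide view, and the 0.5 m depth stands in for whatever z-determination you implement:
import ARKit
import SceneKit
import UIKit

func worldPosition(for targetView: UIView, in sceneView: ARSCNView) -> SCNVector3? {
    guard let cameraNode = sceneView.pointOfView else { return nil }

    // Center of the guide view, expressed in the scene view's coordinate space
    let center = sceneView.convert(targetView.center, from: targetView.superview)

    // A point 0.5 m in front of the camera, used only to obtain the normalized
    // depth (z) value that unprojectPoint expects
    let distance: Float = 0.5
    let inFront = cameraNode.simdTransform * simd_float4(0, 0, -distance, 1)
    let depth = sceneView.projectPoint(SCNVector3Make(inFront.x, inFront.y, inFront.z)).z

    // Unproject the on-screen center at that depth into world coordinates
    return sceneView.unprojectPoint(SCNVector3Make(Float(center.x), Float(center.y), depth))
}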

What is the meaning of "nonuniformly scaled texture" in SpriteKit?

On SKCropNode class reference, some examples to specify a mask are given.
Here they are:
This means a crop node can use simple masks derived from a piece of artwork, but it can also use more sophisticated masks. For example, here are some ways you might specify a mask:
An untextured sprite that limits content to a rectangular portion of the scene.
A textured sprite is a precise per-pixel mask. But consider also the benefits of a nonuniformly scaled texture. You could use a nonuniformly scaled texture to create a mask for a resizable user-interface element (such as a health bar) and then fill the masked area with dynamic content.
A collection of nodes can dynamically generate a complex mask that changes each time the frame is rendered.
The second example introduces a nonuniformly scaled texture: what does that mean?
This does not help me to understand this second example!
A non-uniformly scaled texture is a texture that is applied to a sprite with xScale != yScale.
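A small sketch of the idea, assuming SpriteKit and a placeholder mask image; the mask is stretched with different x and y scales to cover a bar-shaped area:
import SpriteKit

// A mask sprite stretched non-uniformly (xScale != yScale), e.g. to match a
// resizable health bar
let mask = SKSpriteNode(imageNamed: "maskTexture")   // placeholder asset name
mask.xScale = 4.0    // stretch horizontally to the bar's width
mask.yScale = 0.5    // squash vertically to the bar's height

let crop = SKCropNode()
crop.maskNode = mask

// Dynamic content that is only revealed inside the scaled mask
let fill = SKSpriteNode(color: .red, size: CGSize(width: 200, height: 20))
crop.addChild(fill)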

Pixelated circles when scaling with SKSpriteNode

The perimeter around a circle gets pixelated when scaling down the image.
The embedded circle image has a radius of 100 pixels. (The circle is white, so click around the blank space and you'll get the image.) Scaling down using SpriteKit causes the border to get very blurry and pixelated. How do I scale up/down and preserve sharp borders in SpriteKit? The goal is to use one base image for a circle and create circle images of different sizes from that base image.
// Create the dot from the 100 px-radius base image
let dot = SKSpriteNode(imageNamed: "dot50")
// Position the dot
dot.position = scenePoint
// Size the dot by scaling it relative to the base radius
let scale = radius / MasterDotRadius
print("Dot size and scale: \(radius) and \(scale)")
dot.setScale(scale)
dot.texture!.filteringMode = .nearest
It seems you should use SKTextureFilteringLinear instead of SKTextureFilteringNearest:
SKTextureFilteringNearest:
Each pixel is drawn using the nearest point in the texture. This mode
is faster, but the results are often pixelated.
SKTextureFilteringLinear:
Each pixel is drawn by using a linear filter of multiple texels in the
texture. This mode produces higher quality results but may be slower.
You can use SKShapeNode, which will behave better during a scale animation, but the end result (once the dot is scaled to some value) will be almost as pixelated as with SKSpriteNode and an image.
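In code, the suggested change to the question's snippet is just the filtering mode (reusing the variables from above):
// Linear filtering blends multiple texels, which keeps the scaled-down
// circle's edge smooth instead of pixelated
dot.texture!.filteringMode = .linear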

Calculating position of object so it matches screen pixels

I would like to move a 3D plane in 3D space and have the movement match the screen's pixels, so I can snap the plane to the edges of the screen.
I have played around with the focal length, camera position, and camera scale, and I have managed to get a plane to match the screen pixels in terms of size; however, when moving the plane, things are no longer correct.
So basically my current status is that I feed the plane size with values as if I were working with standard 2D graphics. If I set the plane size to 128x128, it is more or less displayed as a 2D square of exactly that size.
I am not using (and will not use) an orthographic view; I am using a perspective view because my application needs some perspective to it.
How can this be calculated?
Does anyone have any links to resources that I can read?
You need to grab the transformation matrices you use in the vertex shader and apply them to the point (or points) that represent the plane.
That will result in a set of points in the -1..1 range (after dividing by w), which you will then need to map to the viewport.
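A minimal sketch of that mapping, assuming you already have the combined model-view-projection matrix used by the vertex shader:
import simd
import CoreGraphics

func screenPoint(for worldPoint: simd_float4,
                 modelViewProjection mvp: simd_float4x4,
                 viewport: CGSize) -> CGPoint {
    // Transform into clip space, then divide by w to get normalized device
    // coordinates in the -1...1 range
    let clip = mvp * worldPoint
    let ndc = simd_float3(clip.x, clip.y, clip.z) / clip.w

    // Map NDC to viewport pixels (y is flipped because screen y grows downward)
    let x = CGFloat((ndc.x + 1) / 2) * viewport.width
    let y = CGFloat(1 - (ndc.y + 1) / 2) * viewport.height
    return CGPoint(x: x, y: y)
}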
