How does SceneKit deal with various screen sizes? Do geometries become bigger on larger screens? Does camera angle become bigger and wider? Do physics simulations look exactly the same on various phones? Thanks for any help.
Do geometries become bigger on larger screens?
Nope, geometries stay the same size, in whatever arbitrary unit you use (meters by default). They will, however, appear bigger or smaller on screen, but the actual size is unchanged (see the next answer).
Does camera angle become bigger and wider?
Slightly. The FOV adapts along one axis to fit the aspect ratio of the screen. You can try that out by running your app on the resizable simulator and seeing how it reacts.
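If you want to control which axis stays pinned, SCNCamera exposes this directly on iOS 11 and later. A minimal sketch (the `scene` constant is assumed to be your own SCNScene):

```swift
import SceneKit

// Sketch: configuring which axis the field of view is measured along.
// `scene` is assumed to be an existing SCNScene.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// fieldOfView is in degrees, measured along a single axis; SceneKit
// derives the other axis from the view's aspect ratio, which is why
// the effective FOV widens or narrows across devices.
cameraNode.camera?.fieldOfView = 60

// projectionDirection chooses the pinned axis (.vertical is the default),
// so here it's the horizontal FOV that adapts to the screen.
cameraNode.camera?.projectionDirection = .vertical

scene.rootNode.addChildNode(cameraNode)
```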
Do physics simulations look exactly the same on various phones?
Yes and no. Overall the simulation will feel the same (weight, speed, behaviour), but in the details, the nondeterminism of the physics engine and differences in hardware will make each run slightly different, on every device and every launch.
Related
I'm using SceneKit. I have created and assigned my own camera to the scene and I have adjusted its xFov and yFov. When I set a value higher than 50, there starts to be some distortion. Everything near the edges of the screen is stretched – almost like the camera suddenly becomes a "Fish Eye."
I need the xFov and yFov to be above 50 (I actually need it to be 100), but I can't have that distortion. What do I do?
What you're asking for isn't theoretically impossible per se, but it is at least theoretically interesting.
What happens to a physical camera when you increase the field of view? The wider it gets, the more "fisheye" it looks. The projection matrix and perspective divide of a 3D graphics pipeline like SceneKit's work in a similar way. It looks a little different because it's a rectilinear transformation rather than the effect of a spherical lens, but it's the same general idea: it maps a volume (called a frustum) of 3D space "seen" by the camera onto the viewing plane. This is a general aspect of 3D graphics, not something specific to SceneKit, so you can find plenty of good tutorials that cover the underlying math well.
That frustum projection fixes a certain relationship between the amount of viewing angle something takes up and its width on the viewing plane. You can't really change that relationship and still have a linear (well, rational, but mostly linear) transformation that 3D hardware can apply with a single matrix multiplication (and perspective divide).
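To make the edge stretching concrete (a quick derivation, not anything SceneKit-specific): with a rectilinear projection at focal distance 1, a point at angle $\theta$ off the view axis lands at distance $\tan\theta$ from the image center, so the image-plane footprint of a degree of viewing angle grows toward the edges:

$$x = \tan\theta, \qquad \frac{dx}{d\theta} = \sec^2\theta$$

At the edge of a 100° FOV, $\theta = 50^\circ$ and $\sec^2 50^\circ \approx 2.4$, so a degree of scene there covers roughly 2.4 times as much of the image as a degree at the center. That is exactly the stretching in the question.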
You could, in theory, define a different relationship — say, one where a large angular size corresponds to a much larger part of the viewing plane near the center of the view, but to a much smaller part farther away from the center. But you can't do that in the camera transform... You'd have to do such calculations pixel by pixel in some kind of post-processing shader. (In fact, this is generally how rendering for the lenses of a VR headset works.)
I have a game that is extremely procedural, and because of this one of the textures in the game can end up covering anywhere from (113 x 10) to (682 x 768) on a retina iPad. It isn't even always at the same aspect ratio. Furthermore, the unscaled texture has to be at an aspect ratio of about 2:11, which makes things awkward.
I cannot figure out how Xcode handles PDF vector files.
For example, the first time I turned my SVG into a PDF and followed all the instructions for a single-vector asset slot, it ended up pixelated/anti-aliased on some devices. And I know that PDF vectors can be rasterized at any resolution without losing quality! So I am pretty sure Xcode compiles vectors into bitmaps for the SKTexture, and as far as I can tell the number of pixels it rasterizes from the vector doesn't even vary by device.
For example, after that I scaled the SVG up to double its dimensions in points and exported it. Now on some devices the texture looked perfect, yet on others it looked like a rasterized mess! And the difference didn't seem to correlate with one device's resolution being bigger than the other's.
So then I decided to scale the vector one more time, so at this point it had four times the points it started with. Things worked wonderfully, until I tested on the iPhone 6: instead of being beautiful, the texture got blacked out. I am fairly certain this is because the texture was too big for SpriteKit. There is no happy medium I can find.
My goal is to get this vector file looking great on every device regardless of how it is scaled.
I have looked around to see what SpriteKit uses for its point-to-pixel conversions and I haven't found any numbers that check out. How can I make my texture work for this project? How does SpriteKit decide how many pixels to rasterize a vector at when I make an SKTexture?
Edit:
Here are some hypothetical questions.
H1. How many points wide and tall would a PDF vector need to be to scale up to these dimensions on any device: width = width of the device, height = height of the device / 2? So basically, if I had a vector I knew was going to occupy that space, what should its dimensions (in points) be?
H2. Is the number of pixels rasterized for each point directly proportional to the number of pixels on the display, or is there more to it than just the pixels on the screen?
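One workaround that sidesteps the build-time rasterization entirely is to render the PDF yourself, at runtime, at the exact pixel size you need, and build the SKTexture from the resulting bitmap. A minimal sketch, assuming a vector.pdf resource in the main bundle:

```swift
import SpriteKit
import UIKit

// Sketch: rasterize a bundled PDF at an exact pixel size at runtime
// instead of relying on Xcode's build-time rasterization.
// Assumes a "vector.pdf" resource; error handling kept minimal.
func texture(fromPDFNamed name: String, size: CGSize) -> SKTexture? {
    guard let url = Bundle.main.url(forResource: name, withExtension: "pdf"),
          let document = CGPDFDocument(url as CFURL),
          let page = document.page(at: 1) else { return nil }

    let pageRect = page.getBoxRect(.mediaBox)
    let format = UIGraphicsImageRendererFormat()
    // Points-to-pixels is multiplication by the screen scale (2x, 3x, ...),
    // which also answers H2 for ordinary UIKit rendering.
    format.scale = UIScreen.main.scale
    let renderer = UIGraphicsImageRenderer(size: size, format: format)

    let image = renderer.image { ctx in
        // PDF space is flipped relative to UIKit, so flip and stretch
        // the page to fill the requested point size.
        ctx.cgContext.translateBy(x: 0, y: size.height)
        ctx.cgContext.scaleBy(x: size.width / pageRect.width,
                              y: -size.height / pageRect.height)
        ctx.cgContext.drawPDFPage(page)
    }
    return SKTexture(image: image)
}

// Usage, matching H1: a texture as wide as the device and half as tall.
// let bounds = UIScreen.main.bounds
// let tex = texture(fromPDFNamed: "vector",
//                   size: CGSize(width: bounds.width, height: bounds.height / 2))
```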
Here's an example of the artifact I have an issue with (a picture is hopefully worth a thousand words; I've scaled both images to make the detail more apparent):
64x64 native iOS crop (using setScale(2.0)), scaled to 128x128 with nearest neighbour in an external application:
64x64 crop scaled to 128x128 with nearest neighbour in an external application, with the rotation applied before the scale transform:
From the images, the first is the undesirable result and the second is what I'd like. In the first, the "scaled pixels" themselves are rotated. In my case (even though each pixel is actually a block of pixels), I want the rotation to happen before the scaling (if there were some way to affect the order of the transformations).
I'm basically trying to get a chunkier/pixellated look (common enough), but I want all transformations to sit on this new pixel grid (rotations are especially ugly). I've seen some applications do this; are they just writing their own engines, or am I missing something in SpriteKit?
I think I got all the details required, if I missed any, ask.
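You shouldn't need a custom engine for this. One approach that matches what you're describing (a sketch, under the assumption that baking the node to a texture is acceptable for your scene): render the node at its native low resolution with SKView.texture(from:), so the rotation is captured at low resolution, then display the result scaled up with nearest-neighbour filtering:

```swift
import SpriteKit

// Sketch: bake a rotated low-res node into a texture, then upscale it with
// nearest-neighbour filtering, so the chunky pixels stay axis-aligned on
// screen instead of rotating with the sprite.
// `view` is assumed to be your SKView; `lowResNode` your 64x64-ish node.
func chunkySprite(baking lowResNode: SKNode, in view: SKView,
                  upscale: CGFloat) -> SKSpriteNode? {
    // Any zRotation already applied to lowResNode is captured here,
    // i.e. the rotation happens *before* the upscale.
    guard let baked = view.texture(from: lowResNode) else { return nil }
    baked.filteringMode = .nearest   // hard pixel edges, no smoothing

    let sprite = SKSpriteNode(texture: baked)
    sprite.setScale(upscale)         // the scale transform is applied last
    return sprite
}
```

The catch is that you'd need to re-bake whenever the rotation changes, so for continuously rotating content the snapshot cost may matter.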
While porting an iPhone app that uses SpriteKit to the iPad, I have been able to scale all the screen elements and font sizes using the height ratio of the iPhone 5 screen to the iPad screen.
Everything looks proportional, except for the physics, since there is more area.
So the question is, how do I scale the physics along with the screen using that height ratio?
Perhaps by changing the density or mass of all the nodes? How would I do it mathematically, using the ratio, so that it is perfect?
You can configure the SKPhysicsWorld object attached to the SKScene. Two of its properties are relevant here:
The gravity property applies an acceleration to volume-based bodies in the simulation.
The speed property determines the rate at which the simulation runs.
You can set those properties to values that seem right to you on an iPad. Changing the physics world changes the physics for every node in the scene.
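To tie it to the height ratio specifically, a minimal sketch (the 9.8 baseline and the exact ratio are assumptions; use whatever factor you scaled the layout by):

```swift
import SpriteKit

// Sketch: scale physics to match a visually scaled-up scene.
// `ratio` is the same height factor used for the screen elements
// (hypothetical value shown here).
let ratio: CGFloat = 1024.0 / 568.0

func scalePhysics(of scene: SKScene, by ratio: CGFloat) {
    // Distances on screen grew by `ratio`, so accelerations must grow by
    // the same factor for motion to look identical over the same time:
    // d = 0.5 * a * t^2, so d' = ratio * d needs a' = ratio * a.
    scene.physicsWorld.gravity = CGVector(dx: 0, dy: -9.8 * ratio)
}

// The same factor applies to any velocities or impulses you set on
// individual bodies (assuming masses are left unchanged), e.g.:
// body.applyImpulse(CGVector(dx: impulse.dx * ratio, dy: impulse.dy * ratio))
```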
Android and iOS have the concept of a "density-independent pixel" so that your layouts look the same on devices with different densities and screen sizes.
Up until now I've written code to manually space elements using pixels (i.e. I want this button to be 10 pixels from the left side of the screen). This is great on a Curve, but when I load it up on a Bold the resolution is much higher, so 10 pixels is a much smaller physical space.
What are the best practices for multiple screen sizes on BlackBerry? Is there any easy way to define a density independent pixel? RIM seems to not offer much in terms of documentation or APIs to make this easy.
Points are density-independent pixels (to a good degree of accuracy).
For BlackBerry, the most relevant class is net.rim.device.api.ui.Ui, which defines a UNITS_pt constant (and a UNITS_px constant) and a convertSize method to convert between points and pixels (needed because operations on Graphics take pixels rather than points).
A useful approach for BlackBerry apps is to size everything relative to your font sizes, which you define in points; there's a version of net.rim.device.api.ui.Font.derive that takes a units parameter and makes it easy to get fonts at a particular point size.
Of course, you can't take anything for granted: defining things in points will make life easier, but on BlackBerry you deal with lots of different pixel densities and aspect ratios, so test thoroughly, at least on the simulators.