Understanding how Xcode works with vector files - iOS

I have a game that is extremely procedural, and because of this one of the textures in the game can end up covering anything from (113 x 10) to (682 x 768) on a Retina iPad. It isn't always even at the same aspect ratio. On top of that, the unscaled texture has to be at an aspect ratio of about 2:11, which makes things awkward.
I cannot figure out how Xcode handles PDF vector files.
For example, the first time I turned my SVG into a PDF and followed all the instructions for a single-vector asset slot, it ended up pixelated/anti-aliased on some devices. I know that PDF vectors can be rasterized to any resolution without losing quality, so I am fairly sure Xcode compiles the vectors into bitmaps for the SKTexture. And as far as I can tell, the number of pixels it rasterizes from the vector does not even vary per device.
After that I scaled up the SVG to double the points of its dimensions and exported it. Now on some devices the texture looked perfect, yet on others it looked like a rasterized mess! And it didn't seem to depend on one device's resolution being bigger than the other's.
So then I scaled the vector one more time, so at this point it had 4x more points than it started with. Things worked wonderfully, until I tested on the iPhone 6: instead of being beautiful, the texture got blacked out. I am fairly certain this is because the texture was too big for SpriteKit. There is no happy medium I can find.
My goal is to get this vector file looking great on every device regardless of how it is scaled.
I have looked around to see what SpriteKit uses for its point-to-pixel conversions and I haven't found any numbers that check out. How can I make my texture work for this project? How does SpriteKit decide how many pixels to rasterize a vector at when I make an SKTexture?
Edit:
Here are some hypothetical questions.
H1. How many points wide and tall would a PDF vector need to be so that it could scale up to these dimensions on any device: width = width of the device, height = height of the device / 2? In other words, if I had a vector I knew was going to occupy that space, what should its dimensions (in points) be?
H2. Is the number of pixels rasterized per point directly proportional to the number of pixels on the display, or is there more to it than just the pixels on the screen?
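One way to sidestep the asset-catalog guesswork entirely is to rasterize the PDF yourself at runtime at exactly the pixel size the node will occupy, i.e. its size in points multiplied by the screen scale. A minimal sketch, assuming a bundled file hypothetically named shape.pdf and a node sized in points (the helper name is made up):

```swift
import UIKit
import SpriteKit

// Sketch: rasterize page 1 of a bundled PDF ("shape.pdf" is a hypothetical name)
// at an exact pixel size, then wrap it in an SKTexture. Multiplying the node's
// size in points by UIScreen.main.scale gives the pixel count the texture needs
// on the current device.
func makeTexture(fromPDFNamed name: String, targetPointSize: CGSize) -> SKTexture? {
    guard
        let url = Bundle.main.url(forResource: name, withExtension: "pdf"),
        let document = CGPDFDocument(url as CFURL),
        let page = document.page(at: 1)
    else { return nil }

    let scale = UIScreen.main.scale                      // e.g. 2.0 on Retina
    let pixelSize = CGSize(width: targetPointSize.width * scale,
                           height: targetPointSize.height * scale)
    let pdfBox = page.getBoxRect(.mediaBox)

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1                                     // work in raw pixels
    let renderer = UIGraphicsImageRenderer(size: pixelSize, format: format)

    let image = renderer.image { context in
        let cg = context.cgContext
        // PDF coordinates are flipped relative to UIKit, so flip the context
        // while scaling the PDF media box up to the requested pixel size.
        cg.translateBy(x: 0, y: pixelSize.height)
        cg.scaleBy(x: pixelSize.width / pdfBox.width,
                   y: -pixelSize.height / pdfBox.height)
        cg.drawPDFPage(page)
    }
    return SKTexture(image: image)
}

// Usage sketch: a node meant to fill the full width and half the height of the screen.
// let size = CGSize(width: UIScreen.main.bounds.width,
//                   height: UIScreen.main.bounds.height / 2)
// if let texture = makeTexture(fromPDFNamed: "shape", targetPointSize: size) {
//     let sprite = SKSpriteNode(texture: texture)
//     sprite.size = size            // display at the intended point size
// }
```

Because the texture is generated per device at the exact pixel count needed, it never has to be scaled up (blurry) or allocated far larger than any screen requires.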

Related

How does SceneKit deal with various screen sizes?

How does SceneKit deal with various screen sizes? Do geometries become bigger on larger screens? Does camera angle become bigger and wider? Do physics simulations look exactly the same on various phones? Thanks for the help.
Do geometries become bigger on larger screens?
Nope, geometry will be the same size, in whatever arbitrary unit you use (meters by default). It will, however, appear bigger or smaller on screen, but the size itself is the same (see the next answer).
Does camera angle become bigger and wider?
Slightly. The FOV adapts on one axis to fit the aspect ratio of the screen. You can try that out by running your app on the resizable simulator and seeing how it reacts.
Do physics simulations look exactly the same on various phones?
Yes and no. Overall, the simulation will feel the same (weight, speed, behaviour), but in the details the randomness of the physics engine and the differences in hardware will make the simulations slightly different every time.
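A small sketch of where these knobs live (the sizes and FOV value are arbitrary): geometry dimensions are in scene units and never change with the screen, while the camera's field of view and projection direction decide how the scene is fitted to the screen's aspect ratio.

```swift
import SceneKit

// Illustrative only: a 1x1x1 (scene-unit) box stays 1x1x1 on every device;
// what changes is how the camera's field of view maps it onto the screen.
let scene = SCNScene()

let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
scene.rootNode.addChildNode(box)

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.fieldOfView = 60                 // degrees, fixed by you
cameraNode.camera?.projectionDirection = .vertical  // the other axis adapts to the aspect ratio
cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(cameraNode)
```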

Are there any drawing options that control how SKNodes are displayed when scaled?

Here's an example of the artifact I have an issue with (a picture is hopefully worth a thousand words -- I've scaled both images to make the detail more apparent):
64x64 native iOS crop (using setScale(2.0)), scaled to 128x128 using nearest neighbour in an external application:
64x64 crop scaled to 128x128 using nearest neighbour in an external application; here the rotation is applied before the scale transform:
Of the two images, the first is the undesirable one and the second is what I'd like. The first shows that the "scaled pixels" are rotated. In my case (even though each "pixel" is actually a block of pixels), I want the rotation to happen before the scaling (if there were some way to affect the order of the transformations).
I'm basically trying to get a chunkier/pixellated look (common enough), but I want all transforms to sit on this new pixel grid (rotations are especially ugly). I've seen some applications do this; are they just writing their own game engine, or am I missing something in SpriteKit?
I think I've included all the details required; if I missed any, ask.
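As far as I know, SpriteKit does not expose a drawing option that reorders the node transforms, but one workaround is to bake the rotation into a snapshot texture and scale the snapshot instead. A sketch under the assumption that the low-res node is already rotated and unscaled when it is captured (the helper name is made up):

```swift
import SpriteKit

// Sketch of one way to get the "rotate first, then scale onto a chunky pixel
// grid" look: rotate the low-res node, snapshot it, then scale the snapshot
// with nearest-neighbour filtering so the big pixels stay axis-aligned.
func chunkySprite(from node: SKNode, in view: SKView, scale: CGFloat) -> SKSpriteNode? {
    // 1. Apply the rotation (and any other transforms) to the low-res node first.
    // 2. Snapshot it at its current, small size (texture(from:) renders the node
    //    as it currently appears, so keep it unscaled at this point).
    guard let snapshot = view.texture(from: node) else { return nil }

    // 3. Scale the snapshot up; nearest-neighbour keeps the blocks crisp, and
    //    because the rotation is already baked in, the blocks are not rotated.
    snapshot.filteringMode = .nearest
    let sprite = SKSpriteNode(texture: snapshot)
    sprite.setScale(scale)
    return sprite
}

// Usage sketch: rotate a 64x64 node, then display it at 2x.
// lowResNode.zRotation = .pi / 7
// if let big = chunkySprite(from: lowResNode, in: skView, scale: 2.0) {
//     scene.addChild(big)
// }
```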

Paint a very high resolution textured object (sphere) in OpenGL ES

I'm drawing planets in OpenGL ES, and running into some interesting performance issues. The general question is: how best to render "hugely detailed" textures on a sphere?
(the sphere is guaranteed; I'm interested in sphere-specific optimizations)
Base case:
Window is approx. 2048 x 1536 (e.g. iPad3)
Texture map for globe is 24,000 x 12,000 pixels (an area half the size of the USA fills the full width of the screen)
Globe is displayed at everything from zoomed in (USA fills screen) to zoomed out (whole globe visible)
I need a MINIMUM of 3 texture layers (1 for the planet surface, 1 for day/night differences, 1 for the user interface (highlighting different regions))
Some of the layers are animated (i.e. they have to load and drop their texture at runtime, rapidly)
Limitations:
top-end tablets are limited to 4096x4096 textures
top-end tablets are limited to 8 simultaneous texture units
Problems:
In total, it's naively 500 million pixels of texture data
Splitting into smaller textures doesn't work well because devices only have 8 units; with only a single texture layer, I could split into 8 texture units and all textures would be less than 4096x4096 - but that only allows a single layer
Rendering the layers as separate geometry works poorly because they need to be blended using fragment-shaders
...at the moment, the only idea I have that sounds viable is:
split the sphere into NxM "pieces of sphere" and render each one as separate geometry
use mipmaps to render low-res textures when zoomed out
...rely on simple culling to cut out most of them when zoomed in, and mipmapping to use small(er) textures when they can't be culled
...but it seems there ought to be an easier way / better options?
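As a rough sanity check on the NxM-pieces idea above (the numbers come straight from the question; the Swift is just back-of-the-envelope arithmetic):

```swift
// Back-of-the-envelope arithmetic for the NxM split (numbers from the question).
let textureWidth = 24_000, textureHeight = 12_000
let maxTileSize = 4_096                                             // per-texture limit

let tilesAcross = (textureWidth + maxTileSize - 1) / maxTileSize    // 6
let tilesDown = (textureHeight + maxTileSize - 1) / maxTileSize     // 3
let tilesPerLayer = tilesAcross * tilesDown                         // 18
let tilesForAllLayers = tilesPerLayer * 3                           // 54

// 54 tiles is far more than the 8 simultaneous texture units, which is why
// each "piece of sphere" can only bind its own tiles, and why culling plus
// mipmapping end up doing the real work.
```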
It seems there is no way to fit such huge textures into the memory of a mobile GPU, not even the iPad 3's.
So you have to stream texture data. The thing you need is called a clipmap (popularized by id Software, which extended it into its MegaTexture technology).
Please read about it here; there are links to docs describing the technique: http://en.wikipedia.org/wiki/Clipmap
This is not easily done in ES, as there is no virtual texture extension (yet). You basically need to implement virtual texturing (some ES devices implement ARB_texture_array) and stream in the lowest resolution possible (view-dependent) for your sphere. That way, it is possible to do it all in a fragment shader; no geometry subdivision is required. See this presentation (and the paper) for details on how this can be implemented.
If you do the math, it is simply impossible to stream 1 GB (24,000 x 12,000 pixels x 4 B) in real time. And it would be wasteful, too, as the user will never get to see it all at the same time.
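For reference, the back-of-the-envelope math behind that ~1 GB figure, assuming uncompressed RGBA8888 and no mipmaps (illustrative only):

```swift
// One uncompressed RGBA8888 layer, no mipmaps (numbers from the question).
let pixelsPerLayer = 24_000 * 12_000                 // 288,000,000 pixels
let bytesPerPixel = 4
let bytesPerLayer = pixelsPerLayer * bytesPerPixel   // 1,152,000,000 B, roughly 1.07 GiB
let bytesForThreeLayers = bytesPerLayer * 3          // roughly 3.2 GiB

// Mipmaps add about a third on top. Compressed formats (PVRTC/ETC) and
// streaming only the visible clip levels are what make this tractable.
```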

Developing for multiple screen sizes on BlackBerry

Android and iOS have a concept of a "density independent pixel" so your layouts look the same on devices with different densities and screen sizes.
Up until now I've written code to manually space elements using pixels (i.e. I want this button to be 10 pixels from the left side of the screen). This is great on a Curve, but when I load it up on a Bold the resolution is much higher, so 10 pixels is a much smaller physical space.
What are the best practices for multiple screen sizes on BlackBerry? Is there any easy way to define a density independent pixel? RIM seems to not offer much in terms of documentation or APIs to make this easy.
Points are density independent pixels (to a good degree of accuracy).
For BlackBerry, the most relevant class is net.rim.device.api.ui.Ui which defines a UNITS_pt constant (and a UNITS_px constant), a convertSize method to convert between points and pixels (since operations on Graphics take pixels instead of points).
A useful methodology for BlackBerry apps is to size everything in relation to your font sizes, which you define in points - there's a version of net.rim.device.api.ui.Font.derive that takes a units parameter and makes it easy to get fonts with a particular point size.
Of course, you can't take anything for granted - defining things in points will make things easier, but with BlackBerry you deal with lots of different pixel densities and aspect ratios so test thoroughly, at least on the simulators.
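That point/pixel split is the same idea iOS uses; for comparison with the rest of this page, here is the analogous conversion in Swift (this is not BlackBerry code, and the values are illustrative):

```swift
import UIKit

// iOS analogue of the point-to-pixel conversion above (not BlackBerry code):
// lay out in points, and only multiply by the screen's scale factor when you
// genuinely need device pixels.
let margin: CGFloat = 10                  // 10 points, density independent
let scale = UIScreen.main.scale           // 2.0 or 3.0 on Retina displays
let marginInPixels = margin * scale       // 20 or 30 device pixels
```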

How would you find the height of objects given an image?

This isn't exactly a programming question. I just want to know what your approach would be to a common problem in digital image processing.
Let's say you have an image of a few trees in, say, JPG format. How would you go about finding the heights of each of these trees? The photo is the only input you have.
I want to know the approaches you would take, not code. So it doesn't matter if your answers are vague or not very DIP-ish.
Small correction:
The height need not be the actual height of the tree. The height can be taken to any scale, but it should be consistent across all objects in the picture.
Yes, it is possible. What you are describing has an entire industry around it, called photogrammetry.
There is a fair amount of computer vision research in this area. Assuming you don't know the camera constraints, you'll have to make assumptions about the scene and camera to determine the heights up to a scale factor. Note that without camera constraints or a reference height in the image it is impossible to tell the difference between a tall tree photographed from a distance or a short tree photographed up close. A great start is the Single View Metrology work by Criminisi.
It is simple to find the size of an object from images using Photogrammetry.
Photogrammetry is the science of making measurements from photographs.
For this we need to know two things:
the distance from the camera to the object, and
the focal length (in mm, and in pixels per mm) or the physical size of the image sensor.
Following are the steps:
Calibrate the Camera
Use OpenCV to calibrate the camera. You can use the OpenCV calibrate.py tool and the chessboard pattern PNG provided in the source code to generate a calibration matrix. Camera calibration is done to find the camera parameters. I took about a dozen photos of the chessboard from as many angles as I could with my webcam (to calibrate it). For more details, check the OpenCV camera calibration documentation.
We will get f_x, f_y, c_x, and c_y from the calibration matrix.
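For reference, the calibration matrix mentioned above is the standard 3x3 intrinsic matrix; a sketch of where those values sit (the struct is purely illustrative):

```swift
// The camera (intrinsic) matrix produced by calibration has this layout:
//
//     | f_x   0    c_x |
// K = |  0   f_y   c_y |
//     |  0    0     1  |
//
// f_x and f_y are the focal length expressed in pixels; c_x and c_y are the principal point.
struct CameraIntrinsics {
    let fx: Double, fy: Double   // focal length in pixel units
    let cx: Double, cy: Double   // principal point (usually near the image centre)
}
```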
Checking the details of the photos you took, you will find their native resolution (height x width), and in their EXIF headers you can find the focal length value (f). These items may vary depending on your camera.
Pixels per millimeter
We need to know the pixels per millimeter (px/mm) on the image sensor.
f_x = f * m_x
f_y = f * m_y
Since we already know two of the variables in each formula, we can solve for m_x and m_y. I just averaged f_x and f_y to get f_xy, so:
m = f_xy / focal_length_of_camera
Load the image
Load the image in which you need to find the actual size of the object. You should know the distance between the object and the camera. Find the dimensions of this image (height1 x width1).
Find the Object size in pixels
Determine the size of the object in pixels. I simply use the distance formula to find the length of a selected line; you can adopt any other method.
Convert px/mm at the lower resolution
pxpermm_in_lower_resolution = (width1 * m) / width
(where width is the camera's native-resolution width found in the EXIF step above)
Size of object in the image sensor
size_of_object_in_image_sensor = object_size_in_pixels/(pxpermm_in_lower_resolution)
Actual size of object
With the above data, the actual size of the object can then be found as:
real_size = (dist*size_of_object_in_image_sensor)/focal_length
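Putting the steps above together as a sketch (the variable names follow the answer; every input value below is a placeholder, not a real measurement):

```swift
// Sketch of the whole chain from the answer above. All inputs are placeholders;
// substitute your own calibration values and measurements.
let fx = 3_000.0, fy = 3_010.0             // from the calibration matrix, in pixels
let focalLengthMM = 4.0                    // from the EXIF data, in mm
let calibrationWidth = 4_000.0             // native width of the camera's photos, px

// Pixels per millimetre on the sensor (averaging f_x and f_y as in the answer).
let fxy = (fx + fy) / 2
let m = fxy / focalLengthMM                // px per mm at native resolution

// The photo we are measuring, possibly at a lower resolution.
let width1 = 2_000.0                       // width of this photo, px
let objectSizePixels = 350.0               // measured length of the object in this photo, px
let distanceMM = 5_000.0                   // camera-to-object distance, mm

// Scale px/mm to this photo's resolution, project the object onto the sensor,
// then use similar triangles to recover the real size.
let pxPerMMLowerRes = (width1 * m) / calibrationWidth
let sizeOnSensorMM = objectSizePixels / pxPerMMLowerRes
let realSizeMM = (distanceMM * sizeOnSensorMM) / focalLengthMM
print("Estimated real size: \(realSizeMM) mm")
```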
Assuming they're all the same distance away and all to the same scale, you'd want to find a single unit of measurement you can guarantee. For example, if there's a person in the photo (again, at the same scale) and you know they're exactly 6 feet tall, you use that as your measure. You then take that and count how many of them, stacked, make up the tree. For example, if you need 3.5 of this person, then:
3.5 * 6 = 21
gives you a 21-foot-tall tree.
Without a single point of reference for everything, or if they're all on different scales, you would need a lot more information than you could easily get without having been there.
I would rely on an object of known dimensions to be present in the picture. For instance, a man.
Or perhaps we could use the EXIF data to reverse-engineer the size of the object based on the camera's sensor dimensions, the lens, and the focal length used. This again depends on the angle; we should get the most accurate results when the camera has been held perpendicular to the subject.
If your image is 3x3 pixels, its pixel count is 3 x 3 = 9. At 1 bit per pixel, that is 9/8 = (___) bytes.
If you want the size of the image in KB or MB, keep dividing by 1024, for example (9/8)/1024 = (----) KB.
So you will get the result in KB or MB.
