Effect of fractional CGPoint with Core Graphics? - ios

I'm doing some drawing relative to a scaled image so I end up with fractional CGPoints. I am scaling the results from the CoreImage face detection routine.
Do I want to round these myself or leave it to iOS to do it when I use these points in CGPathAddLineToPoint calls? If it is better to round, should I round up or down?
I've read about pixel boundaries, etc., but I'm not sure how to apply that here. I am drawing to a CALayer.
CGPoint leftEye = CGPointMake((leftEyePosition.x * xScale),
(leftEyePosition.y * yScale));
// result
features {
faceRect = "{{92, 144.469}, {166.667, 179.688}}";
hasLeftEyePosition = 1;
hasMouthPosition = 1;
hasRightEyePosition = 1;
leftEyePosition = "{142.667, 268.812}";
mouthPosition = "{176, 189.75}";
rightEyePosition = "{207.333, 269.531}";
}

Whether or not you round, and in what direction, depends on the effect you are trying to accomplish.
CoreGraphics itself has absolutely no problem with fractional coordinates. However, drawing anything using fractional coordinates is going to end up antialiasing the drawn objects. This typically causes them to look fuzzy. Rounding your coordinates appropriately is a good idea to avoid this.
Be warned, however. Depending on what you're drawing and how, you may want coordinates that are 0.5 pixels off instead of integral coordinates. For example, if you're drawing a line, the line is centered on the coordinate you give. So a 1-pixel line drawn on integral coordinates will actually end up being a fuzzy line 2 pixels wide (with each pixel accounting for half of the line). The simplest thing to remember is that strokes are centered on the coordinates, but fills are bounded by them. So when filling a rectangle, integral coordinates are best. When stroking a rectangle, inset your coordinates by 0.5 pixels (or, rather, by half of the stroke width you want to use).
Also, don't forget that when drawing an image that's meant to be displayed on a retina screen with scale=2, coordinates that are 0.5 units off are actually still on pixel boundaries. So if you know it's retina, you can avoid rounding to fully integral coordinates when the nearest half-unit coordinate is fine.
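If it helps, here is a minimal Swift sketch of that rounding; the helper names are mine, and it assumes you can read the target layer's contentsScale (2 on retina) so you snap to pixel boundaries rather than point boundaries:
import UIKit
// Illustrative helpers (the names are mine, not an Apple API).
// Snap a point to the pixel grid of a layer with the given contentsScale.
func pixelAligned(_ point: CGPoint, scale: CGFloat) -> CGPoint {
    return CGPoint(x: (point.x * scale).rounded() / scale,
                   y: (point.y * scale).rounded() / scale)
}
// For a 1-pixel stroke, shift the snapped coordinate by half a pixel so the
// stroke is centered on a pixel row/column instead of straddling two of them.
func hairlineAligned(_ point: CGPoint, scale: CGFloat) -> CGPoint {
    let snapped = pixelAligned(point, scale: scale)
    let halfPixel = 0.5 / scale
    return CGPoint(x: snapped.x + halfPixel, y: snapped.y + halfPixel)
}
// e.g. with the scaled face-detection point from the question:
// let leftEye = pixelAligned(CGPoint(x: leftEyePosition.x * xScale,
//                                    y: leftEyePosition.y * yScale),
//                            scale: layer.contentsScale)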

Related

Pixelated circles when scaling with SKSpriteNode

The perimeter around a circle gets pixelated when scaling down the image.
The embedded circle image has a radius of 100 pixels (the circle is white, so it is hard to see against a blank background). Scaling it down with SpriteKit causes the border to become very blurry and pixelated. How can I scale up/down and preserve sharp borders in SpriteKit? The goal is to use one base circle image to create circle images of different sizes.
// Create dot
let dot = SKSpriteNode(imageNamed: "dot50")
// Position dot
dot.position = scenePoint
// Size dot
let scale = radius / MasterDotRadius
println("Dot size and scale: \(radius) and \(scale)")
dot.setScale(scale)
dot.texture!.filteringMode = .Nearest
It seems you should use SKTextureFilteringLinear instead of SKTextureFilteringNearest:
SKTextureFilteringNearest: Each pixel is drawn using the nearest point in the texture. This mode is faster, but the results are often pixelated.
SKTextureFilteringLinear: Each pixel is drawn by using a linear filter of multiple texels in the texture. This mode produces higher quality results but may be slower.
You can use SKShapeNode, which behaves better during a scale animation, but the end result (once the dot is scaled to some value) will look almost as pixelated as with SKSpriteNode and an image.
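For reference, a small Swift sketch of the suggested change (modern SpriteKit spelling; the names mirror the asker's code and the values are illustrative):
import SpriteKit
// Stand-ins for the asker's values.
let MasterDotRadius: CGFloat = 100          // radius of the base "dot50" image
let scenePoint = CGPoint(x: 200, y: 200)
let radius: CGFloat = 30
let dot = SKSpriteNode(imageNamed: "dot50")
dot.position = scenePoint
dot.setScale(radius / MasterDotRadius)
// Linear filtering blends neighbouring texels when the sprite is scaled,
// which keeps the circle's edge smooth instead of blocky.
dot.texture?.filteringMode = .linear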

Coordinate length to pixel length mapping in ImageCanvas

I want to draw circles in an image canvas. I'm able to get pixel values from coordinate values by calling map.coordinateToPixel.
For radius, how can I map a coordinate distance to pixel length?
For instance, if my radius is 60 arc minutes, it goes from 50 degrees to 51 degrees. In a vector layer, the underlying framework manages the translation to pixels depending on the zoom level. However, for an ImageCanvas, I need to specify that myself. Is there a method to do that? I know I might have to dig into the code, but I was wondering if there's an inherent solution somebody already knows of.
An alternate option I've considered is:
Get the coordinate at pixel (0,0)
Get the coordinate at (radiusLongitude, 0)
Find the difference in longitude between #2 and #1 and use that as my radius
Maybe this example can help you: http://acanimal.github.io/thebookofopenlayers3/chapter03_04_imagecanvas.html which draws a set of random pie charts (but without taking pixel ratio into account).
Note that the canvasFunction you use receives five parameters that can help you determine the pixel size: function(extent, resolution, pixelRatio, size, projection)
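The key fact is that resolution is the size of one pixel in map units, so a coordinate distance becomes a pixel distance by dividing by it (and multiplying by pixelRatio on hi-DPI canvases). A sketch of just that arithmetic, with illustrative names:
// resolution: map units per pixel, as passed to the canvasFunction.
// pixelRatio: device pixels per CSS pixel (also passed in).
func pixelLength(coordinateLength: Double, resolution: Double, pixelRatio: Double = 1) -> Double {
    return coordinateLength / resolution * pixelRatio
}
// e.g. a radius of 1 degree (60 arc minutes) in a geographic projection:
// let radiusInPixels = pixelLength(coordinateLength: 1.0, resolution: currentResolution)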

Keeping Squares Along A Circle's Circumference

I'm drawing squares along a circular path for an iOS application. However, at certain points along the circle, the squares start to go out of the circle's circumference. How do I make sure that the squares stay inside?
Here's an illustration I made. The green squares represent the positions I need the squares to actually be in. The red squares are where they actually appear given the following values for each square's upper-left corner:
x = origin.x + radius * cos(DEGREES_TO_RADIANS(angle));
y = origin.y + radius * sin(DEGREES_TO_RADIANS(angle));
Origin refers to the center of the circle. I have a loop that repeats this for every angle from 1 to 360 degrees.
EDIT: I've changed my design to position the centers of the squares along the circular path rather than their upper left corners.
Why not just draw the centers of the squares along a smaller circle inside of the bigger one?
You could do the math to figure out exactly what the radius would have to be to ensure an exact fit, but you could probably trial and error your way there quickly too.
Doing it this way ensures that your objects would end up laid out in an actual circle too, which is not the case if you were merely making sure that one and only one corner of each square touched the larger bounding circle (that would create a slightly octagonal shape instead of a circle)
ryan cumley's answer made me realize how dumb I was all along. I just needed to change each square's anchor point to its center & that solved it. Now every calculated value for x & y would position every square's center exactly on the circular path.
Option 1) You could always find the diameter of the circle and then, using the Pythagorean theorem, create a square that fits perfectly within the circle. You could then loop through the square that was just made in the circle to create smaller squares, but I doubt this is what you are aiming for.
Option 2) Find out what half the length of one of the squares' diagonals should be, and create a ring within the first ring. Then lay down squares at key points (like every 30 degrees or 15 degrees, etc.) along the inner path. Ex: http://i.imgur.com/1XYhoQ0.png
As you can see, the smaller (inner) circle is in the center of each green square, and that ensures that the corners of each square just touches the larger (outer) circle. Obviously my cheaply made picture in paint is not perfect, but mathematically it will work.
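A sketch of that inner-ring idea in Swift, assuming each square's anchor point is its center (the values are illustrative):
import UIKit
let origin = CGPoint(x: 200, y: 200)   // center of the big circle
let outerRadius: CGFloat = 150
let squareSide: CGFloat = 20
// Half of the square's diagonal; pulling the centers in by this much makes the
// corners just touch the outer circle.
let halfDiagonal = squareSide * CGFloat(2).squareRoot() / 2
let innerRadius = outerRadius - halfDiagonal
var centers: [CGPoint] = []
for degrees in stride(from: 0, to: 360, by: 15) {
    let angle = CGFloat(degrees) * .pi / 180
    centers.append(CGPoint(x: origin.x + innerRadius * cos(angle),
                           y: origin.y + innerRadius * sin(angle)))
}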

Texture Sampling Coordinates to Render a Sprite

Let's say we have a texture (in this case 8x8 pixels) we want to use as a sprite sheet. One of the sub-images (sprite) is a subregion of 4x3 inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a 4px x 3px quad so that it effectively becomes the sprite we are looking for. The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
When I first implemented this method, ca. 2010, I noticed that the sprites looked slightly 'distorted'. After a bit of searching, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
...and after fixing my code, I was happy for a while. But somewhere along the way, and I believe it is around the introduction of iOS 5, I started feeling that my sprites weren't looking good. After some testing, I switched back to the 'blue' method (second image) and now they seem to look good, but not always.
Am I going crazy, or something changed with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong? (e.g., the vertex position coordinates are slightly off? Wrong texture setup parameters?) But my code base didn't change, so perhaps I am doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now, my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, which is then scaled to the size of the subregion (in this case, 4x3) and has its center positioned at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are not even.
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here)
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case, I believe the blue is probably what you want to have.
What you ultimately want is to sample each texel at its center. You don't want to be taking samples on the boundary between two texels, because that either combines them with linear sampling, or arbitrarily chooses one or the other with nearest sampling, depending on which way the floating-point calculations round.
Having said that, you might think that you don't want your texcoords at (0,0), (1,1) and the other corners, because those are on texel boundaries. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2 by 2 pixel monitor, with a 2 by 2 pixel texture.
If you draw a quad from (0,0) to (2,2), this will cover 4 pixels. If you texture map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate this and sample from the center of each pixel, with the lower-left texcoord starting at the bottom-left corner of the bottom-left pixel. This ultimately generates texcoord pairs of (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel, which is what you want.
If you offset your texcoords by a half pixel as in the red example, then it will interpolate incorrectly, and you'll end up sampling the texture off center of the texels.
So long story short, you want to make sure that your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?
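To restate that recommendation in code (a Swift sketch, with names mirroring the GLfloat snippet above): sample at the subregion's edges, and keep the quad at its natural pixel size on integer pixel positions.
// The sub-image sits at (xIndex0, yIndex0) and measures subregionWidth x
// subregionHeight texels, inside an imageWidth x imageHeight texture.
struct SubregionTexCoords {
    let sMin, sMax, tMin, tMax: Float
}
func texCoords(xIndex0: Float, yIndex0: Float,
               subregionWidth: Float, subregionHeight: Float,
               imageWidth: Float, imageHeight: Float) -> SubregionTexCoords {
    // The "blue" method: coordinates on the subregion's edges. If the quad is
    // drawn unscaled at integer pixel positions, each fragment center then
    // samples exactly one texel center.
    return SubregionTexCoords(sMin: xIndex0 / imageWidth,
                              sMax: (xIndex0 + subregionWidth) / imageWidth,
                              tMin: yIndex0 / imageHeight,
                              tMax: (yIndex0 + subregionHeight) / imageHeight)
}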

How to distort a Sprite into a trapezoid?

I am trying to transform a Sprite into a trapezoid, I don't really care about the interpolation even though I know without it my image will lose detail. All I really want to do is Transform my rectangular Sprite into a trapezoid like this:
  /        \
 /          \
/____________\
Has anyone done this with CGAffineTransforms or with cocos2d?
The transformation you're proposing is not affine. Affine transformations have to keep parallel lines parallel. So they can typically:
Scale
Rotate
Shear (make lopsided, like square -> parallelogram)
Translate
But they cannot "squeeze". Notice that the left and right sides of your trapezoid, if extended, would intersect at a particular spot. They're not parallel anymore. In other words, a transformation that doesn't preserve parallelism pinches space like that and isn't affine.
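For what it's worth, Core Graphics makes this limitation concrete: CGAffineTransform stores only a, b, c, d, tx, and ty, so there is no slot for the perspective terms a trapezoid would need. A quick sketch:
import CoreGraphics
// An affine transform is the 3x3 matrix [a b 0; c d 0; tx ty 1]; the fixed
// last column is why parallel lines always stay parallel.
let shear = CGAffineTransform(a: 1, b: 0, c: 0.3, d: 1, tx: 0, ty: 0)
let topLeft = CGPoint(x: 0, y: 1).applying(shear)    // (0.3, 1.0)
let topRight = CGPoint(x: 1, y: 1).applying(shear)   // (1.3, 1.0)
// The top edge stays horizontal and parallel to the bottom edge: a square can
// become a parallelogram this way, but never a trapezoid.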
I don't know that much about transformations in Core Animation, so I hope that mathy stuff helps you find an alternative.
But I do know how you could do it in OpenGL; it would, however, require you to start over on how you draw your application:
If I'm envisioning the result you want correctly, you want to build your rectangle in 3D, use an affine transformation to rotate it away a little bit, and use a (non-affine) projection transformation to flatten it into a 2D image.
If you're not looking for a 3D effect, but you really just want to pinch in the corners, then you can specify a GL_RECT with the points of your trapezoid and map your sprite onto it as a texture.
The easiest thing might be to pre-squeeze your image in a photo editor, save it as a .png with transparency, and draw a rectangle with that image.
You need to apply a CATransform3D to the layer of the UIView.
To find out the right one, it is easier to use AGGeometryKit.
#import <AGGeometryKit/AGGeometryKit.h>
UIView *view = ...; // create a view
view.layer.anchorPoint = CGPointZero;
AGKQuad quad = view.layer.quadrilateral;
quad.tl.x += 20; // shift the top-left x-value by 20 pixels
quad.tr.x -= 20; // shift the top-right x-value by 20 pixels
view.layer.quadrilateral = quad; // the quad is converted to CATransform3D and applied
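If you'd rather avoid a dependency, a rough trapezoid look can also be had with plain Core Animation by combining a perspective term with a rotation around the x-axis. A sketch (someView stands for whatever view you want to distort):
import UIKit
let layer = someView.layer              // someView: the view to distort (illustrative)
var perspective = CATransform3DIdentity
perspective.m34 = -1.0 / 500.0          // smaller magnitude = subtler perspective
// Tilting the layer away from the viewer makes the far edge appear narrower,
// which reads as a trapezoid on screen.
layer.transform = CATransform3DRotate(perspective, .pi / 6, 1, 0, 0)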
