Sprite Kit: SKNode zRotation + Anti-Aliasing

I create an SKSpriteNode with just a fill color and a size, then I rotate it:
SKSpriteNode *myNode = [SKSpriteNode spriteNodeWithColor:[SKColor grayColor] size:CGSizeMake(100, 100)];
myNode.zRotation = 0.2 * M_PI;
How do I enable anti-aliasing for my SKSpriteNode? Right now, the edges of the gray square look jagged.
What I already found out: When I create a gray 100x100px PNG and use spriteNodeWithImageNamed:, the edges look jagged, too. If I add a 1px transparent border around the gray square PNG, the edges look smooth. (Since the jagged edges are now transparent.)
Thank you in advance for your help!
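For reference, the 1px transparent border workaround described above can also be done in code instead of in the PNG. A minimal Swift sketch, assuming UIKit is available for the drawing; the helper name makeBorderedTexture is a placeholder, not from the question:
import SpriteKit
import UIKit

// Draw the solid square into a canvas 2 points larger, leaving a 1 point transparent rim.
func makeBorderedTexture(color: UIColor, size: CGSize) -> SKTexture {
    let canvas = CGSize(width: size.width + 2, height: size.height + 2)
    let image = UIGraphicsImageRenderer(size: canvas).image { context in
        color.setFill()
        context.fill(CGRect(x: 1, y: 1, width: size.width, height: size.height))
    }
    return SKTexture(image: image)
}

let texture = makeBorderedTexture(color: .gray, size: CGSize(width: 100, height: 100))
let myNode = SKSpriteNode(texture: texture)
myNode.zRotation = 0.2 * .pi  // the rotated edges now blend into the transparent rim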

I know this question is very old, but I have a simple solution. It may be that your image does not scale well; vector images with large curved surfaces are a challenge to resize.
What I do is:
1) If you import a vector file, be sure to convert it to PNG at a very large DPI (360) and size;
2) Open the PNG in GIMP;
3) Apply a Gaussian blur with a factor of 11 (or greater).
Now use your image as a texture. If it is still jagged, set usesMipmaps to true on the texture.
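A minimal Swift sketch of that last step, assuming the prepared PNG is in the bundle under a hypothetical name "graySquare":
import SpriteKit

let texture = SKTexture(imageNamed: "graySquare")
texture.usesMipmaps = true        // generate mipmaps so downscaled edges stay smooth
texture.filteringMode = .linear   // blend texels instead of picking the nearest one

let node = SKSpriteNode(texture: texture)
node.zRotation = 0.2 * .pi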

Related

UIBezierPath from UIImage removing transparent pixels

Given a UIImage with a transparent background, I want to calculate a Bezier path for the image that excludes the transparent pixels (like the dotted path around the shape in the output image of the original post).
Is there a good way to achieve this?
I have one solution:
1. Detect edges using GPUImage with 1.0 precision - image1
2. Detect edges using GPUImage with 2.0 precision - image2
3. image3 = image2 - image1
4. Iterate through every pixel, and wherever a dark point is found, add that coordinate to the bezier path (a rough sketch of this step follows below).
What could be a better solution?
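For what it's worth, a rough Swift sketch of the pixel-scanning step, with one simplifying swap: instead of looking for dark points in the subtracted edge image, it reads the alpha channel of the original image directly and keeps opaque pixels that sit next to a transparent one. The function name outlinePoints is a placeholder, and turning the collected points into a single UIBezierPath would still require a contour-tracing pass (e.g. marching squares):
import UIKit

// Collect boundary pixels of the opaque region: opaque pixels with a transparent 4-neighbour.
func outlinePoints(of image: UIImage) -> [CGPoint] {
    guard let cgImage = image.cgImage else { return [] }
    let width = cgImage.width, height = cgImage.height

    // Redraw into a known RGBA8 layout so the alpha bytes can be read directly.
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }

    func alpha(_ x: Int, _ y: Int) -> UInt8 {
        guard x >= 0, x < width, y >= 0, y < height else { return 0 }
        return pixels[(y * width + x) * 4 + 3]
    }

    var points: [CGPoint] = []
    for y in 0..<height {
        for x in 0..<width where alpha(x, y) > 0 {
            if alpha(x - 1, y) == 0 || alpha(x + 1, y) == 0 ||
               alpha(x, y - 1) == 0 || alpha(x, y + 1) == 0 {
                points.append(CGPoint(x: x, y: y))
            }
        }
    }
    return points
}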

Object detection by color in openCV

I have a simple colorful image taken with a camera, and I need to detect some red circles in it very accurately. The circles have different radii and should be distinguishable. There are also some black circles in the photo.
Here is the procedure I followed:
1 - Convert from RGB to HSV
2 - Determine the "red" upper and lower bounds:
lower_red = np.array([100, 50, 50])
upper_red = np.array([179, 255, 255])
3 - Create a mask.
4 - Apply cv2.GaussianBlur to smooth the mask and reduce noise.
5 - Detect the remaining circles by running cv2.HoughCircles on the mask with different radii. (I have a radius range.)
Problem: when I create the mask, its quality is not good enough, so the circles are detected with the wrong radii.
The attachments in the original post include the main photo, the mask, and the detected circles.
Can anybody help me set all pixels to black apart from the red pixels? In other words, how do I create a higher-quality mask?
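One thing worth checking: red wraps around hue 0 on OpenCV's 0-179 hue scale, so the 100-179 range above also includes blues and purples while missing the reds just above hue 0. A hedged Python sketch of a two-range mask; the threshold values, Hough parameters, and filename are illustrative, not tuned for this photo:
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red sits at both ends of the hue axis (0-179 in OpenCV), so combine two ranges.
mask_low  = cv2.inRange(hsv, np.array([0, 70, 50]),   np.array([10, 255, 255]))
mask_high = cv2.inRange(hsv, np.array([170, 70, 50]), np.array([179, 255, 255]))
mask = cv2.bitwise_or(mask_low, mask_high)

# Morphological opening/closing removes speckles and fills small holes in the mask.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

mask = cv2.GaussianBlur(mask, (9, 9), 2)
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=10, maxRadius=100)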

Pixelated circles when scaling with SKSpriteNode

The perimeter of a circle gets pixelated when scaling down the image.
The embedded circle image has a radius of 100 pixels. (The circle in the attached image is white, so it is hard to see against a white background.) Scaling it down with SpriteKit causes the border to become very blurry and pixelated. How can I scale up/down and preserve sharp borders in SpriteKit? The goal is to use a single base image for a circle and create circle images of different sizes from it.
// Create dot
let dot = SKSpriteNode(imageNamed: "dot50")
// Position dot
dot.position = scenePoint
// Size dot
let scale = radius / MasterDotRadius
println("Dot size and scale: \(radius) and \(scale)")
dot.setScale(scale)
dot.texture!.filteringMode = .Nearest
It seems you should use SKTextureFilteringLinear instead of SKTextureFilteringNearest:
SKTextureFilteringNearest:
Each pixel is drawn using the nearest point in the texture. This mode
is faster, but the results are often pixelated.
SKTextureFilteringLinear:
Each pixel is drawn by using a linear filter of multiple texels in the
texture. This mode produces higher quality results but may be slower.
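A minimal Swift sketch of that change against the question's setup; MasterDotRadius of 100 matches the base image described in the question, and the target radius of 30 is just an illustrative value:
import SpriteKit

let MasterDotRadius: CGFloat = 100      // radius of the base "dot50" image, per the question
let radius: CGFloat = 30                // desired on-screen radius (illustrative)

let dot = SKSpriteNode(imageNamed: "dot50")
dot.setScale(radius / MasterDotRadius)
dot.texture?.filteringMode = .linear    // blend neighbouring texels instead of snapping to the nearest one
dot.texture?.usesMipmaps = true         // optional: prebuilt smaller levels help with strong downscaling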
You can use SKShapeNode, which behaves better during a scale animation, but the end result (once the dot is scaled to some value) will be almost as pixelated as with an SKSpriteNode and an image.

Retrieve Circle Points

In OpenCV, I know how to draw circles, but is there a way to get back all the points that make up the circle? I hope I don't have to go through calculating contours.
Thanks
If you know how to draw the circle:
Create a black image with the same size as the original image.
Then draw the circle on the black image in white.
Now check which points in this black image are white.
If you are using the Python API, you can do it as follows:
import numpy as np
import cv2
img = np.zeros((500,500),np.uint8)
cv2.circle(img,(250,250),100,255)
points = np.transpose(np.where(img==255))
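# note: np.where returns (row, column) indices, so each entry in points is (y, x), not (x, y)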
You can do the same thing in C/C++, analogous to the Python code above.
If you know how to draw the circle:
Create a black image with the same size as the original image.
Then draw the circle on the black image in white.
Now, instead of checking which pixels have a certain value, you can find a contour (represented as a vector of points) along the circle's edge.
To do this you can use OpenCV's findContours function, which will give you the points on the circle's edge.
Actually, the background doesn't have to be black and the circle white, but the background should be plain and the circle should have a different color than the background.
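A short Python sketch of that contour-based variant; the drawing setup mirrors the earlier snippet, and the version check is only there because findContours returns an extra value in OpenCV 3.x:
import numpy as np
import cv2

img = np.zeros((500, 500), np.uint8)
cv2.circle(img, (250, 250), 100, 255)     # white circle outline on a black canvas

# RETR_EXTERNAL keeps only the outer contour; CHAIN_APPROX_NONE keeps every edge point.
result = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = result[0] if len(result) == 2 else result[1]
edge_points = contours[0].reshape(-1, 2)  # N x 2 array of (x, y) points along the edge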

How to distort a Sprite into a trapezoid?

I am trying to transform a sprite into a trapezoid. I don't really care about the interpolation, even though I know that without it my image will lose detail. All I really want is to transform my rectangular sprite into a trapezoid like this:
/ \
/ \
/__________\
Has anyone done this with CGAffineTransforms or with cocos2d?
The transformation you're proposing is not affine. Affine transformations have to be undoable. So they can typically:
Scale
Rotate
Shear (make lopsided, like square -> parallelogram)
Translate
But they cannot "squeeze". Notice that the left and right sides of your trapezoid, if extended, would intersect at a particular spot; they're not parallel anymore. So you couldn't "undo" the transformation, because if there were anything to transform at that intersection spot, you couldn't decide where it would transform to. In other words, if a transformation doesn't preserve parallelism, it pinches space, can't be undone, and isn't affine. (Concretely, a 2D affine map sends a point p to A·p + b for a fixed 2x2 matrix A and offset b, so two lines sharing a direction d both come out with direction A·d and stay parallel; sending the square's parallel left and right edges to non-parallel lines is impossible for any choice of A and b.)
I don't know that much about transformations in Core Animation, so I hope that mathy stuff helps you find an alternative.
I do know how you could do it in OpenGL, though it would require you to start over on how you draw your application:
If I'm envisioning the result you want correctly, you want to build your rectangle in 3D, use an affine transformation to rotate it away a little bit, and use a (non-affine) projection transformation to flatten it into a 2D image.
If you're not looking for a 3D effect, but you really just want to pinch in the corners, then you can specify a quad (e.g. GL_QUADS) with the points of your trapezoid and map your sprite onto it as a texture.
The easiest thing might be to pre-squeeze your image in a photo editor, save it as a .png with transparency, and draw a rectangle with that image.
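For completeness, the "rotate away in 3D and project" idea above can also be expressed with Core Animation instead of OpenGL. A rough Swift sketch; the image name and the 1/500 perspective factor are illustrative starting values, not something from this thread:
import UIKit

let sprite = UIImageView(image: UIImage(named: "sprite"))  // placeholder image name

var transform = CATransform3DIdentity
transform.m34 = -1.0 / 500.0                                  // perspective term; larger magnitude = stronger foreshortening
transform = CATransform3DRotate(transform, .pi / 6, 1, 0, 0)  // tilt around the x-axis so the far edge recedes
sprite.layer.transform = transform                            // the receding edge now renders narrower, giving a trapezoid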
You need to apply a CATransform3D to the layer of the UIView.
To find out the right one, it is easier to use AGGeometryKit.
#import <AGGeometryKit/AGGeometryKit.h>
UIView *view = ...; // create a view
view.layer.anchorPoint = CGPointZero;
AGKQuad quad = view.layer.quadrilateral;
quad.tl.x += 20; // shift top left x-value with 20 pixels
quad.tr.x -= 20; // shift top right x-value with 20 pixels
view.layer.quadrilateral = quad; // the quad is converted to CATransform3D and applied
