The perimeter of a circle gets pixelated when scaling down the image.
The embedded circle image has a radius of 100 pixels (the circle is white, so it is hard to see against a blank background). Scaling it down with SpriteKit makes the border very blurry and pixelated. How do you scale up or down while preserving sharp borders in SpriteKit? The goal is to use one base image for a circle and create circle images of different sizes from that one base image.
// Create dot
let dot = SKSpriteNode(imageNamed: "dot50")
// Position dot
dot.position = scenePoint
// Size dot (MasterDotRadius is the radius of the base image)
let scale = radius / MasterDotRadius
print("Dot size and scale: \(radius) and \(scale)")
dot.setScale(scale)
dot.texture!.filteringMode = .nearest
It seems you should use SKTextureFilteringLinear (.linear in Swift) instead of SKTextureFilteringNearest (.nearest). From the documentation:
SKTextureFilteringNearest: Each pixel is drawn using the nearest point in the texture. This mode is faster, but the results are often pixelated.
SKTextureFilteringLinear: Each pixel is drawn by using a linear filter of multiple texels in the texture. This mode produces higher quality results but may be slower.
You can also use SKShapeNode, which behaves better during a scale animation, but the end result (once the dot is scaled to some value) will be almost as pixelated as with SKSpriteNode and an image. To avoid scaling a bitmap at all, you can generate the texture at the exact size you need, as in the sketch below.
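A minimal sketch of that idea, assuming an SKView is available to rasterize nodes; makeDot is a hypothetical helper, not code from the question:

import SpriteKit

// Render the circle at the exact radius needed instead of scaling a bitmap.
func makeDot(radius: CGFloat, in view: SKView) -> SKSpriteNode {
    let shape = SKShapeNode(circleOfRadius: radius)
    shape.fillColor = .white
    shape.strokeColor = .clear
    shape.isAntialiased = true
    // texture(from:) rasterizes the node at its current size,
    // so the edge is crisp at this radius rather than upscaled.
    let texture = view.texture(from: shape)
    texture?.filteringMode = .linear
    return SKSpriteNode(texture: texture)
}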
As you can see, the texture for the photoFrame is a square image, but when I set it as the diffuse contents the result looks terrible. How can I display the square image in the rectangular frame without stretching it?
A lot of what you see depends on what geometry the texture is mapped onto. Assuming those picture frames are SCNPlane or SCNBox geometries, the face of the frame has texture coordinates ranging from (0,0) in the upper left to (1,1) in the lower right, regardless of the geometry's dimensions or aspect ratio. SceneKit maps the image the same way, top left at (0,0) and lower right at (1,1), regardless of the image's pixel dimensions. So unless you have a geometry whose aspect ratio matches that of the texture image, you're going to see cases like this where the image gets stretched.
There are a couple of things you can do to "fix" your texture:
1) Know (or calculate) the aspect ratios of your image and the geometry (face) you want to put it on, then use the material's contentsTransform to correct the image (see the sketch after these options).
For example, if you have an SCNPlane whose width is 2 and height is 1 and you assign a square image to it, the image gets stretched horizontally. If you set the contentsTransform to a matrix created with SCNMatrix4MakeScale(2,1,1), it doubles the texture coordinates in the horizontal direction, effectively squeezing the image to half its width and "fixing" the aspect ratio for your 2:1 plane. Note that you might also need a translation, depending on where you want the half-width image to appear on the face of the geometry.
If you're doing this in the scene editor in Xcode, contentsTransform corresponds to the "offset", "scale", and "rotation" controls in the material editor, below where you assigned the image in your screenshot.
2) Know (or calculate) the aspect ratio of your geometry, plus at least some information about the size of your image, and create a modified texture image that fits.
For example, if you have a 2:1 plane as above and you want to put a 320x480 image on it, create a new texture image with dimensions of 960x480, matching the aspect ratio of the plane. You can use this image to create whatever style of background you want, with your 320x480 image composited on top of that background at whatever position you want.
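A minimal sketch of option 1, assuming a 2:1 SCNPlane and a square image named "photo.png" (both placeholders):

import SceneKit

let plane = SCNPlane(width: 2, height: 1)
let material = SCNMaterial()
material.diffuse.contents = "photo.png"
// Double the horizontal texture coordinate so the square image is no
// longer stretched across the 2:1 face; it now covers a 1:1 region.
material.diffuse.contentsTransform = SCNMatrix4MakeScale(2, 1, 1)
// Decide what fills the uncovered half of the face:
// .repeat tiles the image, .clamp smears the edge pixels.
material.diffuse.wrapS = .repeat
plane.materials = [material]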
I changed the scale, offset, and wrapT properties in the material editor and the effect was good. But when I run the app I can't reproduce the same effect, so I tried to set the contentsTransform property in code. Both scale and offset feed into contentsTransform, though. If the offset is (0, -4.03) and the scale is (1, 1.714), what is the contentsTransform?
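The editor's scale and offset fields compose into that single matrix. A sketch of the usual composition, continuing from the material in the sketch above, with the scale applied first and the offset applied as a translation; the exact order is an assumption worth verifying against what you see in the editor:

// Values from the question: offset (0, -4.03), scale (1, 1.714).
let scaleMatrix = SCNMatrix4MakeScale(1, 1.714, 1)
let transform = SCNMatrix4Translate(scaleMatrix, 0, -4.03, 0)
material.diffuse.contentsTransform = transform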
How do you draw a textured line using SKShapeNode in SpriteKit?
For example, how do you draw a chalk-like textured line on touch move?
Is the following method correct?
[lineNode setStrokeTexture:[SKTexture textureWithImageNamed:@"texture.png"]];
But it shows nothing and the line is empty.
A possible solution for your task is to use an SKCropNode, with the line node set as the crop node's maskNode and a texture node added as a child of the crop node. Keep in mind, however, that SKCropNode does not use the alpha value of the mask pixels to smoothly mask out the target image: it simply checks whether the mask pixel's alpha is greater than 0.05 and, if so, displays the corresponding target pixel; otherwise it masks the pixel out completely. So the result may be somewhat pixelated.
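A minimal sketch of that setup, assuming a "chalk" texture in the bundle and a path built from touch locations (both placeholders):

import SpriteKit

let path = CGMutablePath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 200, y: 120))

let line = SKShapeNode(path: path)
line.lineWidth = 8
line.strokeColor = .white          // only the mask's alpha matters

let crop = SKCropNode()
crop.maskNode = line

// The texture shows through wherever the line's alpha exceeds 0.05.
let chalk = SKSpriteNode(imageNamed: "chalk")
chalk.size = CGSize(width: 240, height: 160)   // large enough to cover the line
crop.addChild(chalk)
// Then add "crop" to your scene.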
I create an SKSpriteNode with just a fill color and a size, then I rotate it:
SKSpriteNode *myNode = [SKSpriteNode spriteNodeWithColor:[SKColor grayColor] size:CGSizeMake(100, 100)];
myNode.zRotation = 0.2 * M_PI;
How do I enable anti-aliasing for my SKSpriteNode? Right now, the edges of the gray square look jagged.
What I already found out: When I create a gray 100x100px PNG and use spriteNodeWithImageNamed:, the edges look jagged, too. If I add a 1px transparent border around the gray square PNG, the edges look smooth. (Since the jagged edges are now transparent.)
I know this question is very old, but I have a simple solution. Maybe it's your image that doesn't scale well; vector images with large curved surfaces are a challenge to resize.
What I do is:
1) If you import a vector file, be sure to convert it to PNG at a very high DPI (360 or more) and a large size;
2) Open the PNG in Gimp;
3) Apply a Gaussian blur with a factor of 11 (or greater).
Now use the image as a texture. If it is still jagged, set usesMipmaps to true.
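A minimal sketch of that last step, assuming the blurred PNG is in the bundle as "square" (a placeholder name):

import SpriteKit

let texture = SKTexture(imageNamed: "square")
texture.usesMipmaps = true        // generate downscaled mip levels
texture.filteringMode = .linear   // smooth sampling instead of nearest-pixel

let node = SKSpriteNode(texture: texture)
node.zRotation = 0.2 * .pi        // rotated edges now sample more smoothly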
I am developing face feature detection in my project.
So far I have implemented detecting the face, then finding the eyes within the face.
I want to crop the eyes, which are circular.
circle( mask, center, radius, cv::Scalar(255,255,255), -1, 8, 0 ); // filled white circle on the mask
image.copyTo( dst, mask ); // copy only the pixels under the mask
In the code above I am able to mask the image with black, leaving only the eye region. Now I want to crop just the eye region.
Can anybody help me out with this issue?
Cropping, by definition, means cutting an axis-aligned rectangle from a larger image, leaving a smaller image.
If you want to "crop" a non-rectangular or non-axis-aligned region, you will have to use a mask. The mask can be the size of the full image (this is sometimes convenient), or as small as the smallest axis-aligned bounding rectangle containing all the pixels you want to leave visible.
This mask can be binary, meaning that it indicates whether or not each pixel is visible, or it can be an alpha mask, which indicates the degree of transparency of each pixel, with 0 indicating a non-visible pixel and (for an 8-bit mask image) 255 indicating full opacity.
In your example above you can get the sub-image ROI (Region-Of-Interest) like this:
cv::Mat eyeImg = image(cv::Rect(center.x - radius, // ROI x-offset, left coordinate
center.y - radius, // ROI y-offset, top coordinate
2*radius, // ROI width
2*radius)); // ROI height
Note that eyeImg is not a copy, but refers to the same pixels within image. If you want a copy, add a .clone() at the end.
I'm doing some drawing relative to a scaled image, so I end up with fractional CGPoints (I am scaling the results from the CoreImage face detection routine).
Do I want to round these myself, or leave it to iOS when I use these points in CGPathAddLineToPoint calls? If rounding is better, should I round up or down?
I've read about pixel boundaries, etc., but I'm not sure how to apply that here. I am drawing to a CALayer.
CGPoint leftEye = CGPointMake((leftEyePosition.x * xScale),
(leftEyePosition.y * yScale));
// result
features {
faceRect = "{{92, 144.469}, {166.667, 179.688}}";
hasLeftEyePosition = 1;
hasMouthPosition = 1;
hasRightEyePosition = 1;
leftEyePosition = "{142.667, 268.812}";
mouthPosition = "{176, 189.75}";
rightEyePosition = "{207.333, 269.531}";
}
Whether or not you round, and in what direction, depends on the effect you are trying to accomplish.
CoreGraphics itself has absolutely no problem with fractional coordinates. However, drawing anything using fractional coordinates is going to end up antialiasing the drawn objects. This typically causes them to look fuzzy. Rounding your coordinates appropriately is a good idea to avoid this.
Be warned, however. Depending on what you're drawing and how, you may want coordinates that are 0.5 pixels off instead of integral coordinates. For example, if you're drawing a line, the line is centered on the coordinate you give. So a 1-pixel line drawn on integral coordinates will actually end up being a fuzzy line 2 pixels wide (with each pixel accounting for half of the line). The simplest thing to remember is that strokes are centered on the coordinates, but fills are bounded by them. So when filling a rectangle, integral coordinates are best. When stroking a rectangle, inset your coordinates by 0.5 pixels (or, rather, by half of the stroke width you want to use).
Also, don't forget that when drawing an image that's meant to be displayed on a retina screen with scale=2, coordinates that are 0.5 units off are actually still on pixel boundaries. So if you know it's retina, you can avoid rounding to fully integral coordinates when the nearest half-unit coordinate is fine.
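A minimal sketch of that rounding, assuming UIKit; pixelAlign is a hypothetical helper, not an iOS API:

import UIKit

// Snap a fractional point to the device pixel grid. For fills pass
// offset 0; for a 1-pixel-wide stroke pass offset 0.5 / scale so the
// stroke centers on a pixel instead of straddling two.
func pixelAlign(_ p: CGPoint,
                scale: CGFloat = UIScreen.main.scale,
                offset: CGFloat = 0) -> CGPoint {
    CGPoint(x: (p.x * scale).rounded() / scale + offset,
            y: (p.y * scale).rounded() / scale + offset)
}

let leftEye = pixelAlign(CGPoint(x: 142.667, y: 268.812))
// On a 2x screen this snaps to the nearest half point, which is
// still a pixel boundary, as noted above.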