(Input and output example images; the output shows the desired dotted path.)
Given a UIImage with a transparent background, I want to calculate a Bezier path around the image's non-transparent pixels (like the dotted path you can see in the output image).
What is the best way to achieve this?
I have one solution:
1. Detect edges using GPUImage with 1.0 precision - image1
2. Detect edges using GPUImage with 2.0 precision - image2
3. image3 = image2 - image1.
4. Iterate through every pixel and, wherever a dark point is found, add that coordinate to the Bezier path (a sketch of this step follows below).
What would be a better solution?
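For step 4, a rough sketch could look like the following, assuming edgeImage is the difference image (image3) from step 3 and that the 0.2 darkness threshold is an arbitrary choice:

import UIKit

// Rough sketch of step 4: walk every pixel of the edge image and append each
// opaque, sufficiently dark point to a UIBezierPath. The resulting path is in
// bitmap coordinates (top-left origin, one point per matching pixel).
func path(fromEdgeImage edgeImage: UIImage, darkThreshold: CGFloat = 0.2) -> UIBezierPath {
    let path = UIBezierPath()
    guard let cgImage = edgeImage.cgImage else { return path }

    let width = cgImage.width, height = cgImage.height
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return path }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return path }
    let pixels = data.bindMemory(to: UInt8.self, capacity: width * height * 4)

    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            let alpha = CGFloat(pixels[i + 3]) / 255
            let luma  = CGFloat(Int(pixels[i]) + Int(pixels[i + 1]) + Int(pixels[i + 2])) / (3 * 255)
            if alpha > 0, luma < darkThreshold {   // "dark point": opaque enough and dark enough
                let p = CGPoint(x: x, y: y)
                if path.isEmpty { path.move(to: p) } else { path.addLine(to: p) }
            }
        }
    }
    return path
}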
Related
I have two images: a monochrome one that is a mask, and another in full color. What I need to do is find the CGRect of the mask (the white pixels) within the full-color one.
What I did first is find the contour of the mask using the Vision framework. This returns a CGPath in normalised coordinates. How can I translate this path into the coordinates of the other image? Both have been scaled the same way to make them the same size, so the translation should be "easy", but I can't figure it out.
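One approach, as a minimal sketch: assuming the path comes from a VNContoursObservation (normalised coordinates in 0...1 with a bottom-left origin) and imageSize is the pixel size of the full-color image, scale the path by the image size and flip the Y axis.

import Vision
import CoreGraphics

// Sketch: convert a normalised (0...1, bottom-left origin) CGPath from the
// Vision framework into top-left-origin pixel coordinates of the target image.
func imageSpacePath(from normalizedPath: CGPath, imageSize: CGSize) -> CGPath {
    var transform = CGAffineTransform(translationX: 0, y: imageSize.height)
        .scaledBy(x: imageSize.width, y: -imageSize.height)
    return normalizedPath.copy(using: &transform) ?? normalizedPath
}

If only the CGRect of the mask is needed, VNImageRectForNormalizedRect(_:_:_:) scales a normalised rect (for example the contour's normalised bounding box) into pixel coordinates; note it does not flip the Y axis, so a top-left-origin consumer may still need to flip the resulting rect.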
I want to cut a rectangular shape out of an image.
I know the position of the rectangle relative to the original image.
However, when I rotate/scale/translate the image, the image dimensions change, so my crop to the rectangle no longer works.
Is there a way to preserve the "old" coordinate system while transforming the image, and then crop based on that old system?
The image is always rotated/scaled around its center. Translation is applied after scaling and rotation.
I am aiming for a solution in PHP with Imagick. However, if that's not possible, I'm fine with command-line ImageMagick too.
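Not an Imagick answer, but the underlying coordinate math is the same in any language. Here is a hedged sketch (in Swift, with illustrative parameter names) of mapping a point from the original image into the transformed image, assuming the canvas expands to fit the rotated/scaled image and that positive angles rotate clockwise in the y-down image coordinate system, as with ImageMagick's rotate:

import Foundation
import CoreGraphics

// Hedged sketch: map a point from the original image's coordinate system into
// the coordinate system of the image after it has been scaled, rotated about
// its centre (with the canvas expanding to fit), and then translated.
// Parameter names are illustrative, not Imagick API.
func mapPoint(_ p: CGPoint, originalSize: CGSize,
              scale: CGFloat, angle: CGFloat,          // angle in radians
              translation: CGPoint) -> CGPoint {
    let w = originalSize.width, h = originalSize.height
    // Canvas size after scaling and rotating (bounding box of the rotated image).
    let newW = scale * (w * abs(cos(angle)) + h * abs(sin(angle)))
    let newH = scale * (w * abs(sin(angle)) + h * abs(cos(angle)))
    // Rotate and scale the point about the original centre (y axis points down).
    let dx = p.x - w / 2, dy = p.y - h / 2
    let rx = scale * (dx * cos(angle) - dy * sin(angle))
    let ry = scale * (dx * sin(angle) + dy * cos(angle))
    // Re-anchor at the new canvas centre, then apply the final translation.
    return CGPoint(x: rx + newW / 2 + translation.x,
                   y: ry + newH / 2 + translation.y)
}

Mapping the rectangle's four corners this way, and taking their bounding box, should give the crop region in the transformed image's pixel coordinates.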
The perimeter of a circle gets pixelated when scaling down the image.
The embedded circle image has a radius of 100 pixels. (The circle is white, so click around the blank space and you'll get the image.) Scaling down using SpriteKit causes the border to become very blurry and pixelated. How can I scale up/down while preserving sharp borders in SpriteKit? The goal is to use one base image for a circle and create circle images of different sizes from that one base image.
// Create dot
let dot = SKSpriteNode(imageNamed: "dot50")
// Position dot
dot.position = scenePoint
// Size dot
let scale = radius / MasterDotRadius
println("Dot size and scale: \(radius) and \(scale)")
dot.setScale(scale)
dot.texture!.filteringMode = .Nearest
It seems you should use SKTextureFilteringLinear instead of SKTextureFilteringNearest:
SKTextureFilteringNearest:
Each pixel is drawn using the nearest point in the texture. This mode
is faster, but the results are often pixelated.
SKTextureFilteringLinear:
Each pixel is drawn by using a linear filter of multiple texels in the
texture. This mode produces higher quality results but may be slower.
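In code that is a one-line change on the texture (dot being the sprite from the question; .linear is also SpriteKit's default filtering mode):

// Linear filtering samples multiple texels per pixel, which keeps the circle's
// edge smoother when the sprite is scaled down.
dot.texture?.filteringMode = .linear   // .Linear in the older Swift syntax used above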
You can also use SKShapeNode, which behaves better during scale animations, but the end result (when the dot is scaled to some value) will be almost as pixelated as when using an SKSpriteNode with an image.
I'm trying to create a GPUImage filter to determine the bounding box of an image. The process would require the following steps: copy the image, except that:
1. a pixel on the last row is black if at least one of the pixels above it is not completely transparent
2. a pixel on the last column is black if at least one of the pixels to its left is not completely transparent
3. a pixel on both the last row and the last column is black if it would be black according to rule 1 or rule 2
This would convert image A into image B:
How could I achieve this easily?
GPUImage filters are driven by fragment shaders, so you can write a shader that implements the algorithm above. Basically, you need to subclass GPUImageFilter and supply your own shader file. :)
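As a rough, untested sketch of what such a shader could look like for rule 1 only: texelHeight is a custom uniform you have to supply yourself, and the loop over the column may need a compile-time bound on GPUs that only support the minimal GLSL ES feature set.

import GPUImage

// Hedged sketch: fragment shader that blackens a pixel on the image's last row
// (assuming y near 1.0 is the last row) when any pixel above it in the same
// column is not fully transparent (rule 1). Rules 2 and 3 would follow the
// same pattern along rows and for the corner pixel.
let columnProjectionShader = """
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float texelHeight;   // 1.0 / image height, set from the host code

void main() {
    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
    if (textureCoordinate.y > 1.0 - texelHeight) {            // last row only
        for (highp float y = 0.0; y < 1.0; y += texelHeight) {
            if (texture2D(inputImageTexture, vec2(textureCoordinate.x, y)).a > 0.0) {
                color = vec4(0.0, 0.0, 0.0, 1.0);             // column is occupied
                break;
            }
        }
    }
    gl_FragColor = color;
}
"""

let boundingRowFilter: GPUImageFilter = GPUImageFilter(fragmentShaderFromString: columnProjectionShader)
boundingRowFilter.setFloat(1.0 / 1024.0, forUniformName: "texelHeight") // assuming a 1024-px-tall input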
I am trying to write a program in C# or C++ that determines whether an image (PNG or JPG) contains a black rectangle larger than 5 × 5 pixels. If there are multiple such rectangles in the image, it should give me the coordinates of all of them.
You could try an Image Correlation: