Wavy text inside of UIBezierPath in iOS

I saw a trekking app on Android that built routes based on selected criteria. It could also build routes from provided gpx files. All those routes were very wavy, and above each highlighted route I could see its name, rendered as very wavy text moving along with the highlighted path and repeating all its curls and waves. I wonder how to create the same effect in iOS.
What I have is a gpx file. In short, it's just a very long array of tuples:
typealias Coordinates = (Double, Double) // x and y
let points: [Coordinates] = [ (120, 120), (130, 135), (135, 125), (138, 122) ]
Coordinates are in pixels, and I use the Catmull-Rom interpolation algorithm to build a UIBezierPath with smooth, rounded corners.
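For context, the uniform Catmull-Rom to cubic Bezier conversion boils down to very little code; here is a rough, platform-agnostic sketch in Python (not my actual iOS code, just the math; the 1/6 tangent factors are the standard uniform parameterization):
# Rough sketch: uniform Catmull-Rom samples -> cubic Bezier segments.
# Each segment runs from p1 to p2; neighbours p0 and p3 set the tangents.
# Duplicate the first and last points if you need the end segments too.
def catmull_rom_to_bezier(pts):
    segments = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        c1 = (p1[0] + (p2[0] - p0[0]) / 6.0, p1[1] + (p2[1] - p0[1]) / 6.0)
        c2 = (p2[0] - (p3[0] - p1[0]) / 6.0, p2[1] - (p3[1] - p1[1]) / 6.0)
        segments.append((p1, c1, c2, p2))  # start, control1, control2, end
    return segments

points = [(120, 120), (130, 135), (135, 125), (138, 122)]
print(catmull_rom_to_bezier(points))  # one cubic segment per interior pair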
I can draw wavy text by changing the angle of each letter in a playground, but calculating all those transforms for an array of pixels looks overly complicated.
Probably there's a better solution?

Related

What do the coordinates mean in love.graphics.polygon

I don't know which numbers do what in the coordinates example here. I imagine they mean things like "place the top left corner at this position and the bottom right corner at this position", but I don't know which number corresponds to which position.
I've been trying to fool around with the numbers to get a small green rectangle, but I keep getting weird results like the following, and I don't know which numbers need to be what in order to make the rectangle symmetrical and at the bottom.
This is what the rectangle should look like
The height of the rectangle is 50, the height of the screen is 1000, and the width of the screen is 1700.
Here's my draw function
function love.draw()
    love.graphics.setColor(0.28, 0.63, 0.05) -- set the drawing color to green for the ground
    love.graphics.polygon("fill", objects.ground.body:getWorldPoints(objects.ground.shape:getPoints())) -- draw a "filled in" polygon using the ground's coordinates
    -- These are the ground's coordinates: -11650 950 13350 950 13350 1000 -11650 1000
    love.graphics.setColor(0.76, 0.18, 0.05) -- set the drawing color to red for the ball
    love.graphics.circle("fill", objects.ball.body:getX(), objects.ball.body:getY(), objects.ball.shape:getRadius())
    love.graphics.setColor(0.20, 0.20, 0.20) -- set the drawing color to grey for the blocks
    love.graphics.polygon("fill", objects.block1.body:getWorldPoints(objects.block1.shape:getPoints()))
    love.graphics.polygon("fill", objects.block2.body:getWorldPoints(objects.block2.shape:getPoints()))
    print(objects.block1.body:getWorldPoints(objects.block1.shape:getPoints()))
end
As described at https://love2d.org/wiki/love.graphics, Löve's coordinate system has (0, 0) at the upper left corner of the screen. X values increase to the right, Y values increase down.
The polygon function expects the drawing mode as its first parameter, and the remaining (variable) parameters are the coordinates of the vertices of the polygon you wish to draw. Since you want to draw a rectangle, you need four vertices/eight numbers. You do not have to list the upper left corner of the rectangle first, but that's probably the easiest thing to do.
So in your case, you want something like:
love.graphics.polygon('fill', 0, 950, 0, 1000, 1700, 1000, 1700, 950)
I've not worked with the physics system, so I'm not quite sure how its coordinate system relates to "screen" coordinates. The values you show in the comment in your code listing seem like they should give a rectangle (although x = -11650 wouldn't be on screen). You might try experimenting without the physics system first.
Also, since the physics system in Löve is just a binding to Box2D, you might want to read its documentation (http://box2d.org/about/). I'm not really sure what you're trying to do by feeding shape:getPoints into body:getWorldPoints.

SCNShape doesn't draw a shape for NSBezierPath

I have found that for some NSBezierPaths, SCNShape seems to be unable to draw a shape.
The path is created only using line(to:).
//...set up scene...
//Create path (working)
let path = NSBezierPath()
path.move(to: CGPoint.zero)
path.line(to: NSMakePoint(0.000000, 0.000000))
path.line(to: NSMakePoint(0.011681, 0.029526))
// more points ...
path.close()
// Make a 3D shape (not working)
let shape = SCNShape(path: path, extrusionDepth: 10)
shape.firstMaterial?.diffuse.contents = NSColor.green
let node = SCNNode(geometry: shape)
root.addChildNode(node)
To verify that the general process of creating an SCNShape is correct, I also drew a blue shape that differs only in having different points. The blue shape gets drawn; the green shape doesn't.
You can find a playground containing the full example here. In the example you should be able to see a green and a blue shape in the assistant editor, but only the blue shape gets drawn.
Do you have any idea why the green shape is not shown?
The short story: your path has way more points than it needs, leading to unexpected, hard-to-find geometric problems.
Note this bit in the documentation:
The result of extruding a self-intersecting path is undefined.
As it turns out, somewhere in the first 8 or so points, your "curve" makes enough of a turn the wrong way that the line closing the path (between the first point in the path, 0,0, and the last point, 32.366829, 29.713470) intersects the rest of the path. Here's an attempt at making it visible by excluding all but the first few points and the last point from a playground render (note the tiny zigzag in the bottom left corner):
And at least on some SceneKit versions/renderers, when it tries to make a mesh out of a self-intersecting path it just gives up and makes nothing.
However, you really don't need that many points to make your path look good. Here it is if you use 1x, 1/5x, and 1/10x as many points:
If you exclude enough points overall, and/or skip the few at the beginning that make your curve zag where it should zig, SceneKit renders the shape just fine:
Some tips from diagnosing the problem:
When working with lots of coordinate data like this, I like to use ExpressibleByArrayLiteral so I can easily build an array of lots of points/vectors/etc:
extension CGPoint: ExpressibleByArrayLiteral {
    public init(arrayLiteral elements: CGFloat...) {
        precondition(elements.count == 2)
        self.init(x: elements.first!, y: elements.last!)
    }
}
var points: [CGPoint] = [
    [0.000000, 0.000000],
    [0.011681, 0.029526],
    // ...
]
That gets me an array (and a lot less typing out things like NSPointMake over and over), so I can slice and dice the data to figure out what's wrong with it. (For example, one of my early theories was that there might be something about negative coordinates, so I did some map and min() to find the most-negative X and Y values, then some more map to make an array where all points are offset by a constant amount.)
Now, to make paths using arrays of points, I make an extension on NSBezierPath:
extension NSBezierPath {
    convenience init(linesBetween points: [CGPoint], stride: Int = 1) {
        precondition(points.count > 1)
        self.init()
        move(to: points.first!)
        for i in Swift.stride(from: 1, to: points.count, by: stride) {
            line(to: points[i])
        }
    }
}
With this, I can easily create paths from not just entire arrays of points, but also...
paths that skip parts of the original array (with the stride parameter)
let path5 = NSBezierPath(linesBetween: points, stride: 5)
let path10 = NSBezierPath(linesBetween: points, stride: 10)
(This is handy for generating playground previews a bit more quickly, too.)
paths that use some chunk or slice of the original array
let zigzag = NSBezierPath(linesBetween: Array(points.prefix(upTo: 10)) + [points.last!])
let lopOffBothEnds = NSBezierPath(linesBetween: Array(points[1 ..< points.count - 1]))
Or both... the winning entry (in the screenshot above) is:
let path = NSBezierPath(linesBetween: Array(points.suffix(from: 10)), stride: 5)
You can get a (marginally) better render out of having more points in your path, but an even better way would be to make a path out of curves instead of lines. For extra credit, try extending the NSBezierPath(linesBetween:) initializer above to add curves, keeping every nth point as part of the path while using a couple of the intermediary points as control handles. (It's no general-purpose auto-trace algorithm, but it might be good enough for cases like this.)
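For what it's worth, the grouping logic for that extra-credit exercise is tiny. A hypothetical sketch in Python (the function name and the fixed group size of 3 are my choices; a Swift version would emit curve(to:controlPoint1:controlPoint2:) calls instead of tuples):
# Sketch: keep every 3rd point as an on-curve anchor and use the two
# samples in between as control handles for the cubic segment.
# Raw samples as handles won't give perfectly smooth joins, but it's
# often good enough for dense data like this.
def curve_segments(points):
    segments = []
    for i in range(0, len(points) - 3, 3):
        start, c1, c2, end = points[i], points[i + 1], points[i + 2], points[i + 3]
        segments.append((start, c1, c2, end))
    return segments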
In no way does this compare to Rickster's answer, but there is another way to prevent this kind of problem. It's a commercial option, and there are probably freeware apps that do something similar, but this is one I'm used to using, and it does the job quite well.
What is 'this' that I'm talking about?
The conversion of drawings to code, by an app called PaintCode. It will let you see your paths and be sure they contain none of the conflicts that Rickster pointed out as your issue.
Check it out here: https://www.paintcodeapp.com/
Other options are listed in answers here: How to import/parse SVG into UIBezierpaths, NSBezierpaths, CGPaths?

Wrong result using function fillPoly in opencv for very large images

I am having a hard time creating a mask for a very large image, 40959 px × 24575 px. I noticed that I don't have a problem for images up to a certain size (I tested about 33000 px × 22000 px), but for dimensions larger than that I get an error inside my mask: it turns black in the middle of the polygon, and the white region extends to the left edge. The result should have no black area inside the polygon and no white area extending to the left edge of the image.
So my code looks like this:
pixel_points_list = latLonToPixel(dataSet, lat_lon_pairs)
print pixel_points_list
# This is the list I'm getting:
#[[213, 6259], [22301, 23608], [25363, 22223], [27477, 23608], [35058, 18433], [12168, 282], [213, 6259]]
image = cv2.imread(in_tmpImgFilePath,-1)
print image.shape
#Value of image.shape: (24575, 40959, 4)
mask = np.zeros(image.shape, dtype=np.uint8)
roi_corners = np.array([pixel_points_list], dtype=np.int32)
print roi_corners
#contents of roi_corners:
"""
[[[  213  6259]
  [22301 23608]
  [25363 22223]
  [27477 23608]
  [35058 18433]
  [12168   282]
  [  213  6259]]]
"""
channel_count = image.shape[2]
ignore_mask_color = (255,)*channel_count
cv2.fillPoly(mask, roi_corners, ignore_mask_color)
cv2.imwrite("mask.tif",mask)
And this is the mask I'm getting with those coordinates (scaled-down mask):
You can see that the mask is mirrored in the middle. I took those points from pixel_points_list and drew them on a coordinate system, and I get a valid polygon, but when using fillPoly I get wrong results.
Here is an even simpler example where I have only 4 (5) points:
roi_corners = array([[  213  6259]
                     [22301 23608]
                     [35058 18433]
                     [12168   282]
                     [  213  6259]])
And I get:
Does anyone have a clue why this happens?
Thanks!
The issue is in the function CollectPolyEdges, called by fillPoly (and drawContours, fillConvexPoly, etc...).
Internally, it's assumed that the point coordinates (of integer type int32) have meaningful values only in the 16 lowest bits. In practice, you can draw correctly only if your points have coordinates up to 32768 (which is exactly the maximum x coordinate you can draw in your image).
This can't really be considered a bug, since your images are extremely large.
As a workaround, you can scale your mask and your points down by a given factor, fill the poly on the smaller mask, and then scale the mask back up to the original size.
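A minimal sketch of that scaling workaround, reusing the variable names from the question (the factor 0.5 is an arbitrary choice that brings every coordinate under 32768; nearest-neighbour interpolation keeps the upscaled mask strictly black and white):
scale = 0.5  # enough to bring all coordinates under the 16-bit limit
small_h, small_w = int(image.shape[0] * scale), int(image.shape[1] * scale)
small_mask = np.zeros((small_h, small_w, channel_count), dtype=np.uint8)
small_corners = np.round(roi_corners * scale).astype(np.int32)
cv2.fillPoly(small_mask, small_corners, ignore_mask_color)
# Scale back up; the polygon edge loses roughly 1/scale px of precision
mask = cv2.resize(small_mask, (image.shape[1], image.shape[0]),
                  interpolation=cv2.INTER_NEAREST)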
As @DanMašek pointed out in the comments, this is in fact a bug, not yet fixed.
In the bug discussion there is another workaround mentioned: drawing using multiple ROIs with sizes less than 32768, correcting the coordinates for each ROI using the offset parameter of fillPoly.

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here, because I have not yet come up with a good solution for finding these lines in my image. I can see them very easily when I use the Sobel operator. But what would be a good way of detecting them and computing the positions of the corners?
EDIT: I added some sample images.
Basic image:
Grayscale image:
Sobel filter (horizontal only):
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me that you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
1. Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below.)
2. [TBD] Do some preprocessing to reduce noise.
3. Find edge points using Sobel or (if you must) Canny.
4. Run Hough lines detection, but with a few caveats and parameterizations as described below (a minimal sketch follows this list).
5. Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and a slightly larger one at the cloth/wood edge at the top of the rail.)
6. (Optional) Use the side pockets to help determine the orientation of the quadrilateral.
7. Use a perspective transform to map the distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted rectangle to a proper rectangle. (We'll ignore some optical effects for now.)
8. Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
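To make step 4 concrete, here is a minimal sketch with OpenCV in Python (every threshold below is a starting guess to tune, and "table.jpg" is a hypothetical filename):
import cv2
import numpy as np

gray = cv2.imread("table.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # step 2: mild noise reduction
edges = cv2.Canny(blurred, 50, 150)          # step 3: edge points
# Step 4: probabilistic Hough; long segments should mostly be rail edges
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=200, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print((x1, y1), (x2, y2))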
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, but you can do this: find the inside lines for the two long rails, then search vertically between those two lines for the first strong horizontal-ish edge at the far side of the image. That will be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon with few sides) using the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge locations by locally searching for transitions across them and fitting lines. Then intersect the lines to get the corners.
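A sketch of that pipeline with OpenCV in Python (assuming a binary mask from color segmentation already exists; approxPolyDP is OpenCV's Douglas-Peucker implementation, and the 2% epsilon is a guess to tune):
import cv2

# 'mask' is assumed: a binary image with the cloth segmented as white
# (OpenCV 4 signature below; OpenCV 3 returned an extra value first)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blob = max(contours, key=cv2.contourArea)        # largest blob = playing area
epsilon = 0.02 * cv2.arcLength(blob, True)       # tolerance vs. perimeter
corners = cv2.approxPolyDP(blob, epsilon, True)  # ideally 4 corner points
print(corners.reshape(-1, 2))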
The following answer assumes you have already found the positions of the lines in the image. This can be done "easily" by directly looking at the pixels and seeing if they form a "line". It is usually easier to detect this if the image has been deskewed first, i.e. rotated so the rectangle (pool table) looks more like this: [] than like /=/. Then it is just a case of scanning the pixels and, if there are pixels of similar colour alongside each other, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other in both the x and y coordinates, they are marked as a corner. Once a corner is found, I take the average of the two end points to find where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if there is a tolerance of 2 or more. The corner found will be at: 11, 11
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it is done this way is that, in the application where I use it, it is faster to sort each array first into an order where relevant values will be found first; see: Why is processing a sorted array faster than an unsorted array?
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners with all the lines and then either select the "most likely" corner or average over them.
Also note that my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;
                }
            }
        }
    }
    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is a configurable int.
A playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
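Outside MATLAB, the same experiment takes only a few lines with OpenCV in Python; the HSV bounds below are made-up starting values for a green cloth, not calibrated thresholds:
import cv2
import numpy as np

img = cv2.imread("table.jpg")  # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Hue roughly 35-85 covers most greens; tune all bounds for your lighting
mask = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
cv2.imwrite("mask.png", mask)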

PaintCode - move object on the path

I would like to draw a curved line and attach an object to it. Is it possible to create a fraction (from 0.0 to 1.0) that moves my object along the path? When the fraction is 0, the object is at the beginning; when it is 0.5, it is halfway along; and when it is 1.0, it is at the end. Of course I want a curved path, not a straight line :) Is this possible to do in PaintCode?
If you need it only as a progress bar, it is possible in PaintCode. The trick is to use a dashed stroke with a very large Gap and then just change the Dash.
Then just attach a Variable and you are done.
Edit: Regarding the discussion under the original post, this solution uses points as the unit, so it will be distributed equally along the curve, no matter how curved the bezier is.
Because you're going to walk along the curve using linear distance, something Bezier curves are terrible for, you need to build the linear mapping yourself. That's fairly simple, though:
When you draw the curve, also build a look-up table that samples the curve once, at say 100 points (t=0, t=0.01, t=0.02, etc). In pseudocode:
lut = [];
lut[0] = 0;
tlen = curve.length();
for (v = 0; v <= 100; v++) {
    t = v / 100;
    clen = curve.split(0, t).length();
    percent = round(100 * clen / tlen);
    lut[percent] = t;
}
This may leave gaps in your LUT; you can either fix those in a second pass, or just leave them in and do a binary scan of your array to find the nearest percentage that does have a value.
Then, when you need to show your progress as some percentage value, you just look up the corresponding t value: say you need to show 83%, you look up lut[83] and draw your object at the t value that gives you.
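Here is the same idea as runnable Python, approximating arc length by densely sampling one cubic Bezier (the control points are arbitrary; the nearest-key lookup at the end stands in for the binary scan mentioned above):
def cubic_point(p0, p1, p2, p3, t):
    # Evaluate a cubic Bezier at parameter t
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def arc_length_lut(p0, p1, p2, p3, samples=100):
    # Sample the curve, accumulate chord lengths, then map percent -> t
    pts = [cubic_point(p0, p1, p2, p3, v / samples) for v in range(samples + 1)]
    lengths = [0.0]
    for (ax, ay), (bx, by) in zip(pts, pts[1:]):
        lengths.append(lengths[-1] + ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5)
    total = lengths[-1]
    return {round(100 * c / total): v / samples for v, c in enumerate(lengths)}

lut = arc_length_lut((0, 0), (50, 200), (150, -100), (200, 0))
percent = 83
nearest = min(lut, key=lambda k: abs(k - percent))  # tolerate gaps in the LUT
print("draw the object at t =", lut[nearest])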
