GIMP: how to select from a merged path

I have a path merged from 8 subpaths (the shape requires symmetry, so I created 1/8 of it and then merged the copies together).
I want to select the part of the picture inside the path, but that turns out to be impossible. Is there any way to:
reorder the points inside the path,
or
manually set a point's position (x and y coordinates),
or
un-close any of the subpaths (I'd hate to have to draw the paths all over again),
or
select inside the path in some smarter way?
Thank you all.

The best you can do is splice the strokes so that you have a truly closed stroke from which you get a selection.
The ofn-path-edits plugin has a Join strokes function to connect together strokes whose end points are close enough.
But you'll have to redo part of your work: use the Join strokes function to create the closed shapes, then replicate+shift these closed shapes. And yes, this means that between the shapes, there are overlapping strokes, one for the shape on the left/top and one for the shape on the right/bottom.
At the same place there is also a path-mirror script that can create the symmetry of a stroke, and connect the ends that are on the symmetry axis.

How to get the edge index in ABAQUS

In a script I create a cube and then create a face partition with a sketch on the top face of the cube. I want to get the indices of all the edges I created in the sketch. Does anyone know how I should do this?
You can make use of the getByBoundingBox(...) command. You can control which entities are selected through the optional arguments xMin, yMin, zMin, xMax, yMax, zMax. These arguments define an imaginary box, and the entities inside it are selected.
edgs = mdb.models['Model-1'].parts['Part-1'].edges.getByBoundingBox(xMin=0.0,
    yMin=0.0, zMin=0.0, xMax=1.0, yMax=1.0, zMax=1.0)
The command above selects the edges within the box bounded by the points (0,0,0) and (1,1,1).
Further, to get the index of an edge, you can make use of the index attribute.
for edg in edgs:
    print(edg.index)
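If you want the indices collected for later use, here is a minimal sketch (the model and part names are assumed to be the same as above):

part = mdb.models['Model-1'].parts['Part-1']
# Collect the indices of all edges inside the box bounded by (0,0,0) and (1,1,1).
edgs = part.edges.getByBoundingBox(xMin=0.0, yMin=0.0, zMin=0.0,
                                   xMax=1.0, yMax=1.0, zMax=1.0)
edge_indices = [edg.index for edg in edgs]
print(edge_indices)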

CGPath copy lineJoin and miterLimit has no apparent effect

I am offsetting a CGPath using copy(strokingWithWidth:lineCap:lineJoin:miterLimit:transform:). The problem is that the offset path introduces all kinds of jagged lines that seem to be the result of a miter join. Changing the miterLimit to 0 has no effect, and using a bevel line join also makes no difference.
In this image there is the original path (before applying strokingWithWidth), an offset path using a miter join, and an offset path using a bevel join. Why doesn't using a bevel join have any effect?
Code using miter (Note that using CGLineJoin.round produces identical results):
let pathOffset = path.copy(strokingWithWidth: 4.0,
                           lineCap: CGLineCap.butt,
                           lineJoin: CGLineJoin.miter,
                           miterLimit: 20.0)
context.saveGState()
context.setStrokeColor(UIColor.red.cgColor)
context.addPath(pathOffset)
context.strokePath()
context.restoreGState()
Code using bevel:
let pathOffset = path.copy(strokingWithWidth: 4.0,
                           lineCap: CGLineCap.butt,
                           lineJoin: CGLineJoin.bevel,
                           miterLimit: 0.0)
context.saveGState()
context.setStrokeColor(UIColor.red.cgColor)
context.addPath(pathOffset)
context.strokePath()
context.restoreGState()
Here is a path consisting of two line segments:
Here's what it looks like if I stroke it with bevel joins at a line width of 30:
If I make a stroked copy of the path with the same parameters, the stroked copy looks like this:
Notice that triangle in there? That appears because Core Graphics creates the stroked copy in a simple way: it traces along each segment of the original path, creating a copied segment that is offset by 15 points. It joins each of these copied segments with straight lines (because I specified bevel joins). In slow motion, the copy operation looks like this:
So on the inside of the joint, we get a triangle, and on the outside, we get the flat bevel.
When Core Graphics strokes the original path, that triangle is harmless, because Core Graphics uses the non-zero winding rule to fill the stroke. But when you stroke the stroked copy, the triangle becomes visible.
Now, if I scale down the line width used when I make the stroked copy, the triangle becomes smaller. And if I then increase the line width used to draw the stroked copy, and draw the stroked copy with mitered joins, the triangle can actually end up looking like it's filled in:
Now, suppose I replace that single joint in the original path with two joints connected by a very short line, creating a (very small) flat spot on the bottom:
When I make a stroked copy of this path, the copy has two internal triangles, and if I stroke the stroked copy, it looks like this:
So that's where those weird star shapes come from when you make a stroked copy of your paths: very short segments create overlapping triangles.
Note that I made my copies with bevel joins. Using miter joins when making the copy also creates the hidden triangles, because the choice of join only affects the outside of the joint, not the inside of the joint.
However, the choice of join does matter when stroking the stroked copy, because the use of miter joins makes the stars larger. See this document for a good illustration of how much the join style can affect the appearance of an acute angle.
So the miter joins make the triangles' points stick out quite far, which makes the overlapping triangles look like a star. Here's the result if I stroke the stroked copy using bevel joins instead:
The star is nigh-invisible here because the triangles are drawn with blunted corners.
If the inner triangles are unacceptable to you, you will have to write your own function (or find one on the Internet) to make a stroked copy of the path without the triangles, or to eliminate the triangles from the copy.
If your path consists entirely of flat segments, the easiest solution is probably to use an existing polygon-clipping library. The “union” operation, applied to the stroked copy, should eliminate the inner triangles. See this answer for example. Note that these libraries tend to be written in C++, so you'll probably have to write some Objective-C++ code since Swift cannot call C++ code directly.
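Just to illustrate the union idea itself (independent of iOS), here is a rough sketch using the Python pyclipper binding of the Clipper library; the rectangles are made-up stand-ins, and for the real problem you would feed in the flattened subpaths of the stroked copy instead:

import pyclipper

# Two overlapping rectangles become one outline after a union.
# Clipper works on integer coordinates, so scale real points up first.
rect_a = [(0, 0), (100, 0), (100, 60), (0, 60)]
rect_b = [(60, 20), (160, 20), (160, 80), (60, 80)]

pc = pyclipper.Pyclipper()
pc.AddPaths([rect_a, rect_b], pyclipper.PT_SUBJECT, True)  # closed subject paths

# With the non-zero fill rule, regions covered more than once (like the
# inner triangles of the stroked copy) count as "inside" and are merged away.
merged = pc.Execute(pyclipper.CT_UNION,
                    pyclipper.PFT_NONZERO, pyclipper.PFT_NONZERO)
print(merged)  # a single polygon covering the union of both rectangles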
In case you're wondering how I generated the graphics for this answer, I did it using this Swift playground.

How to remove non-periodic lines from binary image

Example Image
I want to remove the lines shown in red because they are out of order. The lines shown in black repeat at (approximately) the same period. The period is not known beforehand. Is there any way of deleting the non-periodic lines (shown in red) automatically?
NOTE: The image is binary (black & white); the lines are shown in red only for illustration.
Of course there is a way. There is almost always some way to do something.
Unfortunately you have not stated any particular problem. As it stands, the question is too broad to be answered here.
To help you get started (I highly recommend you start with pen, paper and your brain):
Detect the lines -> google or think; there are many standard ways to detect lines in an image. If your binary image has no noise, it's trivial.
Find the equidistant sets -> think.
Delete the rest -> think (you know what is good, so everything else has to go away).
I assume your lines are (almost) vertical.
The following should work (a rough sketch in code follows below):
turn the image into a column-sum histogram
try a Fourier transform on that signal (potentially padding the image appropriately)
pick the maximum/peak of the Fourier spectrum as your base period
If you need the lines themselves rather than just their positions, generate a mask with lines at the appropriate intervals (as determined by the analysis above) and apply it to the image.
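Here is that sketch in Python with NumPy/OpenCV. The file name, the dark-lines-on-white assumption and the dilation tolerance are placeholders you would adapt:

import cv2
import numpy as np

# Binary input image; dark lines on a white background are assumed here.
img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)
binary = (img < 128).astype(np.uint8)        # 1 where a line pixel is

# 1. Column-sum histogram: one value per column, with peaks where lines are.
col_sum = binary.sum(axis=0)
signal = col_sum - col_sum.mean()            # remove the DC component

# 2. Fourier transform of the column histogram.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0)

# 3. The strongest non-zero frequency gives the base period in pixels.
peak = np.argmax(spectrum[1:]) + 1           # skip the zero frequency
period = 1.0 / freqs[peak]

# 4. Build a mask with lines at that interval, anchored at the strongest
#    column, and keep only the pixels under the mask.
phase = int(np.argmax(col_sum) % period)
mask = np.zeros(binary.shape, np.uint8)
for x in np.arange(phase, binary.shape[1], period):
    mask[:, int(round(x))] = 1
mask = cv2.dilate(mask, np.ones((1, 5), np.uint8))   # tolerance around each line
periodic_only = binary * mask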

How to find sizes and shapes of Microsoft Powerpoint objects?

I have a slide with some hand-drawn circles on it. I'd like to get a list of the coordinates and radii (sizes) of them. Attached is an image and link. Anyone have an idea how?
I started looking into computer vision techniques, but it seems like there should be a much more direct way.
If you are familiar with OpenCV, the HoughCircles() method will do the job:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
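For example, here is a minimal HoughCircles sketch in Python; the file name and the parameter values are guesses you would need to tune for your slide image:

import cv2
import numpy as np

# Exported slide image in grayscale; 'slide.png' is a placeholder.
img = cv2.imread('slide.png', cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)                 # reduce noise before the Hough transform

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30,
                           minRadius=5, maxRadius=200)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print('center=({}, {})  radius={}'.format(x, y, r))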
Are you familiar with Matlab? imfindcircles() will do it:
http://www.mathworks.com/help/images/ref/imfindcircles.html
If this is a one-time job, you can post it as a job for someone else to do for a small fee. Example: https://www.mturk.com/mturk/welcome
If you don't know any programming language and this is a one-time job, you can do it manually. Select each circle in Photoshop and count the number of selected pixels: if you select the filled circle, that count is the area, so radius = sqrt(area/pi); if you select only the outline, the count approximates the circumference, so radius = circumference/(2*pi). The center of mass of the selected pixels is the center of the circle.
It is a bit tricky to separate overlapping circles, but you can do it by hand.
I found a suitable method using vector graphics.
Select all the circles in PowerPoint, right-click and choose 'Save as Picture'. Use the .emf (Windows metafile) format (this option was only available on my Windows machine, not the Mac).
Open the .emf file in Inkscape and save it in 'svg' format, which is ASCII and human-readable.
Extract the information from the path commands.
E.g.: Each circle is represented as a path object, with a line:
d="m 36.527169,36.434607 c 0,-9.696733 9.075703,-17.551993 20.274845,-17.551993 11.194626,0 20.270329,7.85526 20.270329,17.551993 0,9.69264 -9.075703,17.552246 -20.270329,17.552246 -11.199142,0 -20.274845,-7.859606 -20.274845,-17.552246"
Here, the (x,y) pair after the 'm' is the starting point of the outline (given in relative coordinates), which for these shapes is the leftmost point of the ellipse rather than its center. The 12 (x,y) pairs following 'c' describe a 4-segment polybezier curve; pairs 3, 6, 9 and 12 are the relative endpoints of the segments, and accumulating them from the start point gives the four compass points of the ellipse, from which the center and the axes follow. In the object above the half-axes are ~20.27 and ~17.55, so this is not a circle but an ellipse.
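A rough parsing sketch in Python; it assumes every shape was exported as a 4-segment 'm ... c ...' path like the one above, and the file name is a placeholder:

import re
import xml.etree.ElementTree as ET

SVG_NS = '{http://www.w3.org/2000/svg}'

tree = ET.parse('slide.svg')                      # the file saved from Inkscape
for path in tree.iter(SVG_NS + 'path'):
    d = path.get('d')
    nums = [float(v) for v in re.findall(r'-?\d+\.?\d*(?:e-?\d+)?', d)]
    sx, sy = nums[0], nums[1]                     # start point after 'm'
    rest = nums[2:]                               # 12 relative pairs after 'c'

    # Accumulate the end point of each quarter-ellipse segment.
    x, y, compass = sx, sy, [(sx, sy)]
    for i in range(0, len(rest), 6):              # 3 pairs per segment; keep the last
        x += rest[i + 4]
        y += rest[i + 5]
        compass.append((x, y))

    xs = [p[0] for p in compass]
    ys = [p[1] for p in compass]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    rx, ry = (max(xs) - min(xs)) / 2, (max(ys) - min(ys)) / 2
    print('center=({:.2f}, {:.2f})  half-axes=({:.2f}, {:.2f})'.format(cx, cy, rx, ry))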

OpenCV: Generating points from image after thinning

I've run into an issue concerning generating floating-point coordinates from an image.
The original problem is as follows:
The input image is handwritten text. From this I want to generate a set of points (just x,y coordinates) that make up the individual characters.
At first I used findContours to generate the points. Since findContours finds the edges of the characters, the image first needs to be run through a thinning algorithm, because I'm not interested in the shape of the characters, only the lines, or in this case, the points.
Input:
thinning:
So I run my input through the thinning algorithm and all is fine; the output looks good. Running findContours on it, however, does not work out so well: it skips a lot of stuff and I end up with something unusable.
The second idea was to generate bounding boxes (with findContours), use these bounding boxes to grab the characters from the thinned image, collect all non-white pixel indices as "points", and offset them by the bounding box position. This generates even worse output and seems like a bad method.
Horrible code for this:
Mat temp = new Mat(edges, bb);                      // ROI of the thinned image for one bounding box
byte roi_buff[] = new byte[(int) (temp.total() * temp.channels())];
temp.get(0, 0, roi_buff);                           // copy the ROI pixels into a flat buffer
int COLS = temp.cols();
List<Point> preArrayList = new ArrayList<Point>();
for (int i = 0; i < roi_buff.length; i++)
{
    if (roi_buff[i] != 0)                           // non-background pixel
    {
        Point tempP = bb.tl();                      // top-left corner of the bounding box
        tempP.x += i % COLS;                        // column offset within the ROI
        tempP.y += i / COLS;                        // row offset within the ROI
        preArrayList.add(tempP);
    }
}
Is there any alternatives or am I overlooking something?
UPDATE:
I overlooked the fact that I need the points (pixels) to be ordered. In the method above I simply use a scanline approach to grab all the pixels. If you look at the 'o', for example, it would first grab the point on the left-hand side and then the one on the right-hand side. I need them ordered by neighbouring pixels, since I want to draw paths through the points later on (outside of OpenCV).
Is this possible?
You should look into implementing your own connected-components labelling. The concept is very simple: you scan the first line and assign a unique label to each horizontally connected strip of pixels. You basically check, for every pixel, whether it is connected to its left neighbour, and assign it either that neighbour's label or a new label. In the second row you do the same, but you also check against the pixels above. Sometimes you need a label merge: two strips that were not connected in the previous row are joined in the current row. The way to deal with this is either to keep a list of label equivalences or to use pointers to labels (so you can easily do a complete label change for an object).
This is basically what findContours does, but if you implement it yourself you have the freedom to go for 8-connectedness and even to bridge a one- or two-pixel gap. That way you get "almost-connected components labelling", which it looks like you need for the 'w' in your example picture.
Once you have the image labelled this way, you can push all the pixels of a single label into a vector and order them something like this: find the top-left pixel, push it to a new vector and erase it from the original vector. Then find the pixel in the original vector closest to it, push it to the new vector and erase it from the original. Continue until all pixels have been transferred.
It will not be very fast this way, but it should be a start.
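A rough sketch of both steps in Python with OpenCV/NumPy (the question uses the Java bindings, but the idea is the same); the file name is a placeholder and the ordering is the simple, slow greedy version described above:

import cv2
import numpy as np

# Thinned image: non-zero pixels are the one-pixel-wide strokes.
thinned = cv2.imread('thinned.png', cv2.IMREAD_GRAYSCALE)
binary = (thinned > 0).astype(np.uint8)

# Label 8-connected components (roughly one label per character).
num_labels, labels = cv2.connectedComponents(binary, connectivity=8)

def order_pixels(points):
    # Greedy ordering: start at the top-left pixel, then repeatedly jump
    # to the nearest remaining pixel.  O(n^2), but a start.
    remaining = points.tolist()
    remaining.sort(key=lambda p: (p[1], p[0]))        # top-left first
    ordered = [remaining.pop(0)]
    while remaining:
        cx, cy = ordered[-1]
        dists = [(p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in remaining]
        ordered.append(remaining.pop(int(np.argmin(dists))))
    return ordered

for label in range(1, num_labels):                    # label 0 is the background
    ys, xs = np.nonzero(labels == label)
    points = np.column_stack([xs, ys])                # (x, y) pairs
    path = order_pixels(points)
    print('component', label, 'has', len(path), 'ordered points')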
