Background: I need to create Google Ads campaigns that target a polygon region. I receive this region, which is the area from which people can quickly reach, by some means of transportation (e.g. car), a facility we are running a campaign for. Since Google Ads has dropped support for targeting a polygon area, we can really only target a group of circles.
So I thought about approximating the polygon with circles (which would, in my understanding, be "incircles", or "inscribed" circles, of this polygon).
A quick and imperfect example: an algorithm would cover as much of the polygon (the green area) as possible, possibly with a parameter to control the maximum number of circles used.
I have only found information about approximating a polygon that is already very close to a circle with a single circle; I need multiple ones, though.
Is there any theoretical algorithm (or any implementation of one, or any package) for this, or am I mistaken about this idea for solving my Google Ads geo-targeting challenge?
The only solution I found (which took me 2 days to fully implement) is to create, with turfjs, a bounding box around my polygon, then create a grid of squares (turf.squareGrid), then turn it into a matrix filled with either 0 or information about a square. A recursive function then finds the biggest square in this matrix, then the next one, etc., until all squares are part of equivalent or bigger squares. Then I made circles around these bigger squares, with the radius equal to sqrt(2) (~1.41) * (sideOfABigSquare / 2), i.e. each circle circumscribes its square.
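For comparison, here is a rough Python/shapely sketch of a simpler variant of that idea: one circle circumscribed around every grid cell that the polygon fully covers, without the square-merging step (`cell_size` is a tuning parameter, and all names are mine):

```python
import math
from shapely.geometry import Polygon

def cover_polygon_with_circles(polygon: Polygon, cell_size: float):
    """Lay a square grid over the polygon's bounding box and return one
    circumscribed circle (cx, cy, radius) per fully covered cell."""
    minx, miny, maxx, maxy = polygon.bounds
    radius = cell_size * math.sqrt(2) / 2  # half the cell's diagonal
    circles = []
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            cell = Polygon([(x, y), (x + cell_size, y),
                            (x + cell_size, y + cell_size),
                            (x, y + cell_size)])
            if polygon.contains(cell):  # keep only fully covered cells
                circles.append((x + cell_size / 2, y + cell_size / 2, radius))
            x += cell_size
        y += cell_size
    return circles
```

Merging adjacent cells into bigger squares, as described above, would then reduce the circle count toward whatever limit Google Ads imposes.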
I am working on a program that can essentially determine the electrostatic field of some arbitrarily shaped mesh with some surface charge. To test my program I make use of a cube whose left and right faces are oppositely charged.
I use a finite element method (FEM) that discretizes the object's surface into triangles and assigns 3 integration points to each triangle (see the figure below, bottom-left and bottom-right). To obtain the field I then simply sum over all these points, taking a weight factor into account (because not all triangles have the same size).
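For concreteness, a minimal sketch of that weighted point-charge summation (array names are mine, not from the actual program):

```python
import numpy as np

def field_at(r, points, weights, charges):
    """Coulomb field at position r (shape (3,)) from all integration
    points: positions (N, 3), quadrature weights (N,) ~ triangle areas,
    and surface charge densities (N,)."""
    d = r - points                    # vectors from each point to r
    dist = np.linalg.norm(d, axis=1)  # distances to each point
    # E ~ sum_i w_i * sigma_i * d_i / |d_i|^3 (Gaussian units); this
    # diverges as r approaches any single point, which is exactly the
    # near-surface artefact described below
    return np.sum((weights * charges / dist**3)[:, None] * d, axis=0)
```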
In principle this all works fine, until I get too close to a triangle. Since three individual points are not the same as a triangular surface, the program breaks down and gives these weird dots (black spots precisely between two integration points).
Below you see a figure showing the simulation of the field (top left) and the discretized surface mesh (bottom left). The picture in the middle depicts what you see when you zoom in on the surface of the cube. The right-most picture shows qualitatively how the integration points are distributed on a triangle.
Because the electric field of one integration point always points away from that point, two neighbouring points will cancel each other out, since their vectors aim in exactly opposite directions. What I need instead, of course, is for both vectors to point away from the surface.
I have tried many solutions, mostly around the following points:
Patching the regions near an integration point with a theoretically correct uniform field pointing away from the surface.
Reorienting the vectors only nearby the integration point to manually put them in the right direction.
Applying a sigmoid or another decay function to make the above look smoother (see the sketch after this list).
However, none of the methods above lets me properly connect the nearby and faraway regions.
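For what it's worth, here is a minimal sketch of the sigmoid blend mentioned in the list (my own names; `d0` and `k` are tuning parameters, not values from the post):

```python
import numpy as np

def blended_field(e_far, e_patch, dist_to_surface, d0=0.1, k=50.0):
    """Blend the summed far field with a theoretically correct uniform
    near-surface patch using a logistic weight on the distance to the
    surface: weight ~ 0 near the surface, ~ 1 far away."""
    s = 1.0 / (1.0 + np.exp(-k * (dist_to_surface - d0)))
    return s * e_far + (1.0 - s) * e_patch
```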
I guess what might work is some method to extrapolate the correct value from the surroundings. However, because of the large number of computations, I moved the simulation to my GPU, which means I have to be careful about allowing two pixels to write to each other.
Either way, my question here is as follows:
What would be a good way to smooth out my results? That is, I need a more accurate description of my model when I get closer to a triangle.
As a final note I want to add that it is not my goal to simply obtain a smooth image. Later in the program I need this data to determine the response of a conducting material, which is where these black dots internally become a real pain...
Thank you for your help !!!
I want to scan a sudoku and solve it with an algorithm. I do not only want to detect whether it is a sudoku puzzle and extract the numbers when I'm perfectly above it (so 0 degrees of tilt towards me, phone held horizontally above the sudoku, and 0 degrees of rotation around the puzzle). I also want to do this when the phone is tilted towards the user (up to 45 degrees) and rotated around the object (up to 45 degrees in both directions). Is it possible to detect this, given the many combinations I need to take care of? I don't want a concrete solution, I'm just curious whether this is implementable. Also, if I was inaccurate with something, let me know!
Sure you can do it.
Generally a sudoku puzzle has a thick outer border. You can detect it pretty easily (for example on a binarized version of your input snapshot, but there are many options). Even if the outer border is not thicker than the inner ones, you can determine which of the detected borders is the outermost and closest to a square shape.
Once you have the puzzle's border, you need to remove its skew and rotation (a common preprocessing step for CV apps detecting various labels, plates and other planar objects); a convenient way is to use a perspective transformation.
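For illustration, a minimal OpenCV sketch of that pipeline (corner ordering and robustness checks are omitted, and the threshold parameters and output size are guesses):

```python
import cv2
import numpy as np

def rectify_sudoku(image):
    """Find the largest quadrilateral contour (assumed to be the puzzle's
    outer border) and warp it into an upright square view."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    biggest = max(contours, key=cv2.contourArea)
    quad = cv2.approxPolyDP(biggest, 0.02 * cv2.arcLength(biggest, True), True)
    if len(quad) != 4:
        return None  # outer border was not found as a quadrilateral
    src = np.float32(quad.reshape(4, 2))  # corners still need consistent ordering
    side = 450  # arbitrary output resolution
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (side, side))
```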
Preferring one detection method over another should be based on your input images and the possible variance in them. Detecting a black-and-white, square-shaped sudoku that fills 95% of the input snapshot is not the same as detecting multiple sudokus (small relative to the snapshot) pinned to a black-and-white brick wall.
I'm looking for a way to detect if a set of points/coordinates have any intersecting lines.
A little setup: I'm drawing a polygon using UIBezierPath on an overlay to a map. This all works. I'm able to reduce the map points using a point-reducing algorithm, and I'm left with a simple-looking polygon that renders on my map just fine. FWIW, I'm using the Google Maps SDK.
My problem is that it is possible for the user to draw a polygon with self intersecting lines (which is a problem for what I am doing). I need to be able to do one of 3 things.
Remove the intersecting points in the array. (Clip off the bow tie pieces)
Detect if my points have this bow tie (I'll just tell them to redraw a new polygon)
If possible (which I don't think it is), prevent the path from drawing the bow tie in the first place.
I mostly see the bow tie when the polygon self closes and the end point is slightly underlapping the start point. So when the polygon closes and renders into map coordinates on the map, I get a tiny bow tie that messes with an internal API.
Is there anything out there that will work using map coordinates? I've seen some fixes for regular CGPoints, but nothing that takes map coordinates. I would prefer to do this check on my polygon after it has gone through my reducer, as that leaves far fewer points to check. Performance is an issue, and I would prefer not to iterate over hundreds of points coming directly off the UIBezierPath. Any help would be appreciated.
I don't know about the Google Maps SDK or the UIBezierPath. I assume that you are given a polygon in the 2D plane and you would like to automatically detect where the polygon intersects itself (if it does).
Perhaps the easiest way to do this is to check all pairs of edges for intersection. You can do this in O(n²) time, where n is the number of edges, since there are n(n−1)/2 pairs of edges. For a given pair of edges, the details are here:
How to check if two given line segments intersect?
Nothing extraordinary but the details do require attention.
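For reference, a minimal Python sketch of the pairwise check, using the standard orientation test (degenerate collinear overlaps are not handled; for tiny map regions, lon/lat pairs can be treated as planar coordinates):

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p):
    1 = counter-clockwise, -1 = clockwise, 0 = collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly cross each other."""
    return (orientation(p3, p4, p1) != orientation(p3, p4, p2) and
            orientation(p1, p2, p3) != orientation(p1, p2, p4))

def polygon_self_intersects(points):
    """Check every pair of non-adjacent edges of a closed polygon."""
    n = len(points)
    edges = [(points[i], points[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 2, n):       # i+1 shares a vertex with i
            if i == 0 and j == n - 1:
                continue                # first and last edges also touch
            if segments_intersect(*edges[i], *edges[j]):
                return True
    return False
```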
A more sophisticated algorithm is the plane sweep algorithm:
Line segment intersection, starting at slide 25
Line Segment Intersection Using a Sweep Line Algorithm
I'm looking for an efficient way of selecting a relatively large portion of points (in a 2D Euclidean plot) that are the furthest away from the center. This resembles the convex hull, but would include (many) more points. Further criteria:
The number of points in the selection/set ("K") must be within a specified range. Most likely it won't be very narrow, but it must work for different ranges (e.g. 0.01*N < K < 0.05*N as well as 0.1*N < K < 0.2*N).
The algorithm must be able to balance distance from the center and "local density". If there are dense areas near the upper part of the graph range, but sparse areas near the lower part, then the algorithm must make sure to select some points from the lower part even if they are closer to the center than the points in the upper region. (See example below)
Bonus: rather than simple distance from center, taking into account distance to a specific point (or both a point and the center) would be perfect.
My attempts so far have focused on "pigeon holing" (divide the graph into CxR boxes, assign points to boxes based on coordinates) and selecting "outer" boxes until we have sufficient points in the set. However, I haven't been successful at balancing the selection (dense regions get over-selected because of the fixed box size), nor at using a selected point as the reference instead of (only) the center.
I've (poorly) drawn an example: the red dots are the points, the green shape is an example of what I want (outside the green = selected). For sparse regions, the bounding shape comes closer to the center to find suitable points (but doesn't necessarily find any, if they're too close to the center). The yellow box is an example of what my pigeon-holing-based algorithm does. Even when trying to adjust for sparser regions, it doesn't manage well.
Any and all ideas are welcome!
I don't think there are any standard algorithms that will give you what you want; you're going to have to get creative. Assuming your points are embedded in 2D Euclidean space, here are some ideas:
Iteratively compute several convex hulls. For example, compute the convex hull, keep the points that are part of the convex hull, then compute another convex hull ignoring the points from the original convex hull. Continue to do this until you have a sufficient number of points, essentially plucking off points on the perimeter for each iteration. The only problem with this approach is that it will not work well for concavities in your data set (e.g., the one on the bottom of your sample you posted).
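For illustration, here is a minimal "onion peeling" sketch with scipy (which points of the last layer to drop when it overshoots K is left arbitrary):

```python
import numpy as np
from scipy.spatial import ConvexHull

def peel_hulls(points, k):
    """Repeatedly compute the convex hull of an (N, 2) point array and
    strip off its vertices until at least k points have been selected;
    returns the selected indices into the original array."""
    remaining = np.arange(len(points))
    selected = []
    while len(selected) < k and len(remaining) >= 3:
        hull = ConvexHull(points[remaining])
        selected.extend(remaining[hull.vertices].tolist())
        remaining = np.delete(remaining, hull.vertices)
    return np.array(selected[:k])
```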
Fit a Gaussian to your data and keep everything > N standard deviations away from the mean (where N is a value that you'd have to choose). This should work pretty well if your data is Gaussian. If it isn't, you could always model it with several Gaussians (instead of one), and keep points with a joint probability less than some threshold. Using multiple Gaussians will probably handle concavities decently. References:
http://en.wikipedia.org/wiki/Gaussian_function
How to fit a gaussian to data in matlab/octave?
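Here is a minimal single-Gaussian sketch of that idea using the Mahalanobis distance (a multi-Gaussian variant would fit a mixture model and threshold the joint probability instead):

```python
import numpy as np

def outer_by_mahalanobis(points, k):
    """Fit one Gaussian (mean + covariance) to the (N, 2) point array and
    return the indices of the k points farthest from the mean in
    Mahalanobis distance."""
    diff = points - points.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(points.T))
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances
    return np.argsort(d2)[-k:]
```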
Use Kernel Density Estimation - If you create a kernel density surface, you could slice the surface at some height (e.g., turning it into a plateau), giving you a perimeter shape (the shape of the plateau) around the points. The trick would be to slice it at the right location though, because you could end up getting no points outside of the shape, but with the right selection you could easily get the green shape you drew. This approach will work well and give you the green shape in your example if you choose the slice point wisely (which may be difficult to do). The big drawback of this approach is that it is very computationally expensive. More information:
http://en.wikipedia.org/wiki/Multivariate_kernel_density_estimation
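A cheaper point-wise variant of that idea is easy to sketch: instead of extracting the plateau's contour, rank the points by their estimated density and take the sparsest ones, which naturally trades off distance from the center against local density:

```python
import numpy as np
from scipy.stats import gaussian_kde

def outer_by_density(points, k):
    """Estimate a kernel density at every point of the (N, 2) array and
    return the indices of the k lowest-density ("outermost") points."""
    kde = gaussian_kde(points.T)   # gaussian_kde expects shape (dims, N)
    density = kde(points.T)        # density evaluated at each point
    return np.argsort(density)[:k]
```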
Use alpha shapes to get a general shape that wraps tightly around the outside perimeter of the point set. Then erode the shape a little to force some points outside of the shape. I don't have a lot of experience with alpha shapes, but this approach will also be quite computationally expensive. More info:
http://doc.cgal.org/latest/Alpha_shapes_2/index.html
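For reference, a rough sketch of building the alpha complex from a Delaunay triangulation (the erosion step, e.g. shrinking the union of kept triangles by a small negative buffer, is omitted):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_complex(points, alpha):
    """Return the triangles (index triples) of the Delaunay triangulation
    of an (N, 2) point array whose circumradius is below 1/alpha; their
    union approximates the alpha shape of the point set."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area > 0 and (la * lb * lc) / (4 * area) < 1.0 / alpha:
            kept.append(simplex)  # circumradius R = abc / (4 * area)
    return kept
```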
This question can be answered in any programming language, because I would like help with the algorithm, but I prefer Delphi. I have the task of detecting and counting multiple shapes (between 1 and N, mostly circles or ellipses) in arbitrary pictures, calculating their middles, and returning those as picture coordinates. The middle of each shape can have a filling (but it doesn't matter). The shapes are at least 1+ pixel away from each other. None of the shapes will blend into another or into a corner of the picture.
The background of the picture always has the same background color, which actually doesn't matter, because the borders/frames of the shapes are always a different color from the background. This makes it easy to detect the shapes. I was thinking about going pixel by pixel, collecting the coordinates, and then drawing an invisible rectangle/square around every shape to calculate the middle. Then I also heard about scanline, but I don't think it would be faster in this case. So my question is, how can I calculate:
How many shapes are in the picture.
The (more or less) exact middle of each of them.
A few pictures to visualize the task:
This is a picture with random shapes (mostly closed circles).
As you can see they are apart from each other just fine.
Then I could easily draw/calculate an imaginary rectangle/square around every shape and calculate the middle of it like that:
After I have the rectangles/squares, I can easily calculate the middle.
How do I start?
PS.: I've drawn some circles in mspaint. I have to add that all shapes are CLOSED, which makes it possible to flood fill EVERY shape in the picture with no problems!
Thank you for your help.
Calculate MSER (Maximally stable extremal regions) for the image. I can't explain that algorithm here. You can refer to the Maximally stable extremal regions article for more information about the algorithm.
That will give you the centroids too.
This algorithm is implemented as a built-in function in OpenCV and in Matlab 2012b.
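A minimal OpenCV sketch (the file name is hypothetical, and MSER typically returns nested duplicate regions that you would still have to merge):

```python
import cv2
import numpy as np

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(img)  # each region is an array of pixels
centroids = [tuple(np.mean(r, axis=0).astype(int)) for r in regions]
print(len(regions), "regions, centroids:", centroids)
```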
Another method I can think of, and possibly simpler than the previous one, is to apply a connected-components algorithm and count the number of objects. More information on this can be found in the book on Digital Image Processing by Gonzalez and Woods.
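A minimal OpenCV sketch of the connected-components route (file name hypothetical; the shape borders are assumed darker than the background, hence the inverted threshold):

```python
import cv2

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
print(n - 1, "shapes found")      # label 0 is the background
for cx, cy in centroids[1:]:      # one centroid per shape
    print("middle at", cx, cy)
```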