Bounding box only for the visible part of the layer - openlayers-3

I was wondering if there is a way to get the bounding box of not the complete layer, but only the part of the layer that is visible at the current zoom level of the map.
So I need to get the screen coordinates of the bounding box of the part of the layer drawn on the screen. I could not find a way to achieve this.
EDIT:
Unfortunately this does not solve my problem. It is exactly the point I had already reached, and in some cases it does not work. Since Stack Overflow does not allow me to upload images because of my reputation, I will try to describe it:
Imagine a path that crosses the screen almost parallel to the y axis, but outside the screen extends at least a full screen width along the x axis. In this case the proposed solution returns the min and max screen coordinates along the x axis, where it should return only the short interval in which the path crosses the screen. In other words, I need the bounding box of the visible part of the layer.
EDIT 2:
Thank you all for your answers. I tried to use the "getFeaturesInExtent" function, but I get an error saying: "Uncaught TypeError: undefined is not a function". I am using the latest OpenLayers release, version 3.4.0. I suppose I am getting this error because this function is not implemented in that version.
The way I am using is the following:
var mapExtent = map.getView().calculateExtent(map.getSize());
var features = result.getSource().getFeaturesInExtent(mapExtent);
What kind of solution do you suggest for me? (I tried the master version by downloading the ZIP from https://github.com/openlayers/ol3, but the map did not work in that case.)
Thanks again!

I think all you need are the extent of the view, the extent of the layer, and the ol.extent.getIntersection() function.
You can get the extent of the current view with, for example:
map.getView().calculateExtent(map.getSize());
The global extent of your layer with
`layer.getExtent()`.
And the intersecting extent with:
ol.extent.getIntersection(extent1, extent2, opt_extent)
This should return the intersection of the current view's extent and your layer's extent. Be aware that not all of the mentioned functions are stable.
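Putting the pieces together, a minimal sketch might look like this (falling back to the vector source's extent is an assumption about your setup, since layer.getExtent() is only set if you configured one):

// Extent currently visible in the viewport
var viewExtent = map.getView().calculateExtent(map.getSize());
// Extent of the whole layer; fall back to the vector source's extent
var layerExtent = layer.getExtent() || layer.getSource().getExtent();
// The part of the layer's extent that lies inside the viewport
var visibleExtent = ol.extent.getIntersection(viewExtent, layerExtent);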

I found the solution in the following:
var mapExtent = map.getView().calculateExtent(map.getSize());
var intersectedFeatures = [];
for (var i = 0; i < points.length; i++) {
    if (ol.extent.containsCoordinate(mapExtent, points[i])) {
        intersectedFeatures.push(points[i]);
    }
}
var visibleLayerExtent = ol.extent.boundingExtent(intersectedFeatures);
Thank you all for the help!

Related

Is it possible to create a Q-Q plot when lacking a coordinate system?

I'm looking to create a Q-Q plot within Rascal using the Vis library. I have been told there is no positional system. Is this true? If true, how would I go about plotting this or any scatterplot? Does anyone have an example of this in use?
That's an excellent question. Certainly Rascal's Vis library is "point free" in the sense that its layout mechanism has no absolute coordinate system. However, there are certain Figure kinds which have a relative coordinate system wrt their own "origin". When you combine several of those using horizontal, vertical or overlay boxes (and align them properly), you can create the effect of bar charts, scatterplots and whatever you desire.
In particular the overlay Figure composition is interesting: http://tutor.rascal-mpl.org/Rascal/Libraries/Vis/Figure/Figure.html#/Rascal/Libraries/Vis/Figure/Figures/overlay/overlay.html
Figure point(num x, num y){ return ellipse(shrink(0.05),fillColor("red"),align(x,y));}
coords = [<0.0,0.0>,<0.5,0.5>,<0.8,0.5>,<1.0,0.0>];
ovl = overlay([point(x,y) | <x,y> <- coords]);
render(ovl);
This produces the scatterplot shown in the linked documentation (both the code and the image are taken from there).
Each point is an ellipse which is aligned at the (x, y) position relative to the origin of the enclosing overlay box.
The origin by default of this overlay seems to be the upper-left corner, when no other FProperty's are given to the overlay. It's possible other alignment options for the overlay Figure also change the position of its origin.
With the help of Jurgen Vinju I wrote this code, hope it helps someone: https://gist.github.com/rlmhermans/c9e82a6a623b65f0c6957ab3ff2742cf

Endless scrolling over a 3d map

I have some experience with Metal and quite a bit with Unity and am familiar with setting up meshes, buffers, and the backing data for drawing; but not so much the math/shader side. What I'm struggling with is how to get an endless scrolling world. So if I pan far to the right side I can see the left side and keep going.
The application of this would be a seamless terrain that a player could scroll in any direction forever and have it just wrap.
I don't want to duplicate everything on draw and offset it, that seems horrendously inefficient. I am hoping for a way to either use some magic matrix math or some sort of shader to get things wrapping/drawing where they should when panning the map. I've searched all over for some sort of guide or explanation of how to get this working but haven't come up with anything.
I know a lot of old (DOS) games did this somehow; is it still possible? Is there a reason why the industry seems to have migrated away from this type of scrolling (bounding to edges vs. wrapping)?
I have created a simple example demonstrating what you're looking for (I think).
The basic idea of it is that you draw the map in a repeating grid, using the drawPrimitives(type:vertexStart:vertexCount:instanceCount:) method on MTLRenderCommandEncoder. As the instance count you want to pass in the number of identical maps you want to draw, extending it as far as needed to not see where it ends. In my example I used a simple 5x5 grid.
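As a rough sketch of that draw call (renderEncoder and mapVertexCount are placeholder names, not from the original project):

// Draw the same map mesh 25 times (a 5x5 grid of instances); the
// vertex shader offsets each copy based on its instance ID.
renderEncoder.drawPrimitives(type: .triangle,
                             vertexStart: 0,
                             vertexCount: mapVertexCount,
                             instanceCount: 25)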
To keep the user from seeing the edge of the map, we're going to calculate their position modulo 1 (or whatever the size of your map is):
func didDrag(dx: CGFloat, dy: CGFloat) {
    // Move user position on drag, adding 1 to not get below 0
    x += Float(dx) * draggingSpeed + 1
    z += Float(dy) * draggingSpeed + 1
    // Wrap back into the [0, 1) range of a single map tile
    x.formTruncatingRemainder(dividingBy: 1)
    z.formTruncatingRemainder(dividingBy: 1)
}
This is how it looks:
Just a follow-up on what I have actually implemented. First, I essentially have an array of x, y points with altitude, terrain type and all that jazz. Using some simple modulo arithmetic and additions/subtractions, it is trivial to get the nodes around a point to generate triangles.
On a draw I calculate the first and last visible points and the groups of triangles shown between them. The first/last point take wrapping into account, so it is then pretty trivial to have an endless wrapping world. For each group, a translation offset is passed via a uniform matrix which positions that section where it belongs.
I set it via renderEncoder.setVertexBytes(&uniform, length: ..., index: ...)
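A minimal sketch of that upload (the Uniforms struct, groupTranslation, and the buffer index are placeholder stand-ins, not the poster's actual types):

import simd

// Hypothetical per-group uniform: a translation matrix that places
// this section of the wrapped map where it should appear.
struct Uniforms {
    var translation: simd_float4x4
}

var uniform = Uniforms(translation: groupTranslation)
renderEncoder.setVertexBytes(&uniform,
                             length: MemoryLayout<Uniforms>.stride,
                             index: 1)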

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here, because I could not yet come up with a good solution to find these lines in my image. I can see it very easily when I used the Sobel operator. But what would be a good way of detecting it and computing the position of the corners?
EDIT: I added some sample images:
Basic image
Grayscale image
Sobel filter (horizontal only)
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
1. Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below.)
2. [TBD] Do some preprocessing to reduce noise.
3. Find edge points using Sobel or (if you must) Canny.
4. Run Hough line detection, but with a few caveats and parameterizations as described below. (A rough sketch of steps 1-4 follows this list.)
5. Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and a slightly larger one at the cloth/wood rail edge at top.)
6. (Optional) Use the side pockets to help determine the orientation of the quadrilateral.
7. Use a perspective transform to map the distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted quadrilateral to a proper rectangle. (We'll ignore some optical effects for now.)
8. Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
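As a rough sketch of steps 1-4 in Python with OpenCV (the filename and every threshold here are placeholders to tune, not values from the original answer):

import cv2
import numpy as np

# Steps 1-3: grayscale, mild noise reduction, then edge pixels
img = cv2.imread("table.jpg")                    # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(gray, 50, 150)                 # or a thresholded Sobel

# Step 4: keep only long straight segments so pockets, balls and
# texture contribute little; tune minLineLength to the rail length
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=200, maxLineGap=20)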
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, BUT you can do this: find the inside lines of the two long rails, then search vertically between them for the first strong horizontal-ish edge at the far side of the image. That will be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon of few sides) using the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge location by local search of transitions across it and perform line fitting. Then intersect the lines to get the corners.
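A sketch of that outline-to-quadrilateral step in Python with OpenCV, assuming mask is the binary image produced by the color segmentation (OpenCV 4.x return signature):

import cv2

# The largest connected blob in the segmentation mask is the cloth
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)

# Douglas-Peucker simplification; with a suitable epsilon this
# collapses the outline to (ideally) the four table corners
epsilon = 0.02 * cv2.arcLength(outline, True)
quad = cv2.approxPolyDP(outline, epsilon, True)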
The following answer assumes you have already found the positions of the lines in the image. This can, however, be done "easily" by looking directly at the pixels and checking whether they form a "line". It is usually easier to detect this if the image has been deskewed first, i.e. rotated so the rectangle (pool table) looks more like [] than like /=/. Then it is just a case of scanning the pixels and, where there are pixels of similar colour alongside one another, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other on both the x and y coordinates, they are marked as a corner. Once a corner is found, I take the average of the two end points to find where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if there is a tolerance of 2 or more. The corner found will be at: 11, 11
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it is done like this is that, in the application where I use it, it is faster to sort each array first into an order where relevant values will be found first; see: Why is processing a sorted array faster than an unsorted array?.
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners with all the lines and then either select the "most likely" corner or average over them.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    // The corner position is the average of the two end points
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;
                }
            }
        }
    }
    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is an int of a configurable amount.
A playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
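A crude starting point for that segmentation in MATLAB might look like this (the green-dominance ratios are guesses for a green cloth and would need tuning, which is exactly what the Color Thresholder helps with):

% Hypothetical sketch: mask of pixels where green clearly dominates
rgb = im2double(imread('table.jpg'));   % placeholder filename
r = rgb(:,:,1); g = rgb(:,:,2); b = rgb(:,:,3);
mask = g > 0.3 & g > 1.5 * r & g > 1.5 * b;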

Check if coordinate rectangle contains CLLocationCoordinate2D

I am using a special Map SDK for iOS and I am adding a custom shape to the map. The shape is always a different size and it could be a circle, square, star etc. the point being it is always dynamic whenever the app is run.
After adding this shape to the map, I can access its property called overlayBounds, which is described as: This property contains the smallest rectangle that completely encompasses the overlay.
The overlay is my shape that I'm adding to the map.
Whenever a location update is generated by CLLocationManager, I want to check and see if the most recent coordinate is inside of that overlayBounds property of the shape.
When accessing overlayBounds, it has an ne property and a sw property. Both of these are just CLLocationCoordinate2D's
So, if the overlayBounds is made up of two CLLocationCoordinate2D's and the CLLocationManager is always updating the user's location and giving me the most recent coordinate(CLLocationCoordinate2D), how can I check if that most recent coordinate is within the overlayBounds?
After doing a lot of research I have only found one potential solution to go off of which is this: https://stackoverflow.com/a/30434618/3344977
But that answer assumes that my overlayBounds property has 4 coordinates(CLLocationCoordinate2D's), when I only have 2.
Your description sounds much harder than the actual question. If I am getting this correctly, your question is simply how to check whether a point is inside the rectangle described by overlayBounds.
You have only 2 points because that is enough to define a rectangle: NE and SW are two opposite corners, and the other two corners are (NE.x, SW.y) and (SW.x, NE.y). With this you may use the answer you linked, or you may simply construct an MKMapRect whose origin is NE and whose size is SW-NE. In that case you can simply use MKMapRectMake and then MKMapRectContainsPoint. BUT watch out when computing the size: SW-NE might produce negative results, in which case you need to add degrees to the size, that is 180 to x (latitude) and 360 to y (longitude)...
MKMapRect rect = MKMapRectMake(NE.latitude, NE.longitude, SW.latitude - NE.latitude, SW.longitude - NE.longitude);
if (rect.size.width < .0) rect.size.width += 180.0;
if (rect.size.height < .0) rect.size.height += 360.0;
BOOL pointInside = MKMapRectContainsPoint(rect, pointOnMap);
Something like this should do the trick.
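Alternatively, a direct comparison of the raw coordinates avoids MKMapRect entirely. A minimal sketch, assuming overlayBounds exposes ne and sw as described and coord is the latest coordinate from CLLocationManager (and that the overlay does not cross the antimeridian):

// Inside if the point lies between the SW and NE corners on both axes;
// if the region can cross the antimeridian, the longitude test must wrap.
CLLocationCoordinate2D ne = overlayBounds.ne;
CLLocationCoordinate2D sw = overlayBounds.sw;
BOOL inside = coord.latitude  >= sw.latitude  &&
              coord.latitude  <= ne.latitude  &&
              coord.longitude >= sw.longitude &&
              coord.longitude <= ne.longitude;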
Now if you are trying to check whether the point is inside the shape itself, it really depends on how your shape is defined. If it has some form of analytic representation, you might find a ready-made method that returns the value; if not, your best shot would most likely be drawing the shape to some canvas and checking the color of the canvas at the location you need to check. In any case, the bigger problem here is converting the point and the rect to a Cartesian coordinate system. If that is what you need, just add a comment and I will try to help you with that...

Why does the Sobel function for edge detection fail to find the contour of a white square in a black background?

I tried to apply the following code to the image in Octave:
sq = imread("Square BW.jpg");
figure(1), imshow(sq);
cont1 = edge(sq, "Sobel");
figure(2), imshow(cont1);
The image I get is:
And a similar image appears if I use the Prewitt function. Can anyone explain to me what is happening? The problem is that I can't visualize the process, only the result, so I can't understand why the code isn't working.
The problem seems to be how the threshold is computed in Octave. You can see how Octave does it by looking at its source, either by entering type edge at the Octave prompt or online (I'm not copying the exact code here since it is GPL, although it is quite simple).
To get the border, you will need to set the threshold yourself (hopefully in future versions of Octave's image package this will be fixed, but at the moment it is Matlab-incompatible, since Matlab's documentation on their default is unclear).
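For example, passing an explicit threshold as the third argument (the 0.1 here is only a placeholder to experiment with, not a known-good value):

cont1 = edge(sq, "Sobel", 0.1);   % explicit threshold instead of the automatic one
figure(2), imshow(cont1);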
There's definitely a problem with the way the threshold is computed; however, I wasn't able to find the correct value to use for this picture. After many attempts I found this code that seems to work perfectly:
sq = imread("Square BW.jpg");
maskSobel = fspecial("sobel");
mSobel = uint8(zeros(size(sq)));
for i = 0:3
    mSobel += imfilter(sq, rot90(maskSobel, i));
end
figure(1), imshow(mSobel);
First we create the Sobel matrix/operator and a zero matrix the same size as the image Square BW. Then we rotate the Sobel matrix four times (by 90 degrees each time), in order to filter the image in all directions (left-right, up-down, right-left and down-up), always adding the result to the mSobel matrix that was created.
Here's the final result:
