How to detect every layer in the view in OpenLayers 3

I'm trying to detect all the layers visible in the current view of the map (OpenLayers 3).
I've tried this method, but it only works for a single pixel:
map.forEachLayerAtPixel(evt.pixel, function (layer) {
  // And I edit the layer...
});
Is there any function that allows me to do this for the whole view?
Thanks.

You should be able to loop through the layers and check if the extent intersects with the view extent. That will at least get you the layers that have some pixels within the current view.
var viewExtent = map.getView().calculateExtent(map.getSize());
var layersInView = [];
map.getLayers().forEach(function (layer) {
  // getExtent() is only defined when an extent was explicitly set on the layer,
  // so treat layers without one as potentially covering the whole view.
  var layerExtent = layer.getExtent();
  if (!layerExtent || ol.extent.intersects(layerExtent, viewExtent)) {
    layersInView.push(layer);
  }
});
I'm not sure how you'd tell if a layer is actually visible to the user, but this might get you closer.
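If you also need to respect the layers' own visibility settings, a minimal extension of the loop above (assuming standard OL3 layers, whose minResolution and maxResolution default to 0 and Infinity; variable names here are illustrative) could filter on those as well:
var resolution = map.getView().getResolution();
var visibleLayers = layersInView.filter(function (layer) {
  return layer.getVisible() &&
      resolution >= layer.getMinResolution() &&
      resolution < layer.getMaxResolution();
});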

Related

OpenLayers - lock rotation of box or rectangle geometry while modifying

OpenLayers provides useful functions for drawing boxes and rectangles, and also has ol.geom.Geometry.prototype.rotate(angle, anchor) for rotating a geometry around a certain anchor. Is it possible to lock the rotation of a box/rectangle while modifying it?
Using the OpenLayers example located here to draw a box with a certain rotation to illustrate the point:
I would like the box/rectangle to maintain its rotation while still being able to drag the sides longer and shorter. Is there a simple way to achieve this?
Answering with the solution I came up with.
First of all, add the feature(s) to a ModifyInteraction so you are able to modify by dragging the corners of the feature.
this.modifyInteraction = new Modify({
  deleteCondition: eventsCondition.never,
  features: this.drawInteraction.features,
  insertVertexCondition: eventsCondition.never,
});
this.map.addInteraction(this.modifyInteraction);
Also, add event handlers upon the events "modifystart" and "modifyend".
this.modifyInteraction.on("modifystart", this.modifyStartFunction);
this.modifyInteraction.on("modifyend", this.modifyEndFunction);
The functions for "modifystart" and "modifyend" look like this.
private modifyStartFunction(event) {
  const features = event.features;
  const feature = features.getArray()[0];
  this.featureAtModifyStart = feature.clone();
  this.draggedCornerAtModifyStart = "";
  feature.on("change", this.changeFeatureFunction);
}

private modifyEndFunction(event) {
  const features = event.features;
  const feature = features.getArray()[0];
  feature.un("change", this.changeFeatureFunction);
  // Removing and adding the feature to force reindexing
  // of the feature's snappable edges in OpenLayers.
  this.drawInteraction.features.clear();
  this.drawInteraction.features.push(feature);
  this.dispatchRettighetModifyEvent(feature);
}
The changeFeatureFunction is below. It is called for every single change made to the geometry while the user is still modifying/dragging one of the corners. Inside it, I call another function that adjusts the modified geometry back into a rectangle. This "rectanglify" function moves the corners adjacent to the corner the user just moved.
private changeFeatureFunction(event) {
  let feature = event.target;
  let geometry = feature.getGeometry();
  // Remove the change listener temporarily to avoid infinite recursion.
  feature.un("change", this.changeFeatureFunction);
  this.rectanglifyModifiedGeometry(geometry);
  // Re-enable the change listener.
  feature.on("change", this.changeFeatureFunction);
}
Without going into too much detail, the rectanglify function needs to (a sketch follows after the rotation helper below):
1. find the rotation of the geometry in radians
2. inversely rotate by radians * -1 (e.g. geometry.rotate(radians * (-1), anchor))
3. update the neighboring corners of the dragged corner (easier to do when the rectangle is parallel to the x and y axes)
4. rotate back by the rotation we found in step 1
--
In order to get the rotation of the rectangle, we can do this:
export function getRadiansFromRectangle(feature: Feature): number {
  const coords = getCoordinates(feature);
  const point1 = coords[0];
  const point2 = coords[1];
  const deltaY = (point2[1] as number) - (point1[1] as number);
  const deltaX = (point2[0] as number) - (point1[0] as number);
  return Math.atan2(deltaY, deltaX);
}
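For illustration, a minimal sketch of the rectanglify step could look like this (not the exact implementation; it assumes the geometry is a closed ring of five coordinates, that draggedIndex (0-3) identifies the corner the user moved, and that radians is the rectangle's rotation taken from the clone saved at modifystart):
function rectanglifyModifiedGeometry(geometry, draggedIndex, radians) {
  const extent = geometry.getExtent();
  const anchor = [(extent[0] + extent[2]) / 2, (extent[1] + extent[3]) / 2];

  // Steps 1 + 2: rotate so the unmoved corners are parallel to the axes.
  geometry.rotate(-radians, anchor);

  const ring = geometry.getCoordinates()[0];
  const dragged = ring[draggedIndex];
  const opposite = ring[(draggedIndex + 2) % 4];
  const neighbors = [ring[(draggedIndex + 1) % 4], ring[(draggedIndex + 3) % 4]];

  // Step 3: each neighbor keeps the coordinate it shares with the fixed opposite
  // corner and takes the other coordinate from the dragged corner.
  neighbors.forEach((corner) => {
    if (Math.abs(corner[0] - opposite[0]) < Math.abs(corner[1] - opposite[1])) {
      corner[1] = dragged[1]; // this neighbor shares x with the opposite corner
    } else {
      corner[0] = dragged[0]; // this neighbor shares y with the opposite corner
    }
  });
  ring[4] = ring[0].slice(); // keep the ring closed
  geometry.setCoordinates([ring]);

  // Step 4: restore the original rotation.
  geometry.rotate(radians, anchor);
}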

How to fit route bounds using the CARTO Mobile SDK

I calculate the route between two points and get the polygon produced by the separation of these two points. I create the polygon this way:
let polygon = NTPolygon(poses: vector, style: NTPolygonStyleBuilder().buildStyle())
Now I'm building a feature so that, when the route between these two points is too large, you can press a button and the map zooms out to show the bounding box of the route. For that I get the bounding box from the polygon with polygon.getBounds(), and I'm trying to use map.move(toFit: NTMapBounds!, screenBounds: NTScreenBounds!, integerZoom: Bool, durationSeconds: Float), but I don't know how to get the NTScreenBounds.
Any help with this issue is appreciated; any approach other than map.move is also welcome.
Thanks in advance.
NTScreenBounds, in this context, is the layout of your NTMapView.
Here's an example from Xamarin.iOS; you should get the gist of it:
public ScreenBounds FindScreenBounds()
{
    var min = new ScreenPos(Frame.X, Frame.Y);
    var max = new ScreenPos(Frame.Width, Frame.Height);
    return new ScreenBounds(min, max);
}

Use of 'drawPolygonGeometry()' on postCompose event with vectorContext

I'm trying to draw a circle around any kind of geometry (it could be any ol.geom type: point, polygon, etc.) in a listener registered on 'postcompose'. The purpose of this is to create an animation when a certain feature is selected.
listenerKeys.push(map.on('postcompose',
    goog.bind(this.draw_, this, data)));

this.draw_ = function(data, postComposeRender) {
  var extent = feature.getGeometry().getExtent();
  var flashGeom = new ol.geom.Polygon.fromExtent(extent);
  var vectorContext = postComposeRender.vectorContext;
  // ...ANIMATION CODE TO GET THE RADIUS WITH THE ELAPSED TIME
  var imageStyle = this.getStyleSquare_(radius, opacity);
  vectorContext.setImageStyle(imageStyle);
  vectorContext.drawPolygonGeometry(flashGeom, null);
};
The method
drawPolygonGeometry({ol.geom.Polygon}, {ol.Feature})
is not working. However, it works when I use the method
drawPointGeometry({ol.geom.Point}, {ol.Feature})
even though the type of flashGeom is ol.geom.Polygon, which I just built from an extent. I don't want to use that method because polygon extents could be received, and it animates every point of the polygon...
Finally, after analyzing how drawPolygonGeometry works in the OL3 source code, I realized that I need to apply the style with this method first:
vectorContext.setFillStrokeStyle(imageStyle.getFill(),
    imageStyle.getStroke());
drawPointGeometry and drawPolygonGeometry do not use the same style instance.
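For reference, a sketch of how the corrected handler could look with that fix applied (the animation code and getStyleSquare_ come from the question above and are assumed to return a style exposing getFill() and getStroke()):
this.draw_ = function(data, postComposeRender) {
  var extent = feature.getGeometry().getExtent();
  var flashGeom = ol.geom.Polygon.fromExtent(extent);
  var vectorContext = postComposeRender.vectorContext;
  // ...animation code to get the radius with the elapsed time...
  var imageStyle = this.getStyleSquare_(radius, opacity);
  // Polygons are drawn with the fill/stroke style, not the image style.
  vectorContext.setFillStrokeStyle(imageStyle.getFill(), imageStyle.getStroke());
  vectorContext.drawPolygonGeometry(flashGeom, null);
};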

CALayer delegate is only called occasionally, when using Swift

I'm new to iOS and Swift, so I've started by porting Apple's Accelerometer example code to Swift.
This was all quite straightforward. Since the Accelerometer API has been deprecated, I used Core Motion instead, and it works just fine. I also switched to a storyboard.
The problem I have is that my layer delegate is only rarely called. It will go for a few minutes and never get called, and then it will get called 40 times a second, and then go back to not being called. If I context switch, the delegate will get called, and one of the sublayers will be displayed, but there are 32 sublayers, and I've yet to see them all get drawn. What's drawn seems to be fine - the problem is just getting the delegate to actually get called when I call setNeedsDisplay(), and getting all of the sublayers to get drawn.
I've checked to be sure that each sublayer has the correct bounds and frame dimensions, and I've checked to make sure that setNeedsDisplay() gets called after each accelerometer point is acquired.
If I attach an instrument, I see that the frame rate is usually zero, but occasionally it will be some higher number.
My guess is that the run loop isn't cycling. There's actually nothing in the run loop, and I'm not sure where to put one. In the ViewDidLoad delegate, I set up an update rate for the accelerometer, and call a function that updates the sublayers in the view. Everything else is event driven, so I don't know what I'd do with a run loop.
I've tried creating CALayers and adding them as sublayers. I've also tried making the GraphViewSegment class a UIView, so it has its own layer.
The version that's written in Objective C works perfectly reliably.
The way that this application works, is that acceleration values show up on the left side of the screen, and scroll to the right. To make it efficient, new acceleration values are written into a small sublayer that holds a graph for 32 time values. When it's full, that whole sublayer is just moved a pixel at a time to the right, and a new (or recycled) segment takes its place at the left side of the screen.
Here's the code that moves unchanged segments to the right by a pixel:
for s: GraphViewSegment in self.segments {
    var position = s.layer.position
    position.x += 1.0
    s.layer.position = position
    //s.layer.hidden = false
    s.layer.setNeedsDisplay()
}
I don't think that the setNeedsDisplay is strictly necessary here, since it's called for the layer when the segment at the left gets a new line segment.
Here's how new layers are added:
public func addSegment() -> GraphViewSegment {
    // Create a new segment and add it to the segments array.
    var segment = GraphViewSegment(coder: self.coder)
    // We add it at the front of the array because -recycleSegment expects the oldest segment
    // to be at the end of the array. As long as we always insert the youngest segment at the front
    // this will be true.
    self.segments.insert(segment, atIndex: 0) // this is now a weak reference
    // Ensure that newly added segment layers are placed after the text view's layer so that the text view
    // always renders above the segment layer.
    self.layer.insertSublayer(segment.layer, below: self.text.layer)
    // Position it properly (see the comment for kSegmentInitialPosition).
    segment.layer.position = kSegmentInitialPosition
    //println("New segment added")
    self.layer.setNeedsDisplay()
    segment.layer.setNeedsDisplay()
    return segment
}
At this point I'm pretty confused. I've tried calling setNeedsDisplay all over the place, including the owning UIView. I've tried making the sublayers UIViews, and I've tried making them not be UIViews. No matter what I do, the behavior is always the same.
Everything is set up in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    pause.possibleTitles?.setByAddingObjectsFromArray([kLocalizedPause, kLocalizedResume])
    isPaused = false
    useAdaptive = false
    self.changeFilter(LowpassFilter)
    var accelerometerQueue = NSOperationQueue()
    motionManager.accelerometerUpdateInterval = 1.0 / kUpdateFrequency
    motionManager.startAccelerometerUpdatesToQueue(accelerometerQueue,
        withHandler: {(accelerometerData: CMAccelerometerData!, error: NSError!) -> Void in
            self.accelerometer(accelerometerData)})
    unfiltered.isAccessibilityElement = true
    unfiltered.accessibilityLabel = "unfiltered graph"
    filtered.isAccessibilityElement = true
    filtered.accessibilityLabel = "filtered graph"
}

func accelerometer(accelerometerData: CMAccelerometerData!) {
    if (!isPaused) {
        let acceleration: CMAcceleration = accelerometerData.acceleration
        filter.addAcceleration(acceleration)
        unfiltered!.addPoint(acceleration.x, y: acceleration.y, z: acceleration.z)
        filtered!.addPoint(filter.x, y: filter.y, z: filter.z)
        //unfiltered.setNeedsDisplay()
    }
}
Any ideas?
I quite like Swift as a language - it takes the best parts of Java and C#, and adds some nice syntactic sugar. But this is driving me spare! I'm sure it's some little thing that I've overlooked, but I can't figure out what.
Since you've created a new NSOperationQueue for your accelerometer updates handler, everything that handler calls is also running in a separate queue, sequestered from the main run loop. I'd suggest either running that handler on the main thread NSOperationQueue.mainQueue() or moving anything that could update the UI back to the main thread via a block on the main queue:
NSOperationQueue.mainQueue().addOperationWithBlock {
    // do UI stuff here
}

Add mouse events to WebGL objects

I'm using xtk to visualize medical data in a WebGL canvas. Currently I'm playing around with this lesson:
lesson 10
The library is pretty good but not very well documented. I want to get rid of that GUI and add some mouse events. If I load the mesh from the GUI, how can I add a mouse event to the mesh? I actually don't know where to start; it's a little bit confusing to get started with this library.
I tried
mesh.click(function() {
  alert("yes");
});
or
mesh.mousedown(function() {
  alert("yes");
});
Objects rendered in WebGL are not part of the DOM, and as such don't generate events like DOM elements do. This means that for events like these you have to implement the mouse interaction code yourself.
Traditionally in WebGL/OpenGL this process is known as "picking", and there are several decent resources for it online. (For example: http://webgldemos.thoughtsincomputation.com/engine_tests/picking) The core process is something like this, though:
1. For each pickable object in your scene, assign it a color. Put this in a lookup table somewhere.
2. Re-render the entire scene to a texture, rendering each pickable object with its assigned color.
3. Once the scene is rendered, determine your mouse coordinates and read back the color of the texture at that X/Y.
4. Fetch the object associated with that color from your lookup table. This is the object your mouse cursor is pointing at!
As you can see, while not a difficult method conceptually, this involves several mid-level WebGL topics, such as rendering to a texture, and as such is not usually recommended for beginners. I'm not sure if there are any features in xtk to assist with this (honestly I had never heard of the library before your post), but I would guess that this is something you'll have to implement on your own.
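To make the read-back step concrete, here is a minimal sketch of just that part (assuming the scene has already been rendered into a picking framebuffer with each pickable object drawn in its lookup color, and that colorToObject is that lookup table; all names here are illustrative):
function pickAt(gl, pickingFramebuffer, x, y, colorToObject) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, pickingFramebuffer);
  var pixel = new Uint8Array(4);
  // WebGL's y axis starts at the bottom of the canvas, so flip the mouse y coordinate.
  gl.readPixels(x, gl.drawingBufferHeight - y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  var key = pixel[0] + ',' + pixel[1] + ',' + pixel[2];
  return colorToObject[key]; // undefined means the cursor is over empty space
}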
DOM events are not supported but you can do it with xtk. Check out this JSFiddle
http://jsfiddle.net/haehn/r7Ugf/
// create and initialize a 3D renderer
var r = new X.renderer3D();
r.init();

// create a cube and a sphere
cube = new X.cube();
sphere = new X.sphere();
sphere.center = [-20, 0, 0];

r.interactor.onMouseMove = function() {
  // grab the current mouse position
  var _pos = r.interactor.mousePosition;
  // pick the current object
  var _id = r.pick(_pos[0], _pos[1]);
  if (_id != 0) {
    // grab the object and turn it red
    r.get(_id).color = [1, 0, 0];
  } else {
    // no object under the mouse
    cube.color = [1, 1, 1];
    sphere.color = [1, 1, 1];
  }
  r.render();
};

r.interactor.onMouseDown = function(left, middle, right) {
  // only observe right mouse clicks
  if (!right) return;
  // grab the current mouse position
  var _pos = r.interactor.mousePosition;
  // pick the current object
  var _id = r.pick(_pos[0], _pos[1]);
  if (_id == sphere.id) {
    // turn the sphere green
    sphere.color = [0, 1, 0];
    r.render();
  }
};

r.add(cube);   // add the cube to the renderer
r.add(sphere); // and the sphere as well
r.render();    // ..and render it
Easy, no?
XTK implements picking the way Toji explained (i.e. with a framebuffer in which every object is rendered in a different RGBA "color"). It will work as long as you have fewer than 255^4 objects, so almost always. There are other methods, like unprojecting, but I think they would take longer.
So with X.renderer.pick and X.renderer.get you can find the object under the mouse and change its properties. However, for the moment you can only change visualization properties (see the setGetter and setSetter in every class); you cannot move an X.object (since the X.object._transform attribute is private and there is no getter/setter for it yet).
That would be an interesting thing to work on: adding a getter/setter pair for X.object's transform would allow, for example, a user to place medical objects (modeled as a mesh or something else) in the scene and position them to measure distances or to see whether something will fit for an operation. Wouldn't that be a good idea, Haehn? And it's a minor change in the framework.
