I'm a new user of OpenLayers 3. My web page has an OSM layer added with OL3, and I added some vector layers (markers) with ol.layer.Vector. I need to change which vector layer is shown when the zoom level changes. How can I do this?
You can even define the maximum and minimum resolutions directly when creating the vector layer.
The ol.layer.Vector class has these options:
minResolution: The minimum resolution (inclusive) at which this layer will be visible.
maxResolution: The maximum resolution (exclusive) below which this layer will be visible.
If you don't know the resolution for a map view, you can use map.getView().getResolution() to find it out.
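For instance, a minimal sketch of a marker layer that is only visible within a resolution range (the resolution values here are placeholder assumptions):

    // marker layer that is only drawn between these resolutions
    var markerLayer = new ol.layer.Vector({
      source: new ol.source.Vector(),
      minResolution: 2,    // visible at resolutions >= 2 (inclusive)
      maxResolution: 500   // hidden once the resolution reaches 500 (exclusive)
    });
    map.addLayer(markerLayer);

    // check the current resolution to pick sensible thresholds
    console.log(map.getView().getResolution());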
Layers have setVisible()/getVisible() methods, so you can toggle them at a particular zoom level. The zoom can be read on 'moveend' events, and from there you decide whether a particular layer should be visible or not.
The zoom is available from map.getView().getZoom() (it returns the zoom level as a number); based on that, tell the layer to show or hide accordingly.
Layers are held in a Collection (array-like) object that you get via map.getLayers(), and from there you can choose which one to show or hide. When I add layers I record their order so I can get at one directly.
map.getLayers().item(0) returns the first layer I added, item(1) the second, and so on.
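A rough sketch of that approach, assuming the marker layer was added first and using a zoom threshold of 10 (both assumptions):

    // toggle the first layer's visibility whenever the view stops moving
    map.on('moveend', function() {
      var zoom = map.getView().getZoom();
      var markerLayer = map.getLayers().item(0);
      markerLayer.setVisible(zoom >= 10);  // show only at zoom 10 and above
    });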
The hierarchy I'm using looks something like this:
FRONT
1 - layer with several features (e.g. point data)
2 - layer with several features (e.g. path data)
3 - layer with several features (e.g. region data)
BACK
When the user selects a feature in layer 3 with an ol.interaction.Select, the default behavior is to render the selected feature in front of layer 1. How can I prevent this re-ordering?
I had a nearly identical problem: when I selected the region, the points and other vector geometries were hidden behind the selected feature. In my use case, a special style with a transparent fill was the solution.
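Something along these lines, as a sketch (the colors are arbitrary assumptions):

    // select interaction whose style keeps the fill mostly transparent,
    // so geometries behind the selected region stay visible
    var select = new ol.interaction.Select({
      style: new ol.style.Style({
        stroke: new ol.style.Stroke({color: 'rgba(0, 153, 255, 1)', width: 2}),
        fill: new ol.style.Fill({color: 'rgba(0, 153, 255, 0.1)'})
      })
    });
    map.addInteraction(select);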
If you want a solution based on the layer rendering, I would suggest handling it with the zIndex. ol.interaction.Select doesn't provide a setZIndex, but if you take a look at the source code (https://github.com/openlayers/ol3/blob/v3.13.0/src/ol/interaction/selectinteraction.js) you can see that the interaction saves the selected features in an unmanaged layer, which you can access in different ways (map.getLayers().getArray().forEach ... or something similar), and I'm sure you can set the zIndex there. Note that the default zIndex is 0; a higher index renders the layer above, a lower one below.
Here's how the select interaction works in OL3. The selected feature is marked to be skipped during rendering; in other words, it disappears from the layer it originally comes from when the map re-renders after selection.
It is then added to another layer that is not added to the map with the conventional map.addLayer method, but set on the map with layer.setMap. That layer ends up being drawn on top of all the others, which results in the "selected" feature appearing on top of everything else.
This architecture was chosen to improve performance. However, you can easily achieve what you want, i.e. have the clicked feature stay where it is and change its style, by simply using the map.forEachFeatureAtPixel method and implementing your own concept of selection there. It would be slower, but simple enough to accomplish what you want.
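A minimal sketch of such a hand-rolled selection (selectedStyle is an ol.style.Style you would define yourself; this is not the built-in interaction):

    var selected = null;
    map.on('click', function(evt) {
      if (selected) {
        selected.setStyle(null);  // revert to the layer's own style
        selected = null;
      }
      map.forEachFeatureAtPixel(evt.pixel, function(feature) {
        feature.setStyle(selectedStyle);  // restyle in place, no layer re-ordering
        selected = feature;
        return true;  // stop at the topmost feature
      });
    });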
I want to know how I can get the type of a layer and of a source in OL3.
Background: I have a map which the user can modify. The user can (de)activate layers from different sources, and I want to extract all the settings the user has made, such as the visible layers, the center point, the resolution and more, to rebuild the map later.
At the moment I want to collect all the layers and their sources, but I'm not able to get the layer type, so I don't know whether it's a Tile layer, an Image layer, etc.
Unfortunately object.constructor.name is an empty string. So, any other ideas?
Use instanceof, e.g. layer instanceof ol.layer.Tile.
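For example, to record the type and visibility of each layer for rebuilding the map later (the set of classes checked here is just an assumption, extend it as needed):

    var settings = [];
    map.getLayers().forEach(function(layer) {
      var type = layer instanceof ol.layer.Tile ? 'tile' :
                 layer instanceof ol.layer.Image ? 'image' :
                 layer instanceof ol.layer.Vector ? 'vector' : 'unknown';
      settings.push({type: type, visible: layer.getVisible()});
    });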
While changing zoom levels on a map, its layers appear stretched, drawn from the latest available data, i.e. lower-resolution tiles, images, etc.
Is it possible to disable this effect? I would like the layer to be shown only after its "loadend", but the layer's data is only loaded if it's visible...
I've dug into the rendering workflow, but it's not easy to find a consistent way to achieve this across all the renderers.
Is there any way to totally disable bounds on a map? I'm using the iOS SDK.
I have a base layer which works well, but every time I add a new top layer the bounds get magically locked to whatever extents the new layer has. I need people to be able to drag between cities on the map as a whole.
Check out the RMCompositeSource tile source, which uses Core Graphics to composite an array of sub-tile sources into single tiles for display, and uses the bounds and zoom limits of the composite source instead. You might get a performance boost out of this as well.
I'm learning moving-object detection using a sequence of frames.
This is an example of two frames. I need to select the moved object in the right frame.
I can subtract one frame from the other. In the selected area the result would be non-zero, meaning there was movement in that area. But if you look at the right frame, you can see that some background gets selected as well.
Can I somehow separate the car from the background?
I guess the method where we collect the background pixels and then subtract the image from the background is useless with only two frames, right?
You are right that the method does not work very well with only two frames. The method you describe works best when you have one image with only background, which you can then use to compare with new images to look for movement.
It is possible to calculate the movement of the object with only two frames, but then you probably need more advanced methods, such as optical flow or image registration algorithms.
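For reference, here is a minimal sketch of the plain two-frame difference described in the question, in browser JavaScript (frameA, frameB and the threshold are assumptions; as discussed above, it will also flag background uncovered by the moving object):

    // frameA and frameB are same-size <canvas> elements holding the two frames
    function diffFrames(frameA, frameB, threshold) {
      var a = frameA.getContext('2d').getImageData(0, 0, frameA.width, frameA.height).data;
      var b = frameB.getContext('2d').getImageData(0, 0, frameB.width, frameB.height).data;
      var movedPixels = [];
      for (var i = 0; i < a.length; i += 4) {
        // compare grayscale intensity of corresponding pixels
        var ga = (a[i] + a[i + 1] + a[i + 2]) / 3;
        var gb = (b[i] + b[i + 1] + b[i + 2]) / 3;
        if (Math.abs(ga - gb) > threshold) {
          movedPixels.push(i / 4);  // pixel index where the difference exceeds the threshold
        }
      }
      return movedPixels;
    }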