Prevent tiles loading in OpenLayers 3 if those tiles are absent - openlayers-3

I have a tile server and a small image in the XYZ folder structure, an image that covers only a portion of the world. I have already managed to prevent loading at zoom levels (Z) that are not present on that map (Stop loading a tile in OpenLayers 3 tileloadstart event). Now I need to limit loading for X and Y as well, because many folders and tiles are obviously absent. How can this be done? Could I throw some sort of error on the server side that makes OpenLayers understand not to request those tiles anymore because they are absent? If so, how could this be done?

You can further restrict a tile grid by specifying an extent. You will need to do that using ol.tilegrid.TileGrid, although you can get the default origin and resolutions to use from ol.tilegrid.createXYZ:
// Default XYZ grid for the zoom range actually served
var defaultTileGrid = ol.tilegrid.createXYZ({
    minZoom: 2,
    maxZoom: 8
});
// Same origin and resolutions, but clipped to the area the tiles cover
// (restrictedExtent is computed below)
var restrictedTileGrid = new ol.tilegrid.TileGrid({
    extent: restrictedExtent,
    minZoom: defaultTileGrid.getMinZoom(),
    origin: defaultTileGrid.getOrigin(0),
    resolutions: defaultTileGrid.getResolutions()
});
You can also specify an extent on a layer (but that must be in the view projection, which can be different from the tile grid projection). For example:
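A minimal sketch of the layer-level restriction (assuming restrictedExtent is already in view-projection coordinates and source is your ol.source.XYZ):

// Tiles outside this extent are neither requested nor rendered
var layer = new ol.layer.Tile({
    extent: restrictedExtent,
    source: source
});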
If you know the minimum and maximum X and Y tile indices at the maximum Z value, you can calculate the restricted extent from the corner tiles.
In OpenLayers 6:

var restrictedExtent = ol.extent.extend(
    defaultTileGrid.getTileCoordExtent([maxZ, minX, minY]),
    defaultTileGrid.getTileCoordExtent([maxZ, maxX, maxY])
);

or, in earlier versions (where the Y tile coordinate is inverted):

var restrictedExtent = ol.extent.extend(
    defaultTileGrid.getTileCoordExtent([maxZ, minX, -minY - 1]),
    defaultTileGrid.getTileCoordExtent([maxZ, maxX, -maxY - 1])
);
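To tie it together, here is a sketch of handing the restricted grid to the source, so OpenLayers never requests the absent tiles (the URL is a placeholder):

var layer = new ol.layer.Tile({
    source: new ol.source.XYZ({
        url: 'https://example.com/tiles/{z}/{x}/{y}.png', // placeholder URL
        tileGrid: restrictedTileGrid // built from restrictedExtent above
    })
});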


Trouble implementing shadows in WebGL

I am trying to implement shadows in my WebGL 2.0 project using this tutorial:
https://webgl2fundamentals.org/webgl/lessons/webgl-shadows.html
Currently I am getting really bad results like this:
Basically, a ton of the terrain is being drawn in shadow that shouldn't be. The light projection is from the camera toward the direction you are looking, so hypothetically you shouldn't be able to see any shadows, because the light projection is the same as your camera (I am just doing this for testing until I can get it working properly).
I believe I have everything the same as the tutorial, except that I am using glMatrix instead of their matrix math library (which shouldn't matter, I would assume). Here's the thing, though: I don't use a model view matrix for anything I am rendering, so none of my points are in a -1 to 1 range. They can go out as far as -3200, etc. It's just all one big terrain mesh, chunked out.
I think the issue lies with how I am creating the texture matrix:
// Map clip space [-1, 1] to texture space [0, 1]
textureMatrix = glMatrix.mat4.create();
glMatrix.mat4.translate(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
glMatrix.mat4.scale(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
// Then apply the light's projection and the inverted light (view) matrix
glMatrix.mat4.multiply(textureMatrix, textureMatrix, projectionMatrix);
glMatrix.mat4.invert(lightMatrix, lightMatrix);
glMatrix.mat4.multiply(textureMatrix, textureMatrix, lightMatrix);
I am using the same matrix for the light projection as for the normal camera projection; is that an issue? If anyone could help, it would be greatly appreciated.
That's probably because the Y position of your light (in your example it is more like the distance between the eye and the scene) is too big for the Z size of your shadow volume (the size of your shadow volume in the view direction). Here, posY is inside the wireframe box:
But if you increase posY too much (i.e. your shapes get out of the shadow volume), they disappear.
So you should increase the size of your shadow volume (or shrink your scene; either way works). You cannot simulate that with the sliders, because they only give you control over the X and Y dimensions: projWidth and projHeight.
For example, in the last code block on your tutorial page, change the last parameter ("far") from 10 to 100:
const lightProjectionMatrix = settings.perspective
    ? m4.perspective(
        degToRad(settings.fieldOfView),
        settings.projWidth / settings.projHeight,
        0.5,  // near
        10)   // far
    : m4.orthographic(
        -settings.projWidth / 2,   // left
        settings.projWidth / 2,    // right
        -settings.projHeight / 2,  // bottom
        settings.projHeight / 2,   // top
        0.5,   // near
        100);  // far, increased from 10
Then you can increase posY far more.
Without your full code, it is hard to reproduce and help you. Could you try to just inject your scene into the tutorial code? You can bind the viewpoint to the source and orientation of the light by using the same inputs (just adding 0.5 to X to see a bit of shadow and make sure it is properly computed):
/*const cameraPosition = [settings.cameraX, settings.cameraY, 15];*/
const cameraPosition = [settings.posX+0.5, settings.posY, settings.posZ];
/*const target = [0, 0, 0]; */
const target = [settings.targetX, settings.targetY, settings.targetZ];

Highcharts Vector Plot with connected vectors of absolute length

Scenario: I need to draw a plot that has a background image. Based on the information in that image, there have to be multiple origins (let's call them 'targets') that can move over time. The movements of these targets have to be indicated by arrows/vectors, where the first vector originates at the location of the target, the second vector originates where the previous vector ended, and so on.
The result should look similar to this:
Plot with targets and movement vectors
While trying to implement this, I stumbled upon a few questions:
I would use a chart with combined series: a scatter plot to add the targets at exact x/y locations and a vector plot to insert the vectors. Would this be a correct way?
Since I want to set each vector's starting point to exact x/y coordinates, I use rotationOrigin: 'start'. When I now change vectorLength to something other than 20, the vector is still shifted by 10 pixels (http://jsfiddle.net/Chop_Suey/cx35ptrh/). This looks like a bug to me. Can it be fixed, or is there a workaround?
When I define a vector, it looks like [x, y, length, direction]. But length is a relative unit that is calculated, with some magic, relative to the longest vector, which is 20 (pixels) by default or whatever I set vectorLength to. Thus, the vectors are not connected, and the space between them changes depending on plot size and axes min/max. I actually want to correlate the length with the plot axes (which might be tricky, since the x-axis and y-axis might have different scales). A workaround could be to add a redraw event, recalculate the vectors on every resize, and set vectorLength to the currently longest vector (which again can be calculated to correlate with the axes). This is very cumbersome, and I would prefer to set the vectors somehow like [x1, y1, x2, y2], where (x1/y1) denotes the starting point and (x2/y2) the ending point of the vector. Is this possible somehow? Any recommendations?
Since the background image is not just decoration but is relevant for the displayed data to make sense, it should change when I zoom in. Is it possible to 'lock' the background image to the original plot min/max, so that when I zoom in the background image is also zoomed (image quality does not matter)?
Combining these two series shouldn't be problematic at all, and that is the correct way, but it is necessary to change the prototype functions a bit so that the vectors are drawn differently. Here is the example: https://jsfiddle.net/6vkjspoc/
There is probably a bug in this module, and we will report it as a new issue as soon as possible. However, we made a workaround (or fix) for it, and now it works well, as you can see in the example above.
Vector length is currently calculated using a scale: if vectorLength is equal to 100 (for example), and the vector series has two points which look like this:
{
    type: 'vector',
    vectorLength: 100,
    rotationOrigin: 'start',
    data: [
        [1, 50000, 1, 120],
        [1, 50000, 2, -120]
    ]
}
then the highest length of all points is taken, and based on it a scale is calculated for each point; so the first point's on-screen length is 50, because the algorithm is point.length / lengthMax, as you can deduce from the code below:
H.seriesTypes.vector.prototype.arrow = function (point) {
    var path,
        fraction = point.length / this.lengthMax,
        u = fraction * this.options.vectorLength / 20,
        o = {
            start: 10 * u,
            center: 0,
            end: -10 * u
        }[this.options.rotationOrigin] || 0;

    // The stem and the arrow head. Draw the arrow first with rotation 0,
    // which is the arrow pointing down (vector from north to south).
    path = [
        'M', 0, 7 * u + o, // base of arrow
        'L', -1.5 * u, 7 * u + o,
        0, 10 * u + o,
        1.5 * u, 7 * u + o,
        0, 7 * u + o,
        0, -10 * u + o // top
    ];

    return path;
};
Regarding your question about defining the start and end of a vector by two x/y values: you would need to refactor the entire series code so that it uses neither vectorLength nor the scale, because you would be defining each point's length yourself. I suspect that would be a very complex solution, so you can try to do it yourself and let me know about the results. A lighter-weight alternative is to convert your segments into the existing data format, as sketched below.
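A minimal sketch of that conversion, turning [x1, y1, x2, y2] segments into the [x, y, length, direction] points the series already understands. The direction convention here is an assumption read off the arrow() code above (0 draws the arrow pointing from north to south and grows clockwise, compass-style), and it presumes roughly comparable x and y axis scales; verify both against your own chart:

// Hypothetical helper: [x1, y1, x2, y2] -> [x, y, length, direction]
function segmentToVectorPoint(x1, y1, x2, y2) {
    var dx = x2 - x1;
    var dy = y2 - y1;
    var length = Math.sqrt(dx * dx + dy * dy); // in axis units
    // Assumed compass bearing of the side the vector points away from
    var direction = Math.atan2(-dx, -dy) * 180 / Math.PI;
    if (direction < 0) {
        direction += 360;
    }
    return [x1, y1, length, direction];
}

// Usage: a chain of segments, each starting where the previous one ended
var segments = [[0, 0, 2, 3], [2, 3, 5, 4]];
var data = segments.map(function (s) {
    return segmentToVectorPoint(s[0], s[1], s[2], s[3]);
});

Note that the stock module still rescales all lengths relative to the longest one, so this pairs best with recalculating vectorLength on zoom, as described below.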
In order to make it work, you need to recalculate and update the vectorLength of your vector series inside the chart.events.selection handler. Here is the example: https://jsfiddle.net/nh7b6qx9/
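A minimal sketch of that recalculation (assumptions: a single vector series, and that series.lengthMax, an internal property the module computes during translation, is readable here; treat it as illustrative rather than a stable API):

chart: {
    events: {
        selection: function (event) {
            if (event.xAxis) { // zooming in: event.xAxis[0] holds the new range
                var pixelsPerUnit = this.plotWidth /
                    (event.xAxis[0].max - event.xAxis[0].min);
                // Keep the longest arrow's on-screen size in step with the axis
                this.series[0].update({
                    vectorLength: this.series[0].lengthMax * pixelsPerUnit
                });
            }
        }
    }
}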

change openlayers3 zoom sensitivity

I'm currently using OL3 with a tiled map (ol.layer.Tile) with 4 zoom levels of tiles. I want the zoom to be more sensitive and allow 8 zoom levels instead of 4, two for each tile level. Any ideas?
According to the docs, you can manage the zoom settings with several properties of your View object:
maxResolution: The maximum resolution used to determine the resolution constraint. It is used together with minResolution (or maxZoom) and zoomFactor. If unspecified, it is calculated in such a way that the projection's validity extent fits in a 256x256 px tile. If the projection is Spherical Mercator (the default), then maxResolution defaults to 40075016.68557849 / 256 = 156543.03392804097.

minResolution: The minimum resolution used to determine the resolution constraint. It is used together with maxResolution (or minZoom) and zoomFactor. If unspecified, it is calculated assuming 29 zoom levels (with a factor of 2). If the projection is Spherical Mercator (the default), then minResolution defaults to 40075016.68557849 / 256 / Math.pow(2, 28) = 0.0005831682455839253.

maxZoom: The maximum zoom level used to determine the resolution constraint. It is used together with minZoom (or maxResolution) and zoomFactor. Default is 28. Note that if minResolution is also provided, it is given precedence over maxZoom.

minZoom: The minimum zoom level used to determine the resolution constraint. It is used together with maxZoom (or minResolution) and zoomFactor. Default is 0. Note that if maxResolution is also provided, it is given precedence over minZoom.

zoomFactor: The zoom factor used to determine the resolution constraint. Default is 2.
Example to get 2 view zoom levels for each tile-grid zoom level of your XYZ source (the zoomFactor must then be the square root of the grid's factor of 2, i.e. Math.SQRT2 ≈ 1.414):
var source = new ol.source.XYZ({
    // your source configuration here
});
var resolutions = source.getTileGrid().getResolutions();
var map = new ol.Map({
    layers: [
        new ol.layer.Tile({
            source: source
        })
    ],
    target: 'map',
    view: new ol.View({
        center: [0, 0],
        zoom: 0,
        maxResolution: resolutions[0],
        minResolution: resolutions[resolutions.length - 1],
        zoomFactor: Math.SQRT2 // two view steps per power-of-two tile step
    })
});
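As a rough sanity check (a sketch reusing the resolutions array from above, and assuming the tile grid's resolutions halve at each level), the number of view zoom steps between maxResolution and minResolution is log(maxRes / minRes) / log(zoomFactor):

var maxRes = resolutions[0];
var minRes = resolutions[resolutions.length - 1];
var steps = Math.log(maxRes / minRes) / Math.log(Math.SQRT2);
console.log(steps); // a 4-level grid (3 halvings) gives 6 steps, i.e. 7 view levels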

How to draw tiles on Map from Database?

I am developing a GIS-based iOS application using Swift 3.0. I want to draw tiles on a map; the tile images are stored in an SQLite database.
My question is: how can I retrieve a tile image from the database and draw those images on the map? The database contains the columns zoom (values like 12, 13, etc.), tile_row (values like 4122, 3413, etc.), tile_column (values like 4122, 3413, etc.) and data, but I get the zoom level value in the thousands, and I get latitude and longitude values in the iOS app, so I need to convert these values to match the ones in the database. I found a way to convert the zoom level to a 1-18 scale, but I don't know how to match the tile_row and tile_column values using latitude and longitude.
Also, please verify that my code which converts the zoom level to 1-18 (similar to the Google Maps zoom level) is correct:
let zoomLevel = Int(log2(360 / MKCoordinateRegionForMapRect(mapRect).span.longitudeDelta))
Thanks.
In iOS (and most mapping systems) the tile images are stored in a directory structure where each zoom level is a parent directory named with the zoom-level number. Under that directory there are one or more directories named with the longitudinal (x) tile numbers of the tiles below; e.g., at zoom level 10 there are 1024 tiles across, and your directories could be 750, 751, 752, and 753 if that's where their images fell. Under each longitude (x-coordinate) directory are the images (256 x 256 pixels), each named for its latitudinal (y) tile coordinate, again out of 1024 at zoom level 10.
To find where you are in those ranges, use MKMapPointForCoordinate(CLLocationCoordinate2D), which will give you the lat (y) and lon (x) map points of a location when fully zoomed out. To get the longitude (x) tile number, use:
Int((pow(2.0, Double(z)) as Double) * point.x / 268435456.0)
...where the big number is the total number of map points along the x-axis (2^20 tiles * 256 points per tile). That way, if point.x is 1/3 of the big number, the image is the tile 1/3 of the way through 1024, and the integer representing the 256-point interval (i.e., tile) that the number falls into is the name of the directory.
The latitude (y) map point and tile number are calculated similarly, and that number is the name of the image file. So, at zoom level 10, the image for tile 752 of 1024 along the x-axis and 486 of 1024 along the y-axis would be in the file:
...Documents/Maps/yourDirectory/10/752/486.png
...provided you name your overall map directory Maps and the specific directory for this set of tiles yourDirectory. When you use the overlay, you'd use this directory information along with the rest of the setup to instantiate an MKTileOverlay object. Note that the offsets are counted from the bottom-left corner unless you specify that they're reversed, since MapKit is thinking in x and y axes (remind you of using CoreGraphics for a UIImage?).
Finally, here's how I calculate the zoom level given two corner points of a region that I want to capture a snapshot for:
let position1 = MKMapPointForCoordinate(bottomRight)
let position2 = MKMapPointForCoordinate(topLeft)
let xPosition1 = position1.x / Setting.shared.mapScale
let xPosition2 = position2.x / Setting.shared.mapScale
let yPosition1 = position1.y / Setting.shared.mapScale
let yPosition2 = position2.y / Setting.shared.mapScale
let relativeSpanX = xPosition1 - xPosition2 // X distance between points relative to size of full map
let relativeSpanY = yPosition1 - yPosition2 // Y distance between points relative to size of full map
let spanForZoom = max(relativeSpanX, relativeSpanY)
startingZoom = max(10, Int(log2(1.0 / spanForZoom)) - 1)
That gets you the zoom level for a tile that fits the size of the area that includes both points, but note that no single standard tile of that size (or any size less than the full map) may include both points, depending on where they lie relative to the grid. For example, if they span the prime meridian, the first split into 4 tiles at zoom level 1 will separate them, so you may need 2-4 tiles of the starting zoom size to cover both. Ideally, write a function that tells you the tile number (x and y) that includes a given CLLocationCoordinate2D, since that gets very handy as you pick, download, and collect your tiles.
While you can get away with a geometrical approach like the one you're showing for longitudinal (x-axis) values, MKMapPointForCoordinate() is indispensable for latitude (y-axis) calculations, since the Mercator map is non-linear as you move north or south, and the function takes care of that for you.
That should get you started, but it's a picky process. One thing to focus on is the fact that the layout is always from the lower left; it's easy to get confused as you gather and label the tiles.
I use the following function to calculate the x and y coordinates for the tile that a point is in for a given zoom level:
func getTileCoordinates(location: CLLocationCoordinate2D, z: Int) -> (x: Int, y: Int) {
    let point = MKMapPointForCoordinate(location)
    let locationX = Int((pow(2.0, Double(z)) as Double) * point.x / 268435456.0)
    let locationY = Int((pow(2.0, Double(z)) as Double) * point.y / 268435456.0)
    return (locationX, locationY)
}
...again, where 268435456.0 is the total number of map points along the x or y axis of the zoom-level-20 map (2^20 tiles * 256 points per tile). Note that all of this is for Apple's MapKit maps and the functions that display them.
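For reference, the same x/y tile math can be written without MapKit using the standard Web Mercator (slippy map) formulas. A language-neutral JavaScript sketch, assuming 2^z tiles per axis and y counted from the top (north), as most XYZ tile servers do:

// Standard slippy-map tile indices from longitude/latitude
function tileForCoordinate(lonDeg, latDeg, z) {
    var n = Math.pow(2, z); // tiles per axis at this zoom level
    var x = Math.floor((lonDeg + 180) / 360 * n);
    var latRad = latDeg * Math.PI / 180;
    var y = Math.floor(
        (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n
    );
    return { x: x, y: y };
}

// Example: tileForCoordinate(13.4, 52.5, 12) -> { x: 2200, y: 1343 }

If your database's tile_row counts from the bottom instead (the TMS convention, common in MBTiles files), flip it with tile_row = n - 1 - y.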

The size of the terrain rendered from heightmap

I'm quite new to XNA, so excuse me if I ask a 'silly' question, but I couldn't find an answer.
I have a problem with the terrain rendered from a heightmap: the terrain I get is too small. I need something larger for my game, but I'd like to keep the height data updated so I can check for collisions later (the height data being a two-dimensional array which holds the height of each point; in my program it's called 'dateInaltime').
The problem is that if I modify the scale of the terrain, the collision checker will use the old values (from the original/small terrain), so I'll get wrong collision points.
My terrain class looks like this.
How can I make the terrain larger but also extend the height data array?
Change this part:
vertex[x + y * lungime].Position = new Vector3(x, dateInaltime[x, y], -y);
to:
vertex[x + y * lungime].Position = new Vector3(x, dateInaltime[x, y], -y) * new Vector3(10);
It should separate the vertices by a scale of 10 (or whatever number you choose). Note that multiplying by new Vector3(10) scales the heights as well; use new Vector3(10, 1, 10) if you only want to stretch the horizontal spacing. Either way, the height data array itself doesn't need to grow: have the collision checker divide the query position by the same scale before indexing into dateInaltime.
