I have to implement a very custom tiled map layer/view on iOS 5/6 (iPhone), which loads tiles as images and/or data (JSON) from a server. It is similar to Google Maps, but so specific in certain points that I cannot easily use an existing solution such as:
google maps or
route-me (used by MapBox)
CATiledLayer in combination with UIScrollView
The thing is: none of the solutions out there really helps me, because of my specific requirements. If you think there is a suitable solution, please tell me!
If not:
Help me find the best possible solution for my case!
"But why can I not use these beautiful solutions?"
There are a few constraints that have to be known:
We only use 3 zoom levels (0, 1 and 2)
Every tile has 9 subtiles in the next zoom level (= zoom factor 3), not 4 subtiles (= zoom factor 2) like most other kits use
The first layer has an initial size of 768x1024 (in pixels; in points it is half that)
The second layer is three times as wide and tall (zoom factor 3!) -> 2304x3072
The third layer is again three times as wide and tall as the 2nd -> 6912x9216
Each tile that comes from the server is 256x256 pixels
So every sublevel has 9 times the number of tiles (12 on the 1st level, 108 on the 2nd, 972 on the 3rd)
Every tile has a background image (ca. 6 KB, loaded as an image from the server) and foreground information (loaded as JSON per tile from the server, 10-15 KB)
-> the foreground JSON contains either an overlay image (such as traffic in Google Maps) or local tile information to be drawn into the local tile coordinate space (like annotations, but per tile)
I want to cache all background tiles on disk, as they never change
I also want to cache the overlay tile images/information for each tile for a certain amount of time, after which they should be reloaded
Zooming should be with pinching and double-tapping
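For reference, the geometry described above can be sanity-checked with a short sketch (Python here purely for the arithmetic; the 256 px tile size, 768x1024 base size and zoom factor 3 are the numbers from the specs above):

```python
TILE = 256           # tile size in pixels, per the specs
BASE_W, BASE_H = 768, 1024   # level-0 map size in pixels
ZOOM_FACTOR = 3

def grid_size(level):
    """Tiles across / down at a given zoom level (0, 1, 2)."""
    scale = ZOOM_FACTOR ** level
    # the base dimensions are exact multiples of the tile size, so no rounding is needed
    return (BASE_W * scale // TILE, BASE_H * scale // TILE)

def tile_count(level):
    cols, rows = grid_size(level)
    return cols * rows

print([tile_count(l) for l in range(3)])  # [12, 108, 972]
```

This reproduces the 12 / 108 / 972 tile counts the question states.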
A few of my considerations:
The caching is not the problem; I do it via Core Data or similar
I thought of a UIScrollView for smooth scrolling
I'd like to use pinching, so every time I cross into the next zoom level, I have to draw that level's tiles
Content should only be drawn in the visible area (for me, on iPhone 5: 320x500)
Tiles that are no longer visible should be released to stay memory efficient - but not instantly, only once a tile is a certain distance (in points) from the visible center. There should be a "display cache" to instantaneously show tiles that were just loaded and displayed. Or is there a better technique out there?
I can load the background tiles instantly from disk, or asynchronously from the server, since I know which tiles are visible - and the same for the tile JSON. I then extract the JSON information for the tile ("is the overlay a direct image, or information such as a city name that I have to draw into the tile?") and draw or load the overlay (from DB/disk or the server)
Is UIScrollView efficient enough to handle the maximum size of the tile view?
Should I use a CALayer as a sublayer to draw into? Should I use Quartz to draw directly onto one big "canvas"?
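To make the "only draw the visible area" idea concrete, here is a minimal sketch of the visible-tile computation (Python for illustration; the function name and the `margin` parameter are my own invention, not from the question - the margin pads the visible rect so neighbouring tiles are preloaded before they scroll in):

```python
TILE = 256  # tile size, per the specs above

def visible_tiles(origin_x, origin_y, width, height, cols, rows, margin=1):
    """Return (col, row) indices of the tiles intersecting the visible rect,
    padded by `margin` extra tiles on each side (clamped to the grid).
    All arguments are non-negative integers in the map's pixel space."""
    c0 = max(0, origin_x // TILE - margin)
    r0 = max(0, origin_y // TILE - margin)
    c1 = min(cols - 1, (origin_x + width - 1) // TILE + margin)
    r1 = min(rows - 1, (origin_y + height - 1) // TILE + margin)
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

For a 320x500 viewport at the top-left of the 3x4 level-0 grid, this yields 4 tiles with no margin and 9 with a one-tile margin - the set you would keep alive while releasing everything further away.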
Now it's your turn!
Apple has an example that shows zooming into a single image very deep. They use a CATiledLayer for this.
The example can be found here: http://developer.apple.com/library/ios/#samplecode/PhotoScroller/Introduction/Intro.html
The example works with subtiles that are stored on the phone, but maybe it can be adapted to your problem.
I have used my own TiledLayer implementation for this type of non-standard tiling issue that CATiledLayer does not support (and because CATiledLayer still has some unresolved bugs).
That implementation is now available on GitHub.
Fab1n: this is the same implementation that I already sent you by e-mail.
I came up with a very neat solution (a custom TiledLayer implementation):
I don't use CATiledLayer as the contentView of the scroll view anymore. Instead I added a CALayer subclass to it and added my tiles to it as sublayers. Each tile holds its bitmap as contents. When zooming in with the scroll view, I switch tiles manually based on the current zoom level. That's perfect!
If somebody wants to know details of the implementation - no problem, write a comment and I'll post them here.
EDIT:
Details about the implementation:
The implementation is in fact really easy:
Add a UIScrollView to your view/controller's view
Add a UIView (from now on referred to as tileView) as the contentView of the scroll view - the same size as the scroll view (this is the completely zoomed-out state - factor 1)
Activate zooming (I used a minimum zoom factor of 1 and a maximum of 9)
If you now place an image into the tileView, zooming to level 9 will give you a very blurry, pixelated image - and that's what we want (I'll explain why)
If you want crisp, sharp images when you zoom in, add CALayer instances (tiles) to the tileView with addSublayer:
Assume a zoomFactor of 3 (as I had), meaning 1 tile in level 0 (zoomFactor 1) consists of 3x3 = 9 tiles in level 1 (zoomFactor 3)
In level 0, say you put 3x4 tiles of 243x243 pixels (divisible by 3 several times) as sublayers into the tileView, setting same-size tile images as the CALayers' contents. Zooming to zoomFactor 3 makes them blurry (like old Google Maps)
When you hit zoomFactor 3, remove all 12 layers from their superlayer and replace them with 3-times-smaller ones (81x81 pixels) - 9 times as many layers in total.
Now that you have 3-times-smaller tile layers, the trick is the .contents property of CALayer. Even if the layer is 81x81, the contents (your image) keep their full resolution: if you set a 243x243 pixel image as the contents of your 81x81 pixel CALayer tile, at zoomFactor 3 it looks like a sharp 243x243 pixel tile!
You can take this to any deeper zoomFactor (3, 9, 27, ...). The tiles get smaller and smaller, but with the original image set as contents, each tile looks crisp and sharp at exactly the zoomFactor it was placed for.
If you want your tiles to look sharp even between zoomFactors, set an image 3 times larger (per side) as .contents. Then your tile stays crisp even shortly before you cross the next zoom level.
One thing you still have to sort out: removing all tiles of a level when crossing a zoom level boundary is not efficient - only update the tiles in the visible rect.
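The level bookkeeping in the steps above boils down to two formulas (sketched in Python for clarity; the 243 px base tiles and zoom factor 3 are the answer's numbers, the function names are mine):

```python
BASE_TILE = 243     # level-0 tile size in points, per the answer
ZOOM_FACTOR = 3

def tile_point_size(level):
    """On-screen (point) size of one tile layer at the given level.
    The bitmap set as .contents stays 243x243, which is why an 81x81
    layer shown at scroll-view zoom 3 still renders pixel-sharp."""
    return BASE_TILE // ZOOM_FACTOR ** level

def sharp_at_zoom(level):
    """Scroll-view zoom factor at which that level's tiles are pixel-exact."""
    return ZOOM_FACTOR ** level

print([tile_point_size(l) for l in range(3)])  # [243, 81, 27]
print([sharp_at_zoom(l) for l in range(3)])    # [1, 3, 9]
```

So when the pinch gesture crosses zoom factor 3, you swap the 243 pt layers for 81 pt layers whose contents are still full-resolution bitmaps.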
Not the nicest of all solutions, but it works, and if someone builds a well-designed library, there is potential for this to become a clean solution.
Related
I'm working on a drawing app where every bezier path is a CAShapeLayer, and I add these sublayers to a superlayer (the UIView's CALayer). Once the points/lines exceed a certain threshold, e.g. 1000 CAShapeLayers, drawing, zoom, and scroll start to lag. Is there a way to optimize this?
A couple of options for trying to use thousands of layers...
First, I ran a test on a 3rd-gen iPad Pro, generating 8100 shape layers. While there was a little bit of "lag" when zoomed out to see the full view, it certainly didn't make it unusable... and I noticed little to no lag when zoomed in.
Second, instead of using shape layers, you could define your own "layer" struct - tracking path, fill, border, etc. Then override draw() and only draw the paths where their bounding box intersects the draw rect.
Third, instead of putting a thousands-of-layers view in your scroll view, consider an image view. Each time you add a new layer, draw that layer into the image view's image. As you zoom it will become fuzzy... so each time the user ends zooming, redraw the image at the new scale. You'll notice a slight lag as the fuzzy image becomes clear, but that only happens at the end of the zoom. You could even alleviate that by using "stepped" zooming - such as 100%, 200%, 400%, 800%.
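The second option's bounding-box culling is straightforward to sketch (Python for illustration; the dict shape and the precomputed `bbox` field are hypothetical names, not an existing API):

```python
def rects_intersect(a, b):
    """True if rects a and b, each given as (x, y, w, h), overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def paths_to_draw(paths, draw_rect):
    """paths: list of dicts, each carrying a precomputed 'bbox'.
    Only paths whose bounding box touches draw_rect need stroking
    inside the overridden draw() call."""
    return [p for p in paths if rects_intersect(p["bbox"], draw_rect)]
```

Caching each path's bounding box once (instead of recomputing it per redraw) keeps the per-frame cost at a cheap rect test per path.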
Edit
I put together an Example app that:
generates 95 paths, using the Glyphs for chars "!" through "~" from Times New Roman font
paths have min 4 points, max 115 points; min 0 curves, max 55 curves
we add 33,805 CAShapeLayer (not text) layers, using 6 fill/stroke color combinations, to a 3508 x 2480 view in a scroll view
On an old iPhone 7 running iOS 13.3 ... sure, it has a "little" lag, but not what I would call unusable.
Looks like this at 1.0 Zoom Scale:
You may want to take a look at it and see if it has the same "lag" you're experiencing - https://github.com/DonMag/ShapeLayersWork
Edit 2 - 8137 layers using your hand-drawn "a" path:
Edit 3
"Chalkduster" font
generate a "grid" to fill the 3508 x 2480 view
cycle through paths
put all paths of the same color on the same layer (so 6 layers)
Here's the output:
It took over 20 seconds for the view to become visible, and, as we would expect, it's completely unusable.
The "Points: / Curves:" lines list the number of points and curves per layer - 4 million points and almost 2 million curves. I really think you're going to need to rethink your whole approach.
As a side note... are you familiar with the Sketch App for Mac? I put some text on some layers, using Chalkduster... converted the layers to outlines (paths instead of text)... and even with a small number of layers Sketch performance gets bad.
I am making a 2D platformer and I decided to use multiple tile map nodes as my backgrounds. Even with a single tile map, I get vertical or horizontal lines that appear and disappear as I move the player around the screen. See the image below:
My tiles are 256x256 and I'm storing them in a tile set .sks file. I'm not exactly sure why I'm getting this or how to get rid of it, and it is quite annoying. I wonder if others experience this as well.
I'm considering not using the tile maps, but I would prefer to use them if I can.
Thanks for any help with this!
I had the same issue and was able to solve it by "extruding" the tiled image a couple of pixels. This provides a little cushion of pixels to fall back on when the floating-point issue occurs, instead of displaying nothing (hence the gap). This video sums it up pretty well:
Unity: extruding tile map images
If you're using TexturePacker to generate your sprite atlas' there is an option to add this automatically without having to do it to your tile images yourself.
Hope that helps!
Similar to the "extruding" suggested by @cheaze, I simply make the tile size in the drawing code a tiny amount larger than the required tile size. This means the assets themselves do not have to be changed.
E.g. if your assets are 256x256 and all of your calculations are based on that, draw the textures at 256.02x256.02 pixels:
[SKSpriteNode spriteNodeWithTexture:texture size:CGSizeMake(256.02, 256.02)];
Adding only .02 pixels per side overlaps your tiles automatically and removes the line glitches, depending on your camera speed and frame rate.
If the problem is really bad, you can even go so far as to add half a pixel (+0.5) or an entire pixel to remove the glitches; the user will not be able to see the difference, since a one-pixel difference on a retina screen is hard to distinguish.
I want to have a vector layer with 16 tiles - 4 by 4 - and fill every tile with an image.
I have a problem with the coordinates: since the image is flat, I don't know how to calculate them, from 0,0 (top left corner) to, for example, 1023,1023 (bottom right corner).
This is the first step towards displaying images at high resolution. I also have a backend that can serve small pieces of the image (almost 1 GiB total size), but I have a problem with the coordinates for each tile.
I'd appreciate any suggestions on how to split this task into a few small steps.
OpenLayers version: 3.5
Sounds like you want a tiled vector layer - something that OL supports natively. I wouldn't manage the tiles myself; just use the built-in functions already provided.
Have a look at this example and see how the tile map server url is formatted. You should be able to do something similar for yourself.
http://openlayers.org/en/v3.5.0/examples/tile-vector.html
If you have image tiles, you don't need a vector layer. You don't have to manually calculate the coordinates of each tile, create a geometry for it, and then load the image - that is not how it should work. :)
If you tell OpenLayers your tiling schema/grid, it will automatically detect which tiles are required for the current view extent, load the tiles and display them at the right position. Take a look at the following examples, which show different techniques to use custom image tiles:
http://openlayers.org/en/master/examples/xyz.html
http://openlayers.org/en/master/examples/xyz-esri-4326-512.html
http://openlayers.org/en/master/examples/xyz-retina.html
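If you do end up computing per-tile pixel extents yourself - e.g. to define a custom tile grid for a 1024x1024 image cut into 4x4 tiles of 256 px, as in the question - the arithmetic looks like this (Python for illustration; the function name is mine; note that OpenLayers extents have the y-axis pointing up, while image rows count downward from the top):

```python
TILE = 256

def tile_extent(col, row, image_height, tile=TILE):
    """Pixel extent of tile (col, row), counted from the image's top-left,
    returned OpenLayers-style as [minx, miny, maxx, maxy] with y flipped
    so that the image's top edge is at y == image_height."""
    minx = col * tile
    maxx = minx + tile
    maxy = image_height - row * tile   # flip: row 0 is the TOP of the image
    miny = maxy - tile
    return [minx, miny, maxx, maxy]
```

So the top-left tile of a 1024 px tall image spans [0, 768, 256, 1024], and the bottom-right tile of the 4x4 grid spans [768, 0, 1024, 256] - but again, defining a tile grid and letting OpenLayers request tiles is preferable to managing these rectangles by hand.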
I'm working on yet another drawing app, with a canvas that is many times bigger than the screen.
I need some advice/direction on how to do that.
Basically what I want is to scroll around this big canvas, drawing only in the visible region.
I was thinking of two approaches:
Have 64x64 (or whatever) "tiles" to draw on, and then on scroll just load new tiles.
Record all user strokes (points) and, on scroll, calculate which ones are in the visible region and draw them, using only a screen-size canvas.
If this matters, I'm using cocos2d for the prototype.
Forget the 2000x2000 limitation; I have an open source project that draws 18000 x 18000 NASA images.
I suggest you break this task into two parts. First, scrolling. As was suggested by CodaFi, when you scroll you will provide CATiledLayers. Each of those will be a CGImageRef that you create - a sub image of your really huge canvas. You can then easily support zooming in and out.
The second part is interacting with the user to draw on or otherwise affect the canvas. When the user stops scrolling, you create an opaque UIView subclass, which you add as a subview of your main view, overlaying the view hosting the CATiledLayers. At the moment you need to show this view, you populate it with the proper information so it can draw that portion of your larger canvas properly (say, a circle at this point in such and such a color, etc.).
You would do your drawing in the drawRect: method of this overlay view. As the user takes actions that change the view, you call setNeedsDisplayInRect: as needed to force iOS to call your drawRect:.
When the user decides to scroll, you need to update your large canvas model with whatever changes the user has made, then remove the opaque overlay, and let the CATiledLayers draw the proper portions of the large image. This transition is probably the most tricky part of the process to avoid visual glitches.
Suppose you have a large array of object definitions for your canvas. When you need to create a CGImageRef for a tile, scan through it looking for overlap between each object's frame and the tile's frame, and draw only those items that are required for that tile.
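That per-tile scan can be sketched as follows (Python for illustration; `objects_for_tile` and the object's `frame` field are my own names, standing in for whatever canvas model you use):

```python
def rects_intersect(a, b):
    """True if rects a and b, each given as (x, y, w, h), overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def objects_for_tile(objects, tile_size, col, row):
    """Objects whose frame overlaps tile (col, row) of the huge canvas;
    only these need to be drawn into that tile's CGImage."""
    tile = (col * tile_size, row * tile_size, tile_size, tile_size)
    return [o for o in objects if rects_intersect(o["frame"], tile)]
```

Keeping the objects spatially sorted (or indexed in a grid) would make this scan cheaper than a linear pass once the canvas model grows large.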
Many mobile devices don't support textures over 2048x2048. So I would recommend:
make your big surface out of large 2048x2048 tiles
draw only the visible part of the currently visible tile to the screen
you will need to draw up to 4 tiles per frame, in case the user has scrolled to a corner where four tiles meet, but make sure you don't draw anything extra when only one tile is visible.
This is probably the most efficient way. 64x64 tiles are really too small and will be inefficient, since there is a large repeated overhead for the "draw tile" calls.
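The "up to 4 tiles, each partially drawn" bookkeeping can be sketched like this (Python for illustration; the function name is mine; 2048 px tiles as recommended above):

```python
TILE = 2048  # large tile size, within the common max-texture limit

def tile_draw_info(cam_x, cam_y, view_w, view_h):
    """For each 2048 px tile touching the viewport whose top-left is
    (cam_x, cam_y), yield (col, row, src_rect) where src_rect is the
    portion of that tile (in tile-local pixels) to copy to screen.
    At most 4 tiles are returned (a four-tile corner), often just 1."""
    c0, r0 = cam_x // TILE, cam_y // TILE
    c1 = (cam_x + view_w - 1) // TILE
    r1 = (cam_y + view_h - 1) // TILE
    out = []
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            x0 = max(cam_x, c * TILE) - c * TILE
            y0 = max(cam_y, r * TILE) - r * TILE
            x1 = min(cam_x + view_w, (c + 1) * TILE) - c * TILE
            y1 = min(cam_y + view_h, (r + 1) * TILE) - r * TILE
            out.append((c, r, (x0, y0, x1 - x0, y1 - y0)))
    return out
```

A 320x480 viewport straddling a vertical tile boundary, for instance, yields two entries whose source widths sum to exactly 320 - nothing outside the visible region gets drawn.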
There is a tiling example in Apple's ScrollViewSuite. It doesn't have anything to do with the drawing part, but it might give you some ideas about how to manage the tiling side of things.
You can use CATiledLayer.
See WWDC2010 session 104
But for cocos2d, it might not work.
Greetings,
I'm working on an application inspired by the "ZoomingPDFViewer" example that comes with the iOS SDK. At some point I found the following bit of code:
// to handle the interaction between CATiledLayer and high resolution
// screens, we need to manually set the tiling view's
// contentScaleFactor to 1.0. (If we omitted this, it would be 2.0
// on high resolution screens, which would cause the CATiledLayer
// to ask us for tiles of the wrong scales.)
pageContentView.contentScaleFactor = 1.0;
I tried to learn more about contentScaleFactor and what it does. After reading everything of Apple's documentation that mentioned it, I searched Google and never found a definite answer to what it actually does.
Here are a few things I'm curious about:
It seems that contentScaleFactor has some kind of effect on the graphics context when a UIView's/CALayer's contents are being drawn. This seems to be relevant to high resolution displays (like the Retina Display). What kind of effect does contentScaleFactor really have and on what?
When using a UIScrollView and setting it up to zoom, let's say, my contentView; all subviews of contentView are being scaled, too. How does this work? Which properties does UIScrollView modify to make even video players become blurry and scale up?
TL;DR: How does UIScrollView's zooming feature work "under the hood"? I want to understand how it works so I can write proper code.
Any hints and explanations are highly appreciated! :)
Coordinates are expressed in points, not pixels. contentScaleFactor defines the relation between points and pixels: if it is 1, points and pixels are the same, but if it is 2 (as on Retina displays) every point corresponds to two pixels per axis.
In normal drawing, working with points means you don't have to worry about resolution: on an iPhone 3GS (scale factor 1) and an iPhone 4 (scale factor 2 and 2x resolution), you can use the same coordinates and drawing code. However, if you are drawing an image (directly, as a texture, ...) using plain point coordinates, you can't rely on the pixel-to-point mapping being 1:1. If you assume it is, every pixel of the image will correspond to 1 point - which is 4 screen pixels when the scale factor is 2 (2 in x, 2 in y) - so images can become a bit blurred.
Working with CATiledLayer you can get unexpected results with scale factor 2. I guess that having the UIView at contentScaleFactor == 2 and the layer at contentsScale == 2 confuses the system and sometimes the scale is multiplied twice. Maybe something similar happens with UIScrollView.
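The point/pixel relationship described above is just a per-axis multiplication (sketched in Python for clarity; the function name is mine):

```python
def backing_pixels(width_pt, height_pt, scale):
    """Pixel dimensions of the backing store for a view of the given
    point size at the given contentScaleFactor: each axis is multiplied
    by the scale, so scale 2 means 4x the total pixels."""
    return (width_pt * scale, height_pt * scale)

print(backing_pixels(100, 100, 2))  # (200, 200)
```

This is why a view drawn with scale-factor-1 content on a Retina screen looks soft: each of its pixels has to cover a 2x2 block of screen pixels.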
Hope this clarifies it a bit.
Apple has a section about this on its "Supporting High-Resolution Screens" page in the iOS dev documentations.
The page says:
Updating Your Custom Drawing Code

When you do any custom drawing in your application, most of the time you should not need to care about the resolution of the underlying screen. The native drawing technologies automatically ensure that the coordinates you specify in the logical coordinate space map correctly to pixels on the underlying screen. Sometimes, however, you might need to know what the current scale factor is in order to render your content correctly. For those situations, UIKit, Core Animation, and other system frameworks provide the help you need to do your drawing correctly.

Creating High-Resolution Bitmap Images Programmatically

If you currently use the UIGraphicsBeginImageContext function to create bitmaps, you may want to adjust your code to take scale factors into account. The UIGraphicsBeginImageContext function always creates images with a scale factor of 1.0. If the underlying device has a high-resolution screen, an image created with this function might not appear as smooth when rendered. To create an image with a scale factor other than 1.0, use UIGraphicsBeginImageContextWithOptions instead. The process for using this function is the same as for the UIGraphicsBeginImageContext function:

Call UIGraphicsBeginImageContextWithOptions to create a bitmap context (with the appropriate scale factor) and push it on the graphics stack.
Use UIKit or Core Graphics routines to draw the content of the image.
Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap’s contents.
Call UIGraphicsEndImageContext to pop the context from the stack.

For example, the following code snippet creates a bitmap that is 200 x 200 pixels. (The number of pixels is determined by multiplying the size of the image by the scale factor.)

UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 2.0);
See it here: Supporting High-Resolution Screens
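The pixel arithmetic in that snippet is simply points times scale (sketched in Python; the function name is mine - and note that passing a scale of 0.0 to the real API means "use the device's main-screen scale", which is why the documentation example passes 2.0 explicitly to get a 200x200 px bitmap from a 100x100 pt size):

```python
def bitmap_pixel_size(width_pt, height_pt, scale):
    """Pixel size of the bitmap created by
    UIGraphicsBeginImageContextWithOptions(size, opaque, scale):
    each point dimension multiplied by the scale factor."""
    return (int(width_pt * scale), int(height_pt * scale))

print(bitmap_pixel_size(100.0, 100.0, 2.0))  # (200, 200)
```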