BlackBerry: efficient scrollable horizontal lists

I have a group of images which I would like to display in a horizontal list; the size of the list will vary depending on what option the user selects.
However, I am unsure of the best way to implement the list if I have a large number of images to supply to it, thereby generating many fields that need to be drawn.
My idea to make the list efficient:
- Store n images in a circular array.
- Display the first 3 images in 3 views on the screen that are visible to the user (e.g. <-- img1 img2 img3 -->).
- Keep a record of what's on display.
- When a user scrolls left or right, the next/previous image in the array is displayed.
E.g. scrolling right once will give me ( <-- img2 img3 img4 --> )
E.g. scrolling left 3 times from the above point will give me ( <-- img(n-1) img(n) img1 --> )
and so on... (the index arithmetic is sketched below)
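Purely as an illustration of the circular-window bookkeeping described above (the question targets BlackBerry Java 5.0, so this Swift snippet is only a sketch of the index arithmetic, not a BlackBerry API):

    // Sketch of the circular-window idea: n images, a window of 3 visible slots,
    // wrapping around the ends of the array.
    struct CircularWindow {
        let count: Int   // n, the total number of images
        var start = 0    // index of the left-most visible image

        // The 3 visible slots, wrapping around the end of the array.
        var visibleIndices: [Int] { (0..<3).map { (start + $0) % count } }

        mutating func scrollRight() { start = (start + 1) % count }
        mutating func scrollLeft()  { start = (start - 1 + count) % count }
    }

    var window = CircularWindow(count: 10)        // n = 10
    window.scrollRight()                          // <-- img2 img3 img4 -->  ([1, 2, 3])
    (0..<3).forEach { _ in window.scrollLeft() }
    print(window.visibleIndices)                  // [8, 9, 0] -> img(n-1) img(n) img1
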
What would be the best way to do the above, or are there any better ways?
I would be grateful if someone could direct me to the relevant documentation and the API methods that I should use as well.
Is there already a method in the API that can recycle views in a similar fashion...?
I'm using API version 5.0.
I would be grateful for any help.
Thanks in advance.

Have you tried PictureScrollField?
A slider component that draws a row of images which can be scrolled from side-to-side using the track-ball or touch gestures. The images slide horizontally to align the focus image in a vertically centered position. The images decelerate as they approach their new position to give an animated effect. There are also several configurable effects to highlight the focus image.
All images are allocated the same amount of space on the slider (as defined by the constructor's imageWidth and imageHeight parameters). Images can differ from that size, in which case the scroll field behaves as follows: images are NOT resized. If they are larger than the allocated drawing area they are center aligned and cropped to fit the allocated area. If they are smaller than the allocated drawing area they are center aligned in the allocated area.
Since: BlackBerry API 5.0.0

Related

Trim transparency of a UIImage

I was wondering what would be the best way to trim the "canvas" of a UIImage (pretty much like any image editor out there allows).
Now, the previous example is not a single UIImage. It's actually 2 UIViews. So clipping the superview against the blue box would do the trick, but I guess I am looking for the best possible way to do this, given that there could be several blue boxes in the "canvas".
Is there a faster way than going through every pixel?
Thanks!
Thinking about it algorithmically, I would say no. You need to find the pixel that extends furthest to the left, right, top and bottom. Unless you look at every pixel from each direction you could miss non-transparent pixels.
You could speed things up if you figure out how to map your image into memory and then index into memory directly rather than using a high level function that fetches pixels. I would suggest searching from the top down (which would be sequential memory accesses) until you find a non-clear pixel. Then search from the end of the image backwards, which would give you the bottom-most pixel.
You would then want to limit your search from each side to only look starting at the first non-transparent pixel from the top and ending at the last non-transparent pixel on the bottom.
For anything other than a very large image this should take a fraction of a second.
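A minimal sketch of that scan, assuming the UIImage can be redrawn into a premultiplied RGBA bitmap; the function name opaqueBoundingBox is made up for illustration, and the returned rect is in the bitmap's pixel coordinate space:

    import UIKit

    // Render the image into an RGBA buffer, find the top-most and bottom-most
    // rows containing a non-transparent pixel, then scan only those rows for
    // the left/right extremes.
    func opaqueBoundingBox(of image: UIImage) -> CGRect? {
        guard let cg = image.cgImage else { return nil }
        let width = cg.width, height = cg.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)

        let rendered = pixels.withUnsafeMutableBytes { buffer -> Bool in
            guard let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        guard rendered else { return nil }

        func alpha(_ x: Int, _ y: Int) -> UInt8 { pixels[(y * width + x) * 4 + 3] }
        func rowHasContent(_ y: Int) -> Bool { (0..<width).contains { alpha($0, y) != 0 } }

        guard let top = (0..<height).first(where: rowHasContent),
              let bottom = (0..<height).last(where: rowHasContent) else { return nil } // fully transparent

        // Limit the horizontal scan to the rows between top and bottom.
        var left = width - 1, right = 0
        for y in top...bottom {
            for x in 0..<width where alpha(x, y) != 0 {
                left = min(left, x); right = max(right, x)
            }
        }
        return CGRect(x: left, y: top, width: right - left + 1, height: bottom - top + 1)
    }
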
OK, I was being dumb. The union of the subviews is all I really needed, so it's just a simple loop over the subviews, doing a CGRect union against their frames.
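In code that self-answer boils down to roughly the following, where canvasView is a stand-in name for the superview holding the blue boxes:

    // One pass over the subviews, accumulating a CGRect union of their frames.
    // CGRect.null is the identity element for union().
    let contentBounds = canvasView.subviews.reduce(CGRect.null) { $0.union($1.frame) }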

Strategy for scrolling a small area of content in SpriteKit

I'm creating an adventure game in Swift and want to allow the player to view their inventory. I have a very small set of items you can acquire (only about 25 items for your inventory) and I'd like to display about 5-6 at a time in a rectangle. My thought was that the player can scroll through them by swiping horizontally, which will take them through the whole list, only ever showing 5-6 at a time across. The entire area is roughly 1/4 of the size of the screen.
I was looking at something like this https://github.com/crashoverride777/Swift-SpriteKit-UIScrollView-Helper but when I tried it, it seems to be suited to a giant area (the entire screen) and the items then scroll off the screen when you scroll. I played with the content size thinking of it as a "viewport" but didn't have any luck.
In my case, I want the items to scroll only within the confines of a 300 x 150 rectangle or so (so the items do not go beyond the width of the box containing them).
I couldn't really figure out a reliable way of doing this and wanted to ask someone if they've done something similar and how they achieved it. What's a good strategy for this? Perhaps a camera + pan using SKCameraNode?
Thanks so much!
I think I can do it using a cropping mask - an initial test worked. Let me post something once I figure it out, but I wanted to let anyone know in case they were wondering.
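A rough sketch of that cropping-mask approach (all names and sizes here are placeholders, not from the question): an SKCropNode whose mask is the 300x150 inventory box, with the item sprites on a child "scroll layer" that is moved horizontally and clipped at the box edges.

    import SpriteKit

    // Build a clipped inventory strip: items beyond the 300x150 mask are not drawn.
    func makeInventoryNode(itemTextures: [SKTexture]) -> SKCropNode {
        let crop = SKCropNode()
        crop.maskNode = SKSpriteNode(color: .white, size: CGSize(width: 300, height: 150))

        let scrollLayer = SKNode()          // drag this node to scroll the items
        scrollLayer.name = "inventoryScrollLayer"
        crop.addChild(scrollLayer)

        let spacing: CGFloat = 60
        for (index, texture) in itemTextures.enumerated() {
            let item = SKSpriteNode(texture: texture, size: CGSize(width: 50, height: 50))
            item.position = CGPoint(x: CGFloat(index) * spacing - 120, y: 0)
            scrollLayer.addChild(item)
        }
        return crop
    }

    // In the scene, a horizontal swipe (touchesMoved or a UIPanGestureRecognizer on
    // the SKView) would adjust scrollLayer.position.x, clamped so the first and
    // last items remain reachable.
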

Efficient custom tiled maps with custom zoom levels and zoom factors

I have to implement a very custom tiled map layer/view on iOS5/6 (iPhone), which loads tiles as Images and/or data (JSON) from a server. It is like google maps, but at certain points very specific, so I cannot easily use a solution such as:
google maps or
route-me (used by MapBox)
CATiledLayer in combination with UIScrollView
The thing is: none of the solutions out there really help me, because of my specific requirements. If you think there is a suitable solution, please tell me!!!
If not:
Help me to find the best possible solution for my case!!!
"But why can I not use these beautiful solutions?"
There are a few limits, that have to be known:
We only use 3 zoom-levels (0,1 and 2)
Every tile has a number of 9 subtiles in the next zoomlevel (= zoom factor 3) (not like most of the other kits do with 4 subtiles = zoomfactor 2)
The first layer has an initial size of 768*1024 (speaking in pixels; in points it is half that).
The second layer is three times wider and higher (zoomfactor 3!!!) -> 2304*3072
The third layer is again three times wider and higher than the 2nd -> 6912*9216
Each tile that comes from the server is 256x256 pixels
So every sublayer has 9 times the number of tiles (12 on the 1st layer, 108 on the 2nd, 972 on the 3rd)
Every tile has a background image (ca. 6KB in size) (loaded as image from the server) and foreground-information (loaded as JSON for every tile from the server - 10-15KB in size)
-> the foreground information JSON contains either an overlay image (such as traffic in google) or local tile information to be drawn into the local tile coordinate space (like annotations, but per tile)
I want to cache the whole background-tiles on disk, as they never change
I also want to cache the overlay-tile-images/overlay-tile-information for each tile for a certain amount of time, after which it should be reloaded
Zooming should be with pinching and double-tapping
A few of my considerations:
The caching is not the problem. I do it via CoreData or similar
I thought of a UIScrollView, to show smooth scrolling
I'd like to use pinching, so every time I break through the next zoomlevel, I have to draw the next zoom-level tiles
The content should only be drawn in the visible area (for me on iPhone 5 320x500)
Tiles that are not visible should be deleted, to be memory efficient. But they should not be deleted instantly, only once a tile is a certain number of pixels/points away from the visible center. There should be a "display cache" to instantaneously show tiles which were just loaded and displayed. Or is there a better technique out there?
I can load the background instantly from disk or from the server asynchronously, as I know which tiles are visible. The same goes for the tile JSON. Then I extract the JSON information for the tile ("is the overlay a direct image, or information such as a city name which I have to draw into the tile") and draw or load the overlay (from DB/disk or server)
Is UIScrollView efficient enough to handle the max size of the tile view?
Should I use a CALayer as sublayer to draw into it? Should I use Quartz to draw directly on a big "canvas"?
Now it's your turn!!!
Apple has an example that shows zooming into a single image very deep. They use a CATiledLayer for this.
The example can be found here: http://developer.apple.com/library/ios/#samplecode/PhotoScroller/Introduction/Intro.html
The example works with subtiles that are stored on the phone but maybe it can be adapted to your problem.
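For reference, the PhotoScroller-style setup looks roughly like the sketch below: a UIView backed by a CATiledLayer, used as the zooming content view of a UIScrollView. Note that CATiledLayer's levels of detail halve per level (powers of two), so it does not directly map onto the factor-3 scheme in the question, which is why the answer below ends up swapping tiles manually.

    import UIKit

    // A view whose backing layer is a CATiledLayer; the layer asks draw(_:) for
    // each visible tile as the user scrolls/zooms.
    final class TiledContentView: UIView {
        override class var layerClass: AnyClass { CATiledLayer.self }

        override init(frame: CGRect) {
            super.init(frame: frame)
            let tiled = layer as! CATiledLayer
            tiled.tileSize = CGSize(width: 256, height: 256)
            tiled.levelsOfDetail = 3
            tiled.levelsOfDetailBias = 2     // extra detail while zooming in
        }
        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func draw(_ rect: CGRect) {
            // CATiledLayer calls this once per visible tile, off the main thread.
            // The current level of detail can be derived from the context's CTM;
            // look up and draw the tile image covering `rect`. A flat fill stands
            // in here for the real tile drawing.
            UIColor(white: 0.9, alpha: 1).setFill()
            UIRectFill(rect)
        }
    }
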
I have used my own TiledLayer implementation for these types of non-standard tiling issues that CATiledLayer does not support (and because CATiledLayer still has some unsolved bugs).
That implementation is now available on Github.
Fab1n: this is the same implementation that I already sent you by e-mail.
I came up with a very neat and nice solution (custom TiledLayer implementation):
I don't use CATiledLayer as the contentView of the scrollView anymore. Instead I added a CALayer subclass to it and added my tiles to it as sublayers. Each tile contains the tile bitmap as its contents. When zooming in with the scrollview I switch tiles manually based on the current zoomlevel. That's perfect!
If somebody wants to know details of the implementation -> no problem, write a comment and I post it here.
EDIT:
Details about the implementation:
The implementation in fact is really easy:
Add a UIScrollView to your View/Controller View
Add a UIView (from now on referred to as tileView) as contentView to the ScrollView - same size as ScrollView size (this is completely zoomed out mode - factor 1)
Activate zooming (I had minimum zoom factor 1 and max. 9)
When you now place an image into the tileView, zooming into level 9 will give us a very unsharp, pixelated image - that's what we want (I'll explain why)
If you like crisp clear images when you zoom in, you should add CALayer instances (tiles) with addSublayer: to tileView
Assuming you have a zoomFactor of 3 (I had - meaning 1 Tile in layer 0 - zoomFactor 1 will consist of 3x3 = 9 tiles in layer 1 - zoomFactor 3)
In layer 0, say you put 3x4 tiles of 243x243 pixels (divisible by 3 three times) as sublayers to tileView. You put your same-size tile images as the CALayers' contents. Zooming to zoomFactor 3 makes them blurry (like old google maps)
When you hit zoomFactor 3, remove all 12 layers from the superlayer and replace them with 3-times-smaller ones (81x81 pixels). You get 9 times more layers in total.
Now that you have 3 times smaller tile layers, the trick is the .contents property of CALayer. Even if the layer is 81x81, the contents (your image) are still full resolution, meaning if you put a 243x243 pixels image as contents of your 81x81 pixels CALayer tile, on zoomFactor 3 it looks like a 243x243 pixels tile!!!
You can go deeper like that at any zoomFactor (3, 9, ...). The tiles get smaller and smaller, but by setting the original image as contents, the tile will look crisp and sharp at the exact zoomFactor it is placed in.
If you want your tiles to look sharp even between the zoomFactors, you have to set a 3-times-larger image as .contents. Then your tile is crisp and sharp even shortly before you pass the next zoomLevel.
The one thing you still have to figure out: removing all tiles on a layer when passing the zoomLevel threshold is not efficient. You only have to update the tiles in the visible rect.
Not the nicest of all solutions, but it works, and if someone turns it into a well-designed library there is potential for this to be a clean solution.
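A condensed sketch of those steps. TileSource and tileImage(col:row:level:) are made-up names standing in for the disk/server tile cache, and the zoom thresholds follow the 1/3/9 levels described above.

    import UIKit

    // Tiles are plain CALayers whose .contents hold full-resolution images,
    // so a small layer still looks sharp when the scroll view zooms in.
    protocol TileSource {
        func tileImage(col: Int, row: Int, level: Int) -> UIImage?
    }

    final class TiledMapController: NSObject, UIScrollViewDelegate {
        private let tileView = UIView(frame: CGRect(x: 0, y: 0, width: 729, height: 972)) // 3x4 tiles of 243 pt
        private let tileSource: TileSource
        private var currentLevel = 0

        init(scrollView: UIScrollView, tileSource: TileSource) {
            self.tileSource = tileSource
            super.init()
            scrollView.addSubview(tileView)
            scrollView.contentSize = tileView.bounds.size
            scrollView.minimumZoomScale = 1
            scrollView.maximumZoomScale = 9
            scrollView.delegate = self
            layoutTiles(level: 0)
        }

        func viewForZooming(in scrollView: UIScrollView) -> UIView? { tileView }

        func scrollViewDidZoom(_ scrollView: UIScrollView) {
            // Level 0 below zoom factor 3, level 1 from 3 up to 9, level 2 at 9.
            let level = scrollView.zoomScale >= 9 ? 2 : (scrollView.zoomScale >= 3 ? 1 : 0)
            if level != currentLevel {
                currentLevel = level
                layoutTiles(level: level)   // better: only swap the tiles in the visible rect
            }
        }

        private func layoutTiles(level: Int) {
            tileView.layer.sublayers?.forEach { $0.removeFromSuperlayer() }
            let sides: [CGFloat] = [243, 81, 27]          // tile edge per level, in points
            let side = sides[level]
            let cols = Int(tileView.bounds.width / side)
            let rows = Int(tileView.bounds.height / side)
            for row in 0..<rows {
                for col in 0..<cols {
                    let tile = CALayer()
                    tile.frame = CGRect(x: CGFloat(col) * side, y: CGFloat(row) * side,
                                        width: side, height: side)
                    // Full-resolution image as .contents keeps the small tile sharp at this level.
                    tile.contents = tileSource.tileImage(col: col, row: row, level: level)?.cgImage
                    tileView.layer.addSublayer(tile)
                }
            }
        }
    }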

Checking for overlapping images with a hole in an image

I have two image views. They are "puzzle pieces" and I want to test if one fits inside the other - not just that the frames overlap. I guess it's a CGRect thing... but it seems like those only test the outer boundaries. Any ideas would be appreciated. Thanks.
Just brainstorming here... Maybe this will get you thinking of something that will work for you. If the images do not overlap, then drawing image A on top of image B will result in the same image as drawing image B on top of image A. If they overlap, that will result in different images. You could do something like draw image A, then B. Create a checksum of the result, draw A again, and checksum that. If the checksums match, the puzzle piece fits.
If you have a 1-bit mask that represents each image, then ORing them together and XORing them together will have the same result if they don't overlap and different results if they do.
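A tiny sketch of that mask test, assuming both masks are same-length bit arrays packed into bytes (the representation is an assumption for illustration):

    // OR and XOR of two masks differ exactly where a bit is set in both,
    // i.e. where the pieces overlap.
    func masksOverlap(_ a: [UInt8], _ b: [UInt8]) -> Bool {
        precondition(a.count == b.count, "masks must cover the same area")
        return zip(a, b).contains { pair in (pair.0 | pair.1) != (pair.0 ^ pair.1) } // same as (pair.0 & pair.1) != 0
    }
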
Do you know the correct order of pieces beforehand? Maybe it's better to assign a tag to each UIImageView which represents the image's index number. Then you just create a kind of mesh and check in which cell the piece was placed. If the cell number and the UIImageView tag match, then this is the right place.
If you have only two images and one must fit into a specific area in the other, you could store the frame of this hole and check if the piece is placed somewhere around the centre of this frame. It'll be more user-friendly, because when you're checking pixels or bit masks you require the user to be extremely precise, or your comparison code has to allow some shifts and will be very complicated.
But if you don't want to hardcode the hole frame you could calculate it dynamically (just find the transparent areas in the image). Either way, this solution will be more efficient than checking bit matches on the fly.
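A sketch of that centre-of-hole check; holeFrame, the tolerance, and the shared coordinate space are assumptions for illustration:

    import UIKit

    // Accept the drop when the piece's centre lands within a tolerance of the
    // hole's centre, so the player does not have to be pixel-perfect. Assumes
    // piece.center and holeFrame are expressed in the same superview's coordinates.
    func pieceFits(_ piece: UIImageView, in holeFrame: CGRect, tolerance: CGFloat = 20) -> Bool {
        return abs(piece.center.x - holeFrame.midX) <= tolerance
            && abs(piece.center.y - holeFrame.midY) <= tolerance
    }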

Painting app with huge canvas

I'm working on yet another drawing app with a canvas that is many times bigger than the screen.
I need some advice/direction on how to do that.
Basically what i want is to scroll around this big canvas, drawing only in visible region.
I was thinking of two approaches:
Have 64x64 (or whatever) "tiles" to draw on, and then on scroll just load new tiles.
Record all user strokes (points) and on scroll calculate which are in specified region, and draw them, using only screen-size canvas.
If this matters, I'm using cocos2d for the prototype.
Forget the 2000x200 limitation, I have an open source project that draws 18000 x 18000 NASA images.
I suggest you break this task into two parts. First, scrolling. As was suggested by CodaFi, when you scroll you will provide CATiledLayers. Each of those will be a CGImageRef that you create - a sub image of your really huge canvas. You can then easily support zooming in and out.
The second part is interacting with the user to draw or otherwise effect the canvas. When the user stops scrolling, you then create an opaque UIView subclass, which you add as a subview to your main view, overlaying the view hosting the CATiledLayers. At the moment you need to show this view, you populate it with the proper information so it can draw that portion of your larger canvas properly (say a circle at this point of such and such a color, etc).
You would do your drawing in the drawRect: method of this overlay view. So as the user takes action that changes the view, you call setNeedsDisplayInRect: as needed to force iOS to call your drawRect:.
When the user decides to scroll, you need to update your large canvas model with whatever changes the user has made, then remove the opaque overlay, and let the CATiledLayers draw the proper portions of the large image. This transition is probably the most tricky part of the process to avoid visual glitches.
Supposing you have a large array of object definitions used for your canvas. When you need to create a CGImageRef for a tile, you scan through it looking for overlap between the object's frame and the tile's frame, and only then draw those items that are required for that tile.
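A sketch of such an overlay view. Stroke is a made-up model standing in for "the proper information" from the larger canvas; the view redraws only the dirty rect and is invalidated with UIView's setNeedsDisplayInRect: (setNeedsDisplay(_:) in Swift).

    import UIKit

    struct Stroke {
        let path: CGPath
        let color: UIColor
        let width: CGFloat
    }

    final class CanvasOverlayView: UIView {
        var strokes: [Stroke] = []      // strokes in this overlay's coordinate space

        func add(_ stroke: Stroke) {
            strokes.append(stroke)
            // Invalidate just the area the new stroke touches.
            setNeedsDisplay(stroke.path.boundingBox.insetBy(dx: -stroke.width, dy: -stroke.width))
        }

        override func draw(_ rect: CGRect) {
            guard let ctx = UIGraphicsGetCurrentContext() else { return }
            // Draw only the strokes that intersect the dirty rect.
            for stroke in strokes where stroke.path.boundingBox.intersects(rect) {
                ctx.setStrokeColor(stroke.color.cgColor)
                ctx.setLineWidth(stroke.width)
                ctx.addPath(stroke.path)
                ctx.strokePath()
            }
        }
    }
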
Many mobile devices don't support textures over 2048x2048. So I would recommend:
make your big surface out of large 2048x2048 tiles
draw only the visible part of the currently visible tile to the screen
you will need to draw up to 4 tiles per frame, in case the user has scrolled to a corner of four tiles, but make sure you don't draw anything extra if there is only one visible tile.
This is probably the most efficient way. 64x64 tiles are really too small, and will be inefficient since there will be a large repeated overhead for the "draw tile" calls.
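The visible-tile bookkeeping from that answer can be sketched like this (CoreGraphics types are used purely for illustration; the answer itself is about cocos2d, and the function name is made up):

    import CoreGraphics

    // Given the visible rect in canvas coordinates and a 2048-point tile edge,
    // return the tile columns/rows on screen: 1 tile when the view sits inside a
    // single tile, up to 4 when it straddles a tile corner.
    func visibleTiles(in visibleRect: CGRect, tileSize: CGFloat = 2048) -> [(col: Int, row: Int)] {
        let firstCol = Int((visibleRect.minX / tileSize).rounded(.down))
        let lastCol  = Int(((visibleRect.maxX - 1) / tileSize).rounded(.down))
        let firstRow = Int((visibleRect.minY / tileSize).rounded(.down))
        let lastRow  = Int(((visibleRect.maxY - 1) / tileSize).rounded(.down))

        var tiles: [(col: Int, row: Int)] = []
        for row in firstRow...lastRow {
            for col in firstCol...lastCol {
                tiles.append((col: col, row: row))
            }
        }
        return tiles
    }

    // Example: a 320x480 viewport straddling the corner of four 2048-point tiles.
    // visibleTiles(in: CGRect(x: 2000, y: 2000, width: 320, height: 480))
    // -> [(0, 0), (1, 0), (0, 1), (1, 1)]
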
There is a tiling example in Apple's ScrollViewSuite. It doesn't have anything to do with the drawing part, but it might give you some ideas about how to manage the tile part of things.
You can use CATiledLayer.
See WWDC 2010 session 104.
But for cocos2d, it might not work.
