Tiling images with 3D perspective - iOS

I'm not quite sure whether the title actually describes my problem. I'll explain the challenge first, and then perhaps you can advise whether I'm doing something fundamentally wrong.
I'm creating an iOS application that lets users drag and drop floor and wall tiles onto a room (pre-selected room images only) and see how those tiles will look when laid in the room. I have the tile images and the room image, and I've defined hotspots (where the tiles need to be placed) relative to the room image.
On my room simulator view, I have a UIImageView which holds the room image, and certain parts of that image are made transparent. I also have smaller UIImageViews placed on top of the room image.
When the user drags and drops a tile image onto one of the smaller image views, I create a UIImage using the method
[sourceImage resizableImageWithCapInsets:UIEdgeInsetsZero resizingMode:UIImageResizingModeTile];
which gives me a tiled image and that image is then set as the source of the smaller image view.
This logic works, but I'm not getting the desired effect. If the user drops a tile on the floor, the final image looks as if the floor and the walls are on different planes. Please check out the image below.
The view on the right has two images: the room image with transparency, and a smaller image where you can now see the floor tile.
So my question is: can I tile images with some sort of 3D perspective? I'm afraid I'm not that good with 3D transformations, and 3D matrices look like Greek to me.
I guess the problem is mostly due to the angle at which we are viewing the resulting image, and the floor tile images should be tiled with that in mind. In other words, the tiles at the back of the room should appear smaller than the tiles at the front. I may be wrong.
Any help would be greatly appreciated.

A bit late, but what you need is "CIPerspectiveTile":
https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/filter/ci/CIPerspectiveTile
You'll need to create a CIFilter, set its five input values (the image plus four perspective points), get the resulting CIImage, and then draw that image out. There are various ways to do so. You can do something as simple as:
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef tiledImage = [context createCGImage:myCIImageWithPerspective fromRect:boundingRect];
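For completeness, here's a minimal sketch of how the filter itself could be set up to produce the myCIImageWithPerspective used above. The four corner points are placeholder values for illustration (you'd pass the corners of your floor hotspot, in Core Image's bottom-left-origin coordinates), and tileUIImage stands for whatever UIImage holds your tile:
#import <CoreImage/CoreImage.h>
// Hypothetical setup for CIPerspectiveTile; the corner values are made up for illustration.
CIImage *tileCIImage = [CIImage imageWithCGImage:tileUIImage.CGImage];
CIFilter *perspectiveTile = [CIFilter filterWithName:@"CIPerspectiveTile"];
[perspectiveTile setValue:tileCIImage forKey:kCIInputImageKey];
[perspectiveTile setValue:[CIVector vectorWithX:118 Y:484] forKey:@"inputTopLeft"];
[perspectiveTile setValue:[CIVector vectorWithX:646 Y:507] forKey:@"inputTopRight"];
[perspectiveTile setValue:[CIVector vectorWithX:548 Y:140] forKey:@"inputBottomRight"];
[perspectiveTile setValue:[CIVector vectorWithX:155 Y:153] forKey:@"inputBottomLeft"];
CIImage *myCIImageWithPerspective = perspectiveTile.outputImage;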

Related

OpenLayers 3 - create vector layer with coordinates 0,0 to 1023,1023

I want to have a vector layer with 16 tiles (4 by 4) and fill every tile with an image.
I have a problem with the coordinates: since the image is flat, I don't know how to calculate them from 0,0 (top left corner) to, for example, 1023,1023 (bottom right corner).
This is the first step towards displaying images at high resolution. I also have a backend that can serve small pieces of the image (almost 1 GiB total size), but I have a problem with the coordinates for each tile.
I'd appreciate any suggestions on how to split this task into a few small steps.
OpenLayers version: 3.5
Sounds like you want a tile vector layer - something that OL supports natively. I wouldn't go about managing the tiles myself; just use the built-in functions already provided.
Have a look at this example and see how the tile map server URL is formatted. You should be able to do something similar for yourself.
http://openlayers.org/en/v3.5.0/examples/tile-vector.html
If you have image tiles, you don't need a vector layer. You don't have to manually calculate the coordinates of each tile, create a geometry for it and then load the image. This is not how it should work. :)
If you tell OpenLayers your tiling schema/grid, it will automatically detect which tiles are required for the current view extent, load the tiles and display them at the right position. Take a look at the following examples, which show different techniques to use custom image tiles:
http://openlayers.org/en/master/examples/xyz.html
http://openlayers.org/en/master/examples/xyz-esri-4326-512.html
http://openlayers.org/en/master/examples/xyz-retina.html

iOS Triangular Image view

I'm making a game where, pretty much, when the player (a triangular-shaped rocket) hits an object flying at it (a rock), the game ends. I have everything working well, but my problem is that the rocket is a triangle, yet the image view it's in is a rectangle. So if the edge of the image view touches the rock, the game ends even though the actual rocket didn't touch the object. So basically, how can I make the rock's image view not register the parts of the rocket's image view that are empty? Basically, a triangular-shaped image view.
Thank you for your help. Let me know if you need more info or want to see the code I have for their collision.
You can analytically represent the triangle with 3 points and the rock with a center and a radius, then find and implement an algorithm that performs a hit test between those two shapes. Alternatively, draw the two shapes into a graphics context with appropriate blending and check for overlapping pixels (for instance, draw one as red and the other as green and look for a pixel that is both red and green). You could actually do that with two image views in those colors with 0.5 alpha, added to a third invisible view, but you would need to render that view into an image and then iterate through all its pixels. In either case, only run this check once the corresponding view frames overlap.
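A rough sketch of the analytic version, assuming the rocket is given as three CGPoints in the same coordinate space as the rock's center and radius (the function names here are mine, not from any framework):
// Distance from point p to the segment ab, used for the edge checks below.
static CGFloat DistanceFromPointToSegment(CGPoint p, CGPoint a, CGPoint b) {
    CGFloat dx = b.x - a.x, dy = b.y - a.y;
    CGFloat lengthSquared = dx * dx + dy * dy;
    // Project p onto ab and clamp the projection to the segment.
    CGFloat t = lengthSquared > 0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / lengthSquared : 0;
    t = MAX(0, MIN(1, t));
    CGFloat ex = a.x + t * dx - p.x, ey = a.y + t * dy - p.y;
    return sqrt(ex * ex + ey * ey);
}
static BOOL TriangleIntersectsCircle(CGPoint t1, CGPoint t2, CGPoint t3,
                                     CGPoint center, CGFloat radius) {
    // Hit if the circle's center comes within `radius` of any triangle edge...
    if (DistanceFromPointToSegment(center, t1, t2) <= radius ||
        DistanceFromPointToSegment(center, t2, t3) <= radius ||
        DistanceFromPointToSegment(center, t3, t1) <= radius) {
        return YES;
    }
    // ...or if the circle's center lies inside the triangle (all cross products share a sign).
    CGFloat d1 = (center.x - t2.x) * (t1.y - t2.y) - (t1.x - t2.x) * (center.y - t2.y);
    CGFloat d2 = (center.x - t3.x) * (t2.y - t3.y) - (t2.x - t3.x) * (center.y - t3.y);
    CGFloat d3 = (center.x - t1.x) * (t3.y - t1.y) - (t3.x - t1.x) * (center.y - t1.y);
    BOOL hasNegative = (d1 < 0) || (d2 < 0) || (d3 < 0);
    BOOL hasPositive = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNegative && hasPositive);
}
You would call TriangleIntersectsCircle only once the two view frames already intersect, as suggested above.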

How can I show image in laptop screen, with appropriate image rotation and perspective?

I have been searching the internet for the last two days and have checked many source code samples, but none of them gave the result I want.
The image rotation would have perspective, but there would still be no change in the heights of the left and right sides of the image.
I want to place an image inside the laptop screen.
Please help me out. Thanks.
So you want a 2D perspective drawing of a laptop screen (on an iOS device?) and to put a 2D image on that screen, but with the image transformed so its perspective looks correct on the laptop screen, right?
What you need to do is add an image view on top of your laptop image view. Let's call it laptopScreenImageView.
Then apply a CATransform3D to the laptopScreenImageView's layer.
The trick to get 3D perspective out of a CALayer is to modify the .m34 value of the transform. Typically you set the .m34 value to a very small negative number, somewhere around -1/200 to -1/500 (the denominator in the fraction is the z coordinate of the "eye position" for viewing the perspective image, in pixels, or how many pixels "above" the image the viewer's eye should seem to be. I don't fully understand it, to be honest. I fiddle with the .m34 value until I get something that looks right.)
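As a rough sketch (the frame, image name, angle and eye distance below are made-up values, not anything from your project), the whole thing might look like this:
#import <QuartzCore/QuartzCore.h> // for CATransform3D
// Hypothetical example: an image view placed over the laptop image, then given perspective.
UIImageView *laptopScreenImageView =
    [[UIImageView alloc] initWithFrame:CGRectMake(60, 40, 200, 140)]; // made-up screen rect
laptopScreenImageView.image = [UIImage imageNamed:@"screenshot"];     // made-up image name
[laptopImageView addSubview:laptopScreenImageView];
CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1.0 / 400.0; // the perspective term; fiddle with the denominator
transform = CATransform3DRotate(transform, -20.0 * M_PI / 180.0, 0, 1, 0); // rotate around the y axis
laptopScreenImageView.layer.transform = transform;
Negative angles tilt the view one way and positive angles the other; adjust the angle and the .m34 denominator until the image's edges line up with the laptop screen in your picture.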
Alternately you could try adding a CATransformLayer to your laptop image view's layer, and then adding a CALayer containing your image as a sublayer of the CATransformLayer. I haven't used CATransformLayers before, but the docs say they are supposed to support layers with 3D perspective, giving you the same effect as modifying the .m34 component of a layer's transform.

CGContextDrawImage Blending

I am drawing, or should I say "stamping", an image using the CGContextDrawImage method in Objective-C. The image gets drawn at points that are determined by touch movements. Basically I'm stamping an image to create a "brush" effect. It looks something like this:
I am happy with the results; however, when the touch movement slows down, the image gets drawn on top of itself and ruins the alpha value I want. Is there a blend technique in which the opacity of the stamps would not stack on top of each other? Or should I just look at spacing my points out so that they are not so close together when the movement slows down?
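For reference, here's a rough sketch of the kind of stamping loop I mean (simplified; canvasImage and brushImage are just stand-in property names in a UIView subclass whose drawRect: draws canvasImage):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [self.canvasImage drawInRect:self.bounds]; // redraw the strokes accumulated so far
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetAlpha(context, 0.5); // the per-stamp alpha that stacks up when stamps overlap
    CGRect stampRect = CGRectMake(point.x - 16, point.y - 16, 32, 32);
    // Note: CGContextDrawImage uses Core Graphics' flipped coordinate space,
    // so an asymmetric brush image would come out upside down here.
    CGContextDrawImage(context, stampRect, self.brushImage.CGImage);
    self.canvasImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay];
}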
Thanks in advance.

Efficient custom tiled maps with custom zoom levels and zoom factors

I have to implement a very custom tiled map layer/view on iOS 5/6 (iPhone), which loads tiles as images and/or data (JSON) from a server. It is like Google Maps, but very specific in certain respects, so I cannot easily use a solution such as:
Google Maps or
route-me (used by MapBox)
CATiledLayer in combination with UIScrollView
The thing is: none of the solutions out there really help me, because of my specific requirements. If you think there is a suitable solution, please tell me!
If not:
Help me find the best possible solution for my case!
"But why can I not use these beautiful solutions?"
There are a few constraints that need to be known:
We only use 3 zoom levels (0, 1 and 2)
Every tile has 9 subtiles in the next zoom level (= zoom factor 3), not 4 subtiles (= zoom factor 2) as most other kits use
The first layer has an initial size of 768x1024 pixels (in points it is half of that)
The second layer is three times as wide and high (zoom factor 3!) -> 2304x3072
The third layer is again three times as wide and high as the 2nd -> 6912x9216
Each tile that comes from the server is 256x256 pixels
So every sublayer has 9 times the number of tiles (12 on the 1st layer, 108 on the 2nd, 972 on the 3rd)
Every tile has a background image (ca. 6 KB in size, loaded as an image from the server) and foreground information (loaded as JSON for every tile from the server, 10-15 KB in size)
-> the foreground JSON contains either an overlay image (such as traffic in Google Maps) or local tile information to be drawn into the local tile coordinate space (like annotations, but per tile)
I want to cache all background tiles on disk, as they never change
I also want to cache the overlay tile images/information for each tile for a certain amount of time, after which they should be reloaded
Zooming should work with pinching and double-tapping
A few of my considerations:
Caching is not the problem; I do it via Core Data or something similar
I thought of a UIScrollView for smooth scrolling
I'd like to use pinch zooming, so every time I cross into the next zoom level, I have to draw that zoom level's tiles
Content should only be drawn in the visible area (for me, on the iPhone 5, 320x500)
Tiles that are not visible should be deleted to stay memory efficient. But they should not be deleted instantly, only once a tile is a certain number of pixels/points away from the visible center. There should be a "display cache" so that tiles which were just loaded and displayed can be shown again instantly. Or is there a better technique out there?
Since I know which tiles are visible, I can load the background instantly from disk or asynchronously from the server, and the same goes for the tile JSON. I then extract the JSON information for the tile ("is the overlay a direct image, or information such as a city name that I have to draw into the tile?") and draw or load the overlay (from DB/disk or server)
Is UIScrollView efficient enough to handle the maximum size of the tile view?
Should I use a CALayer as a sublayer and draw into it? Should I use Quartz to draw directly onto one big "canvas"?
Now it's your turn!
Apple has an example that shows zooming very deeply into a single image. They use a CATiledLayer for this.
The example can be found here: http://developer.apple.com/library/ios/#samplecode/PhotoScroller/Introduction/Intro.html
The example works with subtiles that are stored on the phone, but maybe it can be adapted to your problem.
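For reference, here is a minimal sketch of the usual CATiledLayer setup (my own illustration, not code from the PhotoScroller sample): a UIView backed by a CATiledLayer, which you would then place as the zoomable content view of a UIScrollView.
// A UIView subclass whose backing layer is a CATiledLayer.
@interface TiledImageView : UIView
@end
@implementation TiledImageView
+ (Class)layerClass {
    return [CATiledLayer class];
}
- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256);
        tiledLayer.levelsOfDetail = 3;     // zoomed-out levels of detail
        tiledLayer.levelsOfDetailBias = 2; // extra levels of detail when zooming in
    }
    return self;
}
- (void)drawRect:(CGRect)rect {
    // Called once per tile, possibly on background threads; draw whatever belongs in `rect`.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat scale = CGContextGetCTM(context).a; // current level of detail
    // ... look up and draw the tile image for `rect` at `scale` here ...
}
@end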
I have used my own TiledLayer implementation for this type of non-standard tiling issue that CATiledLayer does not support (and because CATiledLayer still has some unsolved bugs).
That implementation is now available on Github.
Fab1n: this is the same implementation that I already sent you by e-mail.
I came up with a very neat solution (a custom TiledLayer implementation):
I no longer use a CATiledLayer as the contentView of the scroll view. Instead I added a CALayer subclass to it and added my tiles to it as sublayers. Each tile contains the tile bitmap as its contents. When zooming in with the scroll view, I switch tiles manually based on the current zoom level. That works perfectly!
If somebody wants to know the details of the implementation, no problem: write a comment and I'll post them here.
EDIT:
Details about the implementation:
The implementation is in fact really easy:
Add a UIScrollView to your view / view controller's view
Add a UIView (from now on referred to as tileView) as the content view of the scroll view, with the same size as the scroll view (this is the completely zoomed-out mode, factor 1)
Activate zooming (I had a minimum zoom factor of 1 and a maximum of 9)
If you now place an image into the tileView, zooming in to level 9 will give a very unsharp, pixelated image - that's what we want for now (I'll explain why)
If you want crisp, clear images when you zoom in, you should add CALayer instances (tiles) to tileView with addSublayer:
Assume a zoom factor of 3 (as I had), meaning 1 tile in layer 0 (zoom factor 1) will consist of 3x3 = 9 tiles in layer 1 (zoom factor 3)
In layer 0, say you put 3x4 tiles of 243x243 pixels (divisible by 3 three times) as sublayers of tileView. You set your same-size tile images as the CALayers' contents. Zooming to zoom factor 3 makes them blurry (like old Google Maps)
When you hit zoom factor 3, remove all 12 layers from their superlayer and replace them with ones 3 times smaller (81x81 pixels). You get 9 times as many layers in total.
Now that you have tile layers 3 times smaller, the trick is the .contents property of CALayer. Even if the layer is 81x81, the contents (your image) are still full resolution, meaning if you set a 243x243 pixel image as the contents of your 81x81 pixel CALayer tile, at zoom factor 3 it looks like a 243x243 pixel tile!
You can carry this on to any deeper zoom factor (3, 9, 12). The tiles get smaller and smaller, but by setting the original image as contents, they look crisp and sharp at the exact zoom factor they are placed in.
If you want your tiles to look sharp even between zoom factors, you have to set an image 3 times larger as .contents. Then your tile is crisp and sharp even shortly before you pass the next zoom level.
The one thing you have to figure out: removing all tiles of a level when passing the zoom factor threshold is not efficient; you only have to update the tiles in the visible rect (see the sketch below).
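As an illustration only (my own sketch, not the exact implementation; tileView and the tile-loading helper are assumed names), the tile-swapping step could look roughly like this:
// Hypothetical sketch of the tile swap described above: replace the current
// CALayer tiles with 3x smaller ones whose contents stay at full resolution.
- (void)rebuildTilesForZoomLevel:(NSInteger)zoomLevel
{
    // Drop the tiles of the previous zoom level.
    for (CALayer *oldTile in [self.tileView.layer.sublayers copy]) {
        [oldTile removeFromSuperlayer];
    }
    // At level 0 a tile is 243x243 points in tileView space; each level divides that by 3.
    CGFloat tileSize = 243.0 / pow(3.0, zoomLevel);
    NSInteger columns = ceil(self.tileView.bounds.size.width / tileSize);
    NSInteger rows    = ceil(self.tileView.bounds.size.height / tileSize);
    for (NSInteger row = 0; row < rows; row++) {
        for (NSInteger col = 0; col < columns; col++) {
            CALayer *tile = [CALayer layer];
            tile.frame = CGRectMake(col * tileSize, row * tileSize, tileSize, tileSize);
            // The contents remain a full-resolution (243x243 or larger) image, so the
            // tile still looks sharp when the scroll view scales it up at this level.
            UIImage *image = [self tileImageForZoomLevel:zoomLevel column:col row:row]; // hypothetical loader
            tile.contents = (__bridge id)image.CGImage;
            [self.tileView.layer addSublayer:tile];
        }
    }
}
You would call something like this from scrollViewDidZoom: whenever the zoom scale crosses one of the zoom-factor thresholds (1, 3, 9), and, as noted above, a real implementation would restrict the loops to the tiles intersecting the visible rect.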
Not the nicest of all solutions, but it works, and if someone turns it into a well-designed library there is potential for this to become a clean solution.
