I've just started using sprite sheets in Corona, so this might be a basic question, but I can't find any info regarding it.
Anyway, is it possible to scale the images inside the sprite sheet? For example, frame 1 (image 1) would be 100x25, frame 2 (image 2) would be 100x50, and frame 3 (image 3) would be 150x50.
You can try the sprite.newSpriteMultiSet() function.
Here is the link:
http://docs.coronalabs.com/api/library/sprite/newSpriteMultiSet.html
I have the following elements:
Canvas1 (1 rectangle/material - initial instructions),
Canvas2 (1 rectangle/material - core image & 1 audio with its Playback controller),
Canvas3 (1 rectangle/material - trivia image).
The effect I want: when it opens, Canvas1 is visible; on the first tap, Canvas2 appears and the audio plays; on the second tap, Canvas3 appears, and that's the end.
The problem: It is not working.
What happens?
Opens as planned (Canvas1).
On the first tap, as planned (Canvas2 & audio).
On the second tap, it keeps Canvas2 in the back and shows Canvas3.
If I keep tapping, it keeps switching between Canvas2 and Canvas3.
The new patches structure
This is an image of the patches editor.
I solved the problem by simplifying the solution: I built an animation with the images (the former Canvas1, 2 and 3) inside a rectangle and added the audio to a tap trigger. Now it is working. Nevertheless, if somebody has an answer to why I had the problem, I would very much appreciate it.
Attached you can find the final patches configuration.
Try connecting the visibility of Canvas2 to the counter value, just like you did with the other ones. Check the corrected patch image.
I want to have a vector layer with 16 tiles (4 by 4) and fill every tile with an image.
I have a problem with the coordinates: as the image is flat, I don't know how to calculate them from 0,0 (top-left corner) to, for example, 1023,1023 (bottom-right corner).
This is the first step toward displaying images at high resolutions. I also have a backend that can serve small pieces of the image (almost 1 GiB total size), but I have a problem with the coordinates for each tile.
I would appreciate any suggestions on how to split this task into a few small steps.
OpenLayers version: 3.5
Sounds like you want a tile vector layer - something that OL supports natively. I wouldn't manage the tiles myself; just use the built-in functions already provided.
Have a look at this example and see how the tile map server url is formatted. You should be able to do something similar for yourself.
http://openlayers.org/en/v3.5.0/examples/tile-vector.html
If you have image tiles, you don't need a vector layer. You don't have to manually calculate the coordinates of a tile, create a geometry for each tile and then load the image. That is not how it should work. :)
If you tell OpenLayers your tiling schema/grid, it will automatically detect which tiles are required for the current view extent, load the tiles and display them at the right position. Take a look at the following examples, which show different techniques to use custom image tiles:
http://openlayers.org/en/master/examples/xyz.html
http://openlayers.org/en/master/examples/xyz-esri-4326-512.html
http://openlayers.org/en/master/examples/xyz-retina.html
I am interested in creating a blending effect for a screen transition that takes the current view, pixelates it, and fades out.
The blueprint would be from Super Mario World on the Super Nintendo / Super Famicom.
I attached a YouTube video of this effect. You can see it at 0:50, before "Mario Start" is shown.
https://www.youtube.com/watch?v=naD6mNeHIsE
I want to implement this blending effect in an iOS game, in Objective-C or Swift; the language does not matter at the moment. I am interested in how this effect can be achieved.
Anyone got a hint or an idea?
I think this can be done with the following steps (a rough sketch follows the list):
1) Take the image that you want to mosaic
2) Read its pixel color data
3) Calculate the average color of each tile, based on the tile size
4) Draw the tiles with their average colors onto a new image
5) Display the new image
6) Increase the tile size and repeat from step 2
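To make those steps concrete, here is a minimal Swift sketch using Core Graphics; the function name and parameters are made up for illustration:

```swift
import UIKit

// Sketch of steps 1-6: average each tile's color and redraw it as a solid block.
func mosaic(_ image: UIImage, tileSize: Int) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height

    // Step 2: render the image into an RGBA byte buffer so we can read pixel colors.
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let drawn: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    // Steps 3-5: average the RGB values of every tile and fill that tile on a new image.
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
    return renderer.image { ctx in
        for y in stride(from: 0, to: height, by: tileSize) {
            for x in stride(from: 0, to: width, by: tileSize) {
                var r = 0, g = 0, b = 0, count = 0
                for ty in y..<min(y + tileSize, height) {
                    for tx in x..<min(x + tileSize, width) {
                        let i = (ty * width + tx) * 4
                        r += Int(pixels[i]); g += Int(pixels[i + 1]); b += Int(pixels[i + 2])
                        count += 1
                    }
                }
                UIColor(red: CGFloat(r) / CGFloat(count) / 255,
                        green: CGFloat(g) / CGFloat(count) / 255,
                        blue: CGFloat(b) / CGFloat(count) / 255,
                        alpha: 1).setFill()
                ctx.fill(CGRect(x: x, y: y, width: tileSize, height: tileSize))
            }
        }
    }
}

// Step 6: call this repeatedly with a growing tile size (and fade the host view out)
// to build up the transition frames.
```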
I think this approach using Core Graphics will have performance problems.
Another choice: use the awesome GPUImage and its GPUImageMosaicFilter. Check the sample code; it has GPUImageMosaicFilter implemented.
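GPUImage is likely the faster option. Purely as a sketch of the same idea without a third-party dependency, Core Image's CIPixellate filter (a different filter than the one recommended above) can produce the pixelated frames:

```swift
import UIKit
import CoreImage

// Pixelate one image with Core Image; `scale` is roughly the tile size in pixels.
func pixellated(_ image: UIImage, scale: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    filter.setValue(CIVector(x: 0, y: 0), forKey: kCIInputCenterKey)

    let context = CIContext()
    guard let output = filter.outputImage,
          // The filter can grow the extent a little, so crop back to the original size.
          let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

// For the transition, step the scale up over time while animating the view's alpha to 0,
// e.g. pixellated(snapshotOfCurrentScreen, scale: 8), then 16, then 32, ...
```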
I'm new to iOS programming so sorry if this is simple.
I have an image that is 25020px wide and 238px high with 60 frames (http://imgur.com/TyPtrxy), each frame is 417px wide and 238px high. I want to show the first frame then move to the next frame based on the touch location over the image.
I've been reading around, and I think it's possible with a UIImageView initialized with a frame CGRect, but I'm not sure how to implement this.
Can someone guide me in the right direction please? Thanks.
Take a look at this project:
https://github.com/dhoerl/PhotoScrollerNetwork
(Credit to David Hoerl who is a member of this site also!)
The idea is to use CATiledLayer to tile the large image, and then use a scroll view to move between tiles. Your use case seems very simple (move from tile to tile).
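If the tiling machinery feels heavier than needed for 60 fixed-size frames, here is a rough Swift sketch of the approach described in the question itself: keep the sheet in memory and crop one 417x238 frame out of it per touch position. The class and asset names are made up:

```swift
import UIKit

// Remember to set isUserInteractionEnabled = true (UIImageView defaults to false).
final class FrameScrubberView: UIImageView {
    private let sheet = UIImage(named: "sheet")!.cgImage!   // hypothetical asset name
    private let frameCount = 60
    private let frameSize = CGSize(width: 417, height: 238)

    /// Crop the requested frame out of the sheet and display it.
    func showFrame(_ index: Int) {
        let clamped = max(0, min(frameCount - 1, index))
        let rect = CGRect(x: CGFloat(clamped) * frameSize.width, y: 0,
                          width: frameSize.width, height: frameSize.height)
        if let cropped = sheet.cropping(to: rect) {
            image = UIImage(cgImage: cropped)
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        touchesMoved(touches, with: event)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let x = touches.first?.location(in: self).x, bounds.width > 0 else { return }
        // Map the horizontal touch position onto the 60 frames.
        showFrame(Int(x / bounds.width * CGFloat(frameCount)))
    }
}
```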
I have to implement a very custom tiled map layer/view on iOS 5/6 (iPhone), which loads tiles as images and/or data (JSON) from a server. It is like Google Maps, but very specific in certain respects, so I cannot easily use a solution such as:
Google Maps, or
route-me (used by MapBox)
CATiledLayer in combination with UIScrollView
The thing is: none of the solutions out there really help me, because of my specific requirements. If you think there is a suitable solution, please tell me!!!
If not:
Help me to find the best possible solution for my case!!!
"But why can I not use these beautiful solutions?"
There are a few constraints that you need to know:
We only use 3 zoom levels (0, 1 and 2)
Every tile has 9 subtiles in the next zoom level (= zoom factor 3), not 4 subtiles (= zoom factor 2) like most of the other kits use
The first layer has an initial size of 768*1024 pixels (in points it is half of that).
The second layer is three times wider and higher (zoom factor 3!!!) -> 2304*3072
The third layer is again three times wider and higher than the 2nd -> 6912*9216
Each tile that comes from the server is 256x256 pixels
So every sublayer has 9 times the number of tiles (12 on the 1st layer, 108 on the 2nd, 972 on the 3rd)
Every tile has a background image (ca. 6 KB in size, loaded as an image from the server) and foreground information (loaded as JSON for every tile from the server, 10-15 KB in size)
-> the foreground information JSON contains either an overlay image (such as traffic in Google Maps) or local tile information to be drawn into the local tile coordinate space (like annotations, but per tile)
I want to cache all the background tiles on disk, as they never change
I also want to cache the overlay tile images/information for each tile for a certain amount of time, after which they should be reloaded
Zooming should work with pinching and double-tapping
A few of my considerations:
Caching is not the problem; I do it via Core Data or similar
I thought of a UIScrollView to get smooth scrolling
I'd like to use pinching, so every time I cross into the next zoom level, I have to draw that level's tiles
The content should only be drawn in the visible area (for me, on an iPhone 5, 320x500)
Tiles that are not visible should be removed, to be memory efficient. But they should not be removed instantly, only once a tile is a certain number of pixels/points away from the visible center. There should be a "display cache" to instantaneously show tiles that were just loaded and displayed. Or is there a better technique out there?
I can load the background instantly from disk, or asynchronously from the server, as I know which tiles are visible (see the sketch after this list); the same goes for the tile JSON. Then I extract the JSON information for the tile ("is the overlay a direct image, or information such as a city name that I have to draw into the tile?") and draw or load the overlay (from DB/disk or server)
Is UIScrollView efficient enough to handle the maximum size of the tile view?
Should I use a CALayer as a sublayer to draw into? Should I use Quartz to draw directly onto a big "canvas"?
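For reference, a hedged sketch of the visible-tile bookkeeping mentioned above, assuming 256-unit tiles and the 3x-per-level layout from the constraints (all names are illustrative, and the rect must be in the same units as the tile size):

```swift
import UIKit

struct TileIndex: Hashable {
    let level: Int   // 0, 1 or 2
    let x: Int
    let y: Int
}

// Which tiles intersect the currently visible rect at a given zoom level?
func visibleTiles(in visibleRect: CGRect, level: Int, tileSize: CGFloat = 256) -> [TileIndex] {
    // Each level is 3x the previous one in both dimensions: 3x4, 9x12, 27x36 tiles.
    var factor = 1
    for _ in 0..<level { factor *= 3 }
    let columns = 3 * factor
    let rows = 4 * factor

    let minX = max(0, Int((visibleRect.minX / tileSize).rounded(.down)))
    let minY = max(0, Int((visibleRect.minY / tileSize).rounded(.down)))
    let maxX = min(columns - 1, Int((visibleRect.maxX / tileSize).rounded(.up)) - 1)
    let maxY = min(rows - 1, Int((visibleRect.maxY / tileSize).rounded(.up)) - 1)
    guard minX <= maxX, minY <= maxY else { return [] }

    return (minY...maxY).flatMap { y in
        (minX...maxX).map { x in TileIndex(level: level, x: x, y: y) }
    }
}

// Everything in this list (plus a small margin around it) gets loaded/kept;
// tiles far outside it can be released.
// visibleTiles(in: CGRect(x: 0, y: 0, width: 640, height: 1000), level: 0)
```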
Now it's your turn !!!
Apple has an example that shows zooming very deep into a single image. It uses a CATiledLayer for this.
The example can be found here: http://developer.apple.com/library/ios/#samplecode/PhotoScroller/Introduction/Intro.html
The example works with subtiles that are stored on the phone, but maybe it can be adapted to your problem.
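For orientation, here is a rough Swift sketch of the drawing pattern that sample uses; names like TileSource are placeholders, and it assumes CATiledLayer's usual power-of-two levels of detail, so the 3x-per-level scheme from the question would still need extra work:

```swift
import UIKit

// Synchronous tile lookup is assumed here; in practice you would hit a cache.
protocol TileSource {
    func tileImage(scale: CGFloat, col: Int, row: Int) -> UIImage?
}

final class TilingView: UIView {
    var tileSource: TileSource?

    // Back the view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiled = layer as! CATiledLayer
        tiled.tileSize = CGSize(width: 256, height: 256)
        tiled.levelsOfDetail = 3
        tiled.levelsOfDetailBias = 2
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // CATiledLayer calls draw(_:) once per tile-sized rect, possibly on background threads.
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext(),
              let source = tileSource,
              let tiled = layer as? CATiledLayer else { return }
        // The context comes in pre-scaled for the current level of detail
        // (note: on Retina screens the layer's contentsScale factors into this too).
        let scale = context.ctm.a
        let tileSide = tiled.tileSize.width / scale   // tile size in view coordinates
        let col = Int(rect.minX / tileSide)
        let row = Int(rect.minY / tileSide)
        source.tileImage(scale: scale, col: col, row: row)?.draw(in: rect)
    }
}
```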
I have used my own TiledLayer implementation for this type of non-standard tiling that CATiledLayer does not support (and because CATiledLayer still has some unsolved bugs).
That implementation is now available on GitHub.
Fab1n: this is the same implementation that I already sent you by e-mail.
I came up with a very neat and nice solution (custom TiledLayer implementation):
I don't use a CATiledLayer as the content view of the scroll view anymore. Instead, I added a CALayer subclass to it and added my tiles to that layer as sublayers. Each tile holds its bitmap as the layer's contents. When zooming in with the scroll view, I switch tiles manually based on the current zoom level. That's perfect!!!
If somebody wants to know the details of the implementation, no problem: write a comment and I'll post them here.
EDIT:
Details about the implementation:
The implementation is in fact really easy (a rough sketch follows the steps below):
Add a UIScrollView to your view controller's view
Add a UIView (from now on referred to as tileView) as the content view of the scroll view - same size as the scroll view (this is the completely zoomed-out mode - factor 1)
Activate zooming (I had a minimum zoom factor of 1 and a maximum of 9)
When you now place an image into the tileView, zooming in to level 9 will give us a very unsharp, pixelated image - that's what we want (I'll explain why)
If you want crisp, clear images when you zoom in, you should add CALayer instances (tiles) to tileView with addSublayer:
Assume a zoom factor of 3 (as I had), meaning 1 tile in layer 0 (zoom factor 1) will consist of 3x3 = 9 tiles in layer 1 (zoom factor 3)
In layer 0, say you put 3x4 tiles of 243x243 pixels (divisible by 3 three times) as sublayers of tileView. You set same-sized tile images as the CALayer contents. Zooming to zoom factor 3 makes them blurry (like old Google Maps)
When you hit zoom factor 3, remove all 12 layers from the superlayer and replace them with ones 3 times smaller (81x81 pixels). You get 9 times as many layers in total.
Now that you have tile layers 3 times smaller, the trick is the .contents property of CALayer. Even if the layer is 81x81, the contents (your image) are still full resolution, meaning that if you put a 243x243 pixel image as the contents of your 81x81 pixel CALayer tile, at zoom factor 3 it looks like a 243x243 pixel tile!!!
You can continue like that for any deeper zoom factor (3, 9, and so on). The tiles get smaller and smaller, but by setting the original image as contents, each tile will look crisp and sharp at the exact zoom factor it is placed in.
If you want your tiles to look sharp even between zoom factors, you have to set an image 3 times larger as .contents. Then your tile is crisp and sharp even shortly before you pass the next zoom level.
The one thing you have to figure out is that removing all tiles on a layer when passing the zoom-level threshold is not efficient. You only have to update the tiles in the visible rect.
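For anyone who wants a concrete starting point, here is a rough Swift sketch of those steps; all names are made up, image loading/caching is omitted, and the visible-rect optimisation from the last point is left out:

```swift
import UIKit

// The "tileView" that sits inside the zooming UIScrollView.
final class TileContentView: UIView {
    private var tileLayers: [CALayer] = []
    private var currentLevel = -1

    /// Rebuild the grid of tile layers for one zoom level. At level n the layers are
    /// 3^n times smaller in the view's coordinate space, but each one still gets a
    /// full-resolution 243x243 image as .contents, so it looks sharp at that zoom.
    func showLevel(_ level: Int, tileImage: (Int, Int, Int) -> UIImage?) {
        guard level != currentLevel else { return }
        currentLevel = level

        tileLayers.forEach { $0.removeFromSuperlayer() }
        tileLayers.removeAll()

        var shrink: CGFloat = 1
        for _ in 0..<level { shrink *= 3 }
        let side = 243 / shrink                              // 243, 81, 27, ... points
        let columns = Int(bounds.width / side)
        let rows = Int(bounds.height / side)

        for row in 0..<rows {
            for col in 0..<columns {
                let tile = CALayer()
                tile.frame = CGRect(x: CGFloat(col) * side, y: CGFloat(row) * side,
                                    width: side, height: side)
                tile.contents = tileImage(level, col, row)?.cgImage   // full-res bitmap
                layer.addSublayer(tile)
                tileLayers.append(tile)
            }
        }
    }
}

// In the scroll view delegate, switch levels when the zoom scale crosses a factor of 3:
// func scrollViewDidZoom(_ scrollView: UIScrollView) {
//     let level = scrollView.zoomScale >= 9 ? 2 : (scrollView.zoomScale >= 3 ? 1 : 0)
//     tileView.showLevel(level) { level, col, row in /* look up the tile image for that level */ }
// }
```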
Not the nicest of all solutions, but it works, and if someone builds a well-designed library, maybe there is potential for this to become a clean solution.