How to implement megatextures - iOS

I am interested in roughly how megatextures are/could be implemented on iOS.
In particular I am making a 2D platformer with a large (non-tiled) background, and I would like to have one (precalculated, unreasonably large) image that is mapped to the background. One option I have gone with is to chop the precalculated image into tiles and load/unload them in the background.
I am however curious about megatextures. It would be far more convenient to map these all to one surface. Are megatextures simply another way of phrasing what I am doing right now, or is something more cunning going on? Is there one super-large texture on the graphics card with multiple glTexSubImage2D calls going on?

Megatexture is a well-developed and advanced implementation of the clip-mapping technique: http://en.wikipedia.org/wiki/Clipmap
So yes, basically it is continuous background loading of content to be displayed and unloading of content that is currently unused.
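As a very rough sketch of that idea for the iOS case in the question (assuming OpenGL ES; the tile size, texture handle, and pixel source are placeholders), you keep one resident texture on the GPU and stream newly visible tiles into it with glTexSubImage2D:

```swift
import OpenGLES

// Hypothetical tile size for the streaming updates.
let tileSize: GLsizei = 256

/// Overwrite one tile-sized region of an already-allocated background texture.
/// Everything outside that region stays untouched on the GPU.
func uploadTile(into textureID: GLuint, atX x: GLint, y: GLint, pixels: UnsafeRawPointer) {
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0,
                    x, y, tileSize, tileSize,
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE),
                    pixels)
}
```

The clipmap part is deciding which tiles to stream in as the camera moves; the upload itself is just repeated sub-image calls like this.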

Related

Which is a better option for displaying irregular shapes in Swift?

Let me start off by showing that I have this UIImageView set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is this: when the user taps (but doesn't release) the button, the appropriate body part should show like this:
I can achieve this using 2 options:
The UIBezierPath class to draw the shapes, but this would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop out the highlighted body parts from the original image and position them over the UIImageView depending on which UIButton is selected. There would only be one image per body part, and this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be a BETTER option for achieving this in terms of cpu processing and memory allocation?
In other words, I'm just concerned about my app lagging as well as increasing the app's storage size. I'm not concerned about how much time it takes to implement; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
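For example, a rough Swift sketch of option 2 as a semitransparent tinted overlay shown while the button is held down might look like this (the outlet names and the "arm_highlight" asset are hypothetical):

```swift
import UIKit

final class BodyDiagramViewController: UIViewController {
    // Assumed to be connected in the storyboard.
    @IBOutlet var bodyImageView: UIImageView!
    @IBOutlet var armButton: UIButton!
    private let armOverlay = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Rendering the cutout as a template lets tintColor recolor its alpha channel.
        armOverlay.image = UIImage(named: "arm_highlight")?.withRenderingMode(.alwaysTemplate)
        armOverlay.tintColor = UIColor.red.withAlphaComponent(0.4)
        armOverlay.frame = bodyImageView.bounds
        armOverlay.isHidden = true
        bodyImageView.addSubview(armOverlay)

        // Show the highlight on touch-down, hide it when the finger lifts or the touch is cancelled.
        armButton.addTarget(self, action: #selector(showArm), for: .touchDown)
        armButton.addTarget(self, action: #selector(hideArm),
                            for: [.touchUpInside, .touchUpOutside, .touchCancel])
    }

    @objc private func showArm() { armOverlay.isHidden = false }
    @objc private func hideArm() { armOverlay.isHidden = true }
}
```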
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has test functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time a lot easier to implement.
For a quick start on tests, see here.
Performance tests are covered here.
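A minimal sketch of such a performance test pair (the measured bodies are simplified stand-ins for your own option 1 and option 2 code):

```swift
import XCTest
import UIKit

final class HighlightPerformanceTests: XCTestCase {
    func testBezierPathPerformance() {
        measure {
            // Option 1 stand-in: render a bezier shape offscreen (the real outline would be more complex).
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: 200, height: 400))
            _ = renderer.image { _ in
                UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 200, height: 400)).fill()
            }
        }
    }

    func testCutoutImagePerformance() {
        measure {
            // Option 2 stand-in: load the cutout image (asset name is hypothetical).
            _ = UIImage(named: "arm_highlight")
        }
    }
}
```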

Memory Usage of SKSpriteNodes

I'm making a tile-based adventure game in iOS. Currently my level data is stored in a 100x100 array. I'm considering two approaches for displaying my level data. The easiest approach would be to make an SKSpriteNode for each tile. However, I'm wondering if an iOS device has enough memory for 10,000 nodes. If not I can always create and delete nodes from the level data as needed.
I know this is meant to work with Tiled, but the code in there might help you optimize what you are looking to do. I have done my best to optimize it for big maps like the one you are making. The big thing to look at is how you are creating textures; I know that has been a big performance killer in the past.
Swift
https://github.com/SpriteKitAlliance/SKATiledMap
Objective-C
https://github.com/SpriteKitAlliance/SKAToolKit
Both are designed to load in a JSON string too, so there is a chance you could still generate random maps without having to use the Tiled editor, as long as you match the expected format.
Also, you may want to consider looking at how culling works in the Objective-C version, as we found more recently that removing nodes from the parent has really improved performance on iOS 9.
Hopefully you find some of that helpful and if you have any questions feel free to email me.
Edit
Another option would be to look at object pooling. The core concept is to create only the sprites you need to display, and when you are done with them, store them in a collection of sorts. When you need a new sprite, you ask the collection for one, and if it doesn't have one, you create a new one.
For example, you need a grass tile, so you ask the collection for one; it doesn't have one already created and waiting to be used, so it creates a new one. You might do this to fill a 9 x 7 grid that covers your screen. As you move, grass that scrolls off screen gets tossed into the collection to be used again when a new row comes in and needs grass. This works really well if all you are doing is displaying tiles. It is not so great if tiles have dynamic properties that need to be updated and are unique in nature.
Here is a great link even if it is for Unity =)
https://unity3d.com/learn/tutorials/modules/beginner/live-training-archive/object-pooling
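In SpriteKit terms, a minimal pool might look like the sketch below (names are hypothetical and not taken from the linked projects):

```swift
import SpriteKit

/// A minimal tile pool: reuse sprites that scrolled off screen instead of recreating them.
final class TilePool {
    private var available: [SKSpriteNode] = []
    private let texture: SKTexture

    init(texture: SKTexture) {
        self.texture = texture
    }

    /// Hand back a pooled sprite if one exists, otherwise create a new one.
    func dequeueTile() -> SKSpriteNode {
        if let tile = available.popLast() {
            return tile
        }
        return SKSpriteNode(texture: texture)
    }

    /// Called when a tile moves off screen: detach it and keep it for reuse.
    func recycle(_ tile: SKSpriteNode) {
        tile.removeFromParent()
        available.append(tile)
    }
}
```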

Best way to draw/render grid iOS - Game of Life

I am taking a stab at John Conway's game of life [wiki] & [demo]. I have developed a small program in C to calculate the next state - using a 1D array (but with 2D array logic).
I am hoping to make a small iOS app out of this (ported to Objective-C!), and am wondering about the best and fastest way to render a grid like the one seen in the video. Note, it would have to render every fraction of a second and would use an array of 1s and 0s to determine each "block's" respective colour.
Edit: I'm probably looking at around 10 frames/sec, but a very large grid. It'd be rendering hundreds of thousands of squares. Of course, if this isn't physically possible with iPhone/iPad hardware then I'll reduce the grid size. It is variable without issue; it just looks more 'epic' on a grand scale.
Any suggestions will help out, never touched anything of this manner before.
The best way depends on your criteria. Fastest would probably be to use OpenGL. You might even be able to write a shader to do the entire simulation. However, OpenGL is hard. Really hard.
I suspect that using Core Graphics and implementing code in a view's drawRect method that renders the array of cells onto the screen would be fast enough. It depends on how many cells you have and how many frames/second you want to draw.
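As a rough sketch of that Core Graphics approach (in Swift for brevity; the cell size and the row-major 1D storage are assumptions), the view's draw method just fills a rect for each live cell, and you call setNeedsDisplay() after every generation:

```swift
import UIKit

final class LifeGridView: UIView {
    var cells: [UInt8] = []        // 1D array with 2D logic, row-major
    var columns = 0
    var rows = 0
    let cellSize: CGFloat = 4      // points per cell, placeholder value

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.black.cgColor)
        for row in 0..<rows {
            for col in 0..<columns where cells[row * columns + col] == 1 {
                ctx.fill(CGRect(x: CGFloat(col) * cellSize,
                                y: CGFloat(row) * cellSize,
                                width: cellSize, height: cellSize))
            }
        }
    }
}
```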

iOS Heavy image switching

I'm developing an app that will showcase products. One of the features of this app is that you will be able to "rotate" the product using your finger/pan gesture.
I was thinking of implementing this by taking photos of the product from different angles, so when you "drag" the image, all I would have to do is switch the image accordingly. If you drag a little, I switch only 1 image... if you drag a lot, I will switch them in cadence, making it look like a movie... but I have a concern and a probable solution:
Is this performant? Since it's an art/museum product showcase, the photos will be quite large in size/definition, and loading/switching when "dragged a lot" might be a problem because it would cause flickering... The solution would be: instead of loading pic-by-pic, I would put them all inside one massive sheet and work through them as if they were a sprite...
Is that a good idea? Or should I stick with the pic-by-pic rotation?
Edit 1: There's a complication: the user will be able to zoom in/out and to rotate the product on any axis (X, Y and Z)...
My personal opinion: I don't think this will work the way you hope, or the performance and/or aesthetics will not be what you want.
1) Taking individual shots that you then try to keyframe based on touch events won't work well, because you will have inevitable inconsistencies in 'framing' the shots, such that the playback won't be smooth
2) The best way to do this, I suspect, will be to shoot it as video, using some sort of rig that allows you to keep the camera fixed while rotating the object
3) I'm pretty sure this is how most 'professional' grade product carousel type presentations work
4) Even then you will have more image frames than you need -- not sure whether you plan to embed the image files in the app or download them on demand -- but that is also a consideration in terms of how much downsampling you'll need to do to reduce frames/file size
Suggestion
Look at shooting these as video (somewhat like described above) and downsampling and removing excess frames using a video editor. Then you could use AVFoundation for playback and use your gestures to 'scrub' into the video frames. I worked on something like this for HTML playback at a large company and I can assure you it was done with video.
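A rough sketch of that gesture-driven scrubbing (the asset name and the drag-to-time mapping are placeholder assumptions, not a production implementation):

```swift
import AVFoundation
import UIKit

final class ProductSpinViewController: UIViewController {
    private var player: AVPlayer!
    private var duration: CMTime = .zero

    override func viewDidLoad() {
        super.viewDidLoad()
        // Hypothetical bundled clip of the product rotating through 360 degrees.
        let url = Bundle.main.url(forResource: "product_spin", withExtension: "mp4")!
        let item = AVPlayerItem(url: url)
        player = AVPlayer(playerItem: item)
        duration = item.asset.duration

        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)

        view.addGestureRecognizer(UIPanGestureRecognizer(target: self,
                                                         action: #selector(handlePan(_:))))
    }

    @objc private func handlePan(_ pan: UIPanGestureRecognizer) {
        // Map horizontal drag distance to a time offset and seek; the player stays paused.
        let fraction = pan.translation(in: view).x / view.bounds.width
        let offset = CMTimeMultiplyByFloat64(duration, multiplier: Float64(fraction))
        let target = CMTimeAdd(player.currentTime(), offset)
        player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
        pan.setTranslation(.zero, in: view)
    }
}
```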
Alternatively, if video won't work for you, your sprite sheet solution might work (consider using SpriteKit). But then keep in mind what I said about trying to keyframe one-off camera shots together -- it just won't work well. Maybe a compromise would be to shoot static images but do so by fixing the camera and rotating the object at very specific increments. That could work as well, I suppose, but you will need to be very careful about light and other atmospherics. It doesn't take much variation at all to be detectable to the human eye, making the whole presentation seem strange. Good luck.
A coder from my company did something like this before using 360 images of an object and it worked just great but it didn't have zoom. Maybe you could add zoom by adding a pinch gesture recognizer and placing the image view into a scroll view to zoom in on the static image.
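A minimal sketch of that pinch-to-zoom idea (the view setup and asset name are placeholders):

```swift
import UIKit

final class ZoomableImageViewController: UIViewController, UIScrollViewDelegate {
    private let scrollView = UIScrollView()
    private let imageView = UIImageView(image: UIImage(named: "product_front")) // hypothetical asset

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.frame = view.bounds
        scrollView.delegate = self
        scrollView.minimumZoomScale = 1
        scrollView.maximumZoomScale = 4          // pinch-to-zoom range
        scrollView.contentSize = imageView.bounds.size
        scrollView.addSubview(imageView)
        view.addSubview(scrollView)
    }

    // Tell the scroll view which subview to scale when the user pinches.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }
}
```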
This scenario sounds like what you really need is a simple 3D model loader library, or to write it in OpenGL yourself. This pan and zoom behavior is really basic once you make the jump to 3D, so it should be easy to find lots of examples.
All depends on your situation and time constraints :)

iOS - Interface design, images or custom drawing?

I've been looking at a lot of iOS user interfaces that have been customized. I wonder, is it better to customize the UI using images or using libraries like Core Graphics and Quartz, or is it on a case-by-case basis, as in using libraries for some elements and images for others?
It is very hard to guess your particular situation. I can say that iOS gives us a lot of leverage to build any custom interface. I would use:
images for complicated graphic elements, buttons, icons, arrows, etc.
images + stretching to get complicated backgrounds/elements (see the sketch below)
custom drawing for anything that consists of lines, ellipses, squares, linear and/or circular gradients, simple image preprocessing, etc.
The key idea is to find a balance between memory usage and processing time. Note: from my experience, interfaces based on images created by a professional designer look awesome.
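For the "images + stretching" bullet, a small sketch using cap insets (the asset name and inset values are placeholders):

```swift
import UIKit

/// Build a background view from a small image whose edges are preserved
/// while the middle stretches to fill any frame.
func makeStretchableBackground() -> UIImageView {
    let insets = UIEdgeInsets(top: 12, left: 12, bottom: 12, right: 12)
    let background = UIImage(named: "panel_background")?
        .resizableImage(withCapInsets: insets, resizingMode: .stretch)

    let panel = UIImageView(image: background)
    panel.frame = CGRect(x: 0, y: 0, width: 300, height: 120) // arbitrary target size
    return panel
}
```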
Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing, whether via Core Graphics or Quartz, uses less memory but takes more time.
Case by case. If you want a lot of complex graphics that aren't lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need to use Quartz.
It depends on you, as well. Would you rather write Quartz code for an hour and debug it, or would you rather spend an hour in Photoshop? How fast are you at PS? Do you already know Quartz?
It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach. Nontrivial icons are a good example of where an image fits, while large gradients are a good use for drawing. Drawing can take some time/experience to get right compared to graphic assets, but you can reuse that implementation later and it uses less memory in many cases (images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance. Using the gradient vs. image example, quality and time are also factors: resizing/scaling a simple image can take a lot of CPU or produce artifacts that a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling/profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.
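As an illustration of the gradient case, drawing it at the view's exact size avoids the scaling artifacts a stretched image can show (the colors are arbitrary placeholders):

```swift
import UIKit

final class GradientBackgroundView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(),
              let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                        colors: [UIColor.systemBlue.cgColor,
                                                 UIColor.systemTeal.cgColor] as CFArray,
                                        locations: [0, 1]) else { return }
        // A vertical linear gradient rendered at the view's exact size: no resizing artifacts.
        ctx.drawLinearGradient(gradient,
                               start: CGPoint(x: rect.midX, y: rect.minY),
                               end: CGPoint(x: rect.midX, y: rect.maxY),
                               options: [])
    }
}
```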
