Best way to draw/render a grid on iOS - Game of Life

I am taking a stab at John Conway's Game of Life [wiki] & [demo]. I have developed a small program in C to calculate the next state, using a 1D array (but with 2D array logic).
I am hoping to make a small iOS app out of this (porting it to Objective-C!), and am wondering about the best and fastest way to render a grid like the one seen in the video. Note that it would have to render every fraction of a second and would use an array of 1s and 0s to determine each "block's" respective colour.
Edit: I'm probably looking at around 10 frames/sec, but a very large grid. It would be rendering hundreds of thousands of squares. Of course, if this isn't physically possible with iPhone/iPad technology then I'll reduce the grid size. The size is easily variable; it just looks more 'epic' on a grand scale.
Any suggestions will help out, never touched anything of this manner before.
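(For context, the "1D array with 2D array logic" mentioned above usually amounts to index arithmetic like the following. This is a minimal Swift sketch rather than the original C program; the width, height and toroidal wrapping are illustrative assumptions.)

```swift
// Hypothetical sketch of a 1D cell array addressed with 2D logic.
// `width` and `height` are illustrative; the asker's C code is not shown.
let width = 64, height = 64
var cells = [UInt8](repeating: 0, count: width * height)

// Read the cell at column x, row y (row-major layout).
func cell(_ x: Int, _ y: Int) -> UInt8 {
    cells[y * width + x]
}

// Count live neighbours, wrapping around the edges (toroidal grid).
func liveNeighbours(_ x: Int, _ y: Int) -> Int {
    var count = 0
    for dy in -1...1 {
        for dx in -1...1 where !(dx == 0 && dy == 0) {
            let nx = (x + dx + width) % width
            let ny = (y + dy + height) % height
            count += Int(cells[ny * width + nx])
        }
    }
    return count
}
```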

The best way depends on your criteria. Fastest would probably be to use OpenGL. You might even be able to write a shader to do the entire simulation. However, OpenGL is hard. Really hard.
I suspect that using Core Graphics and implementing code in a view's drawRect method that renders the array of cells onto the screen would be fast enough. It depends on how many cells you have and how many frames/second you want to draw.
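As a rough illustration of that Core Graphics route, here is a minimal sketch (not the asker's code; the `cells`, `columns` and `rows` properties are assumptions for illustration):

```swift
import UIKit

// Minimal sketch: a view that draws a 1D cell array with Core Graphics.
// `cells`, `columns` and `rows` are assumed to be filled in elsewhere.
class LifeGridView: UIView {
    var cells: [UInt8] = []
    var columns = 0
    var rows = 0

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(),
              columns > 0, rows > 0, cells.count >= columns * rows else { return }
        let cellW = bounds.width / CGFloat(columns)
        let cellH = bounds.height / CGFloat(rows)

        ctx.setFillColor(UIColor.black.cgColor)
        for y in 0..<rows {
            for x in 0..<columns where cells[y * columns + x] == 1 {
                // Only live cells are filled; the view's background shows through elsewhere.
                ctx.fill(CGRect(x: CGFloat(x) * cellW, y: CGFloat(y) * cellH,
                                width: cellW, height: cellH))
            }
        }
    }
}

// After computing each generation, assign the new array and call setNeedsDisplay()
// to trigger a redraw.
```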

Related

What's the fastest way to draw a grid of color on iOS?

I'm trying to continually update a grid of colors on my iPhone screen (testing with 50x50, but I would like to scale up later). I have done some research but can't seem to find an agreed-upon solution. I've tested CAShapeLayers, UIBezierPaths and colored UIViews, but everything is slow. Is there another option besides diving into OpenGL or Metal? It doesn't need to be crazy fast, just faster than the aforementioned options. Thanks; I'm working in Objective-C.
If you don’t want to dive into Metal then what I found much quicker for an app I wrote years ago was to put my data into a byte array and then use that array to render a bitmap image.
I don't have all the details now. It used something like an "image provider" and various other parts, but it was much quicker than any other method I tried.
I was able to draw over 5000 "pixels" per frame using it, so it should be good for you now.
Then you can either draw it into a view in drawRect or put it into a UIImageView.
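As a rough sketch of that byte-array-to-bitmap idea (the "image provider" is presumably CGDataProvider; the RGBA layout and the function name here are assumptions for illustration):

```swift
import UIKit

// Sketch: turn a raw RGBA byte buffer into a CGImage and show it in a UIImageView.
// `width`/`height` and the RGBA-8888 layout are assumptions for illustration.
func image(fromRGBA pixels: [UInt8], width: Int, height: Int) -> UIImage? {
    let bytesPerRow = width * 4
    guard pixels.count >= bytesPerRow * height,
          let provider = CGDataProvider(data: Data(pixels) as CFData),
          let cgImage = CGImage(width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bitsPerPixel: 32,
                                bytesPerRow: bytesPerRow,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                                provider: provider,
                                decode: nil,
                                shouldInterpolate: false,   // keep hard pixel edges for a grid
                                intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}

// Usage: imageView.image = image(fromRGBA: buffer, width: 50, height: 50)
// Setting imageView.layer.magnificationFilter = .nearest keeps each "pixel" a crisp square.
```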

Which is a better option for displaying irregular shapes in Swift?

Let me start off by showing that I have this UIImageView set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
Use the UIBezierPath class to draw, but this would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop out the highlighted body parts from the original image and position them over the UIImageView depending on which UIButton is selected. There would only be one image per body part, and this is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be a BETTER option for achieving this in terms of cpu processing and memory allocation?
In other words, I'm just concerned about my app lagging as well as taking up app size storage. I'm not concerned about how much time it takes to do it, I want to just make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time it is a lot easier to implement.
For a quick start on tests, see here; performance tests are covered here.
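A performance comparison might be sketched roughly like this (the two helper calls are hypothetical stand-ins for the approaches being compared, not real APIs):

```swift
import XCTest

class BodyPartDrawingTests: XCTestCase {
    // measure { } runs its block several times and reports the average duration,
    // so each option can be timed under comparable conditions.
    func testBezierPathPerformance() {
        measure {
            // drawBodyPartWithBezierPaths()   // option 1: UIBezierPath drawing (hypothetical helper)
        }
    }

    func testImageOverlayPerformance() {
        measure {
            // overlayBodyPartImage()          // option 2: cropped image overlay (hypothetical helper)
        }
    }
}
```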

Memory Usage of SKSpriteNodes

I'm making a tile-based adventure game in iOS. Currently my level data is stored in a 100x100 array. I'm considering two approaches for displaying my level data. The easiest approach would be to make an SKSpriteNode for each tile. However, I'm wondering if an iOS device has enough memory for 10,000 nodes. If not I can always create and delete nodes from the level data as needed.
I know this is meant to work with Tiled, but the code in there might help you optimize what you are looking to do. I have done my best to optimize it for big maps like the one you are making. The big thing to look at is how you are creating textures; I know that has been a big performance killer in the past.
Swift
https://github.com/SpriteKitAlliance/SKATiledMap
Objective-C
https://github.com/SpriteKitAlliance/SKAToolKit
Both are designed to load in a JSON string too so there is a chance you could still generate random maps without having to use the Tiled Editor as long as you match the expected format.
Also, you may want to consider looking at how culling works in the Objective-C version, as we found more recently that removing nodes from their parent has really improved performance on iOS 9.
Hopefully you find some of that helpful and if you have any questions feel free to email me.
Edit
Another option would be to look at object pooling. The core concept is to create only the sprites you need to display and, when you are done with them, store them in a collection of sorts. When you need a new sprite you ask the collection for one, and if it doesn't have one you create a new one.
For example, you need a grass tile and ask the pool for one; there isn't one already created and waiting to be reused, so it creates a new one. You might do this to fill a 9x7 grid that covers your screen. As you move, grass that scrolls off screen gets tossed back into the collection to be used again when a new row comes in and needs grass. This works really well if all you are doing is displaying tiles; it's not so great if the tiles have dynamic properties that need to be updated and are unique in nature. A minimal sketch of the idea follows after the link below.
Here is a great link even if it is for Unity =)
https://unity3d.com/learn/tutorials/modules/beginner/live-training-archive/object-pooling
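Here is a bare-bones sketch of the pooling idea in SpriteKit (assuming all pooled tiles share one texture; the class and method names are illustrative, not part of the linked toolkits):

```swift
import SpriteKit

// Minimal object pool for tile sprites: reuse nodes that scrolled off screen
// instead of allocating new ones every frame.
final class TilePool {
    private var spares: [SKSpriteNode] = []
    private let texture: SKTexture

    init(texture: SKTexture) {
        self.texture = texture
    }

    /// Returns a recycled node if one is available, otherwise creates a new one.
    func dequeueTile() -> SKSpriteNode {
        if let tile = spares.popLast() {
            return tile
        }
        return SKSpriteNode(texture: texture)
    }

    /// Call when a tile scrolls off screen; it is detached and kept for reuse.
    func recycle(_ tile: SKSpriteNode) {
        tile.removeFromParent()
        spares.append(tile)
    }
}
```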

How to implement Megatextures

I am interested in roughly how megatextures are/could be implemented on iOS.
In particular I am making a 2D platformer with a large (non-tiled) background, and I would like to have one (precalculated, unreasonably large) image that is mapped to the background. One option I have gone with is to chop the precalculated image into tiles and load/unload them in the background.
I am, however, curious about megatextures. It would be far more convenient to map these all to one surface. Are megatextures simply another way of phrasing what I am doing right now, or is something more cunning going on? Is there one super-large texture on the graphics card with multiple glTexSubImage2D calls going on?
Megatexture is a well-developed and advanced implementation of the clip-mapping technique: http://en.wikipedia.org/wiki/Clipmap
So yes, basically it is continuous background loading of content to be displayed and unloading of currently unused content.
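In OpenGL ES terms, the "one big texture updated in sub-regions" part does come down to glTexSubImage2D calls against an already-allocated texture. A rough sketch (the tile layout, RGBA format and function name are assumptions, and this uses the now-deprecated OpenGL ES API):

```swift
import OpenGLES

// Sketch: refresh one tile-sized region of a large, already-allocated texture.
// Assumes the texture was created earlier with glTexImage2D and the data is
// tightly packed RGBA; tileX/tileY/tileSize are illustrative names.
func uploadTile(_ pixels: [UInt8], tileX: Int, tileY: Int, tileSize: Int, textureID: GLuint) {
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)
    pixels.withUnsafeBytes { buffer in
        glTexSubImage2D(GLenum(GL_TEXTURE_2D),
                        0,                              // mip level
                        GLint(tileX * tileSize),        // x offset into the big texture
                        GLint(tileY * tileSize),        // y offset
                        GLsizei(tileSize), GLsizei(tileSize),
                        GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE),
                        buffer.baseAddress)
    }
}
```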

Making a "piece of paper with text on it" in OpenGL (Specifically on iOS 5)

I've never done OpenGL, but I'm looking for some pointers on this particular question on an AR app I'm practicing with.
I'd like to make an app with a "flat rectangle" along with text written on the surface of the rectangle. Visually, I'm imagining something along the lines of a piece of paper with text written on it. Each time the app starts, the text would be something different (the text is pulled from a plist file).
The user would be able to view the paper from all sides, much as if there was a piece of paper hanging in front of him.
Is this trivial to do in OpenGL? How could I get started?
Sorry for the really open-ended question, but I wanted to get a feel for how this kind of thing is done.
Looking at the OpenGL template source code in the Xcode sample projects, I see that there is a big array of vertices. I presume that to create a "flat" rectangle, I'd essentially just have to remove the z-axis or set it to zero. And as for the dynamic text that will attach to the surface of the flat rectangle... I don't have any idea how to do that.
This question is hard to answer unambiguously. In general, this is trivial, but then again it is not.
Drawing a "flat rectangle with something on it" is a couple of API calls, as simple as it can get. Drawing text in OpenGL in an efficient way, and high quality, and without big preprocessing is an entirely different story.
What I would do is render text using whatever the "normal system-supported" way is under iOS (just like you would draw in any window; I don't know this specific detail), but draw into a bitmap rather than onto the screen. This should be supported; pretty much every OS has supported it for at least 10-15 years. Then turn this bitmap into a texture, bind it, and draw your trivial flat quad with OpenGL (set up a vertex buffer with 4 vertices, each with a texture coordinate, and draw two triangles - as easy as it gets).
The huge advantage of that is that you get to use the installed system fonts (or any fonts available), you don't need to generate a bitmap font and don't need to think about really ugly things such as hinting and proper spacing, and it's much easier to mix different text styles, etc. OpenGL has built-in support for text too, of course, but it is not terribly efficient or nice either. If the text does not change every millisecond, it's really best to render it using the standard renderer that the operating system provides (yes, that probably won't be hardware accelerated, but so what... since the user must read the text, it likely won't change every millisecond).
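A rough sketch of the "render text into a bitmap" step using UIKit (this uses the modern UIGraphicsImageRenderer rather than whatever was current on iOS 5, and the font and inset choices are just illustrative):

```swift
import UIKit

// Sketch: render a string into a bitmap with the system text renderer.
// The resulting CGImage can then be uploaded as an OpenGL texture
// (for example via GLKTextureLoader) and mapped onto the paper quad.
func makeTextBitmap(_ text: String, size: CGSize) -> CGImage? {
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { ctx in
        // White "paper" background so the quad reads as a sheet of paper.
        UIColor.white.setFill()
        ctx.fill(CGRect(origin: .zero, size: size))

        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 24),   // any installed font works
            .foregroundColor: UIColor.black
        ]
        (text as NSString).draw(in: CGRect(origin: .zero, size: size).insetBy(dx: 16, dy: 16),
                                withAttributes: attributes)
    }
    return image.cgImage
}
```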
Now it gets more complicated if your "piece of paper" should bend and twist too, or do a page-peel effect rather than being just a flat rectangle. In that case you need to tessellate it, which can be harder than it sounds. Not all tessellations look optimal for all bends/twists, or they do but do not have the optimal (read: minimum) number of vertices.
There is an article on "page peel" and such tessellation in one of the GPU Gems or GPU Pro books, let me search...
There: Andreas Bizzotto, "A Shader-Based eBook Reader - Page peeling effect", GPU Pro 2, pp. 278-299.
Maybe you can get hold of a copy or are lucky enough to find it on Google Books or something.
