Making a "piece of paper with text on it" in OpenGL (specifically on iOS 5)

I've never done OpenGL, but I'm looking for some pointers on this particular question for an AR app I'm practicing with.
I'd like to make an app with a "flat rectangle" along with text written on the surface of the rectangle. Visually, I'm imagining something along the lines of a piece of paper with text written on it. Each time the app starts, the text would be something different (the text is pulled from a plist file).
The user would be able to view the paper from all sides, much as if there were a piece of paper hanging in front of them.
Is this trivial to do in OpenGL? How could I get started?
Sorry for the really open-ended question, but I wanted to get a feel for how this kind of thing is done.
Looking at the OpenGL template source code in the Xcode sample projects, I see that there is a big array of vertices. I presume that to create a "flat" rectangle, I'd essentially just have to set the z coordinates to zero. And then there's the dynamic text that will attach to the surface of the flat rectangle... I don't have any idea how to do that.

This question is hard to answer unambiguously. In one sense this is trivial; in another it is not.
Drawing a "flat rectangle with something on it" is a couple of API calls, as simple as it gets. Drawing text in OpenGL efficiently, at high quality, and without heavy preprocessing is an entirely different story.
What I would do is render the text using whatever the "normal system-supported" way is under iOS (just as you would draw into any window; I don't know that specific detail), but draw into a bitmap rather than onto the screen. This should be supported; pretty much every OS has supported it for at least 10-15 years. Then turn this bitmap into a texture, bind it, and draw a trivial flat quad with OpenGL (set up a vertex buffer with 4 vertices, each with a texture coordinate, and draw two triangles - as easy as it gets).
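On iOS the bitmap step is UIKit/Core Graphics plus one texture upload. Roughly something like this; a Swift sketch, not a tested recipe, assuming an EAGLContext is already current (the function name, font, and sizes are only illustrative):

    import UIKit
    import OpenGLES

    // Sketch: draw a string into a bitmap with the normal UIKit text renderer,
    // then upload that bitmap as an OpenGL ES texture.
    func makeTextTexture(_ text: String, size: CGSize) -> GLuint {
        // 1. Render the text into an offscreen image context.
        UIGraphicsBeginImageContextWithOptions(size, true, 1)
        UIColor.white.setFill()
        UIRectFill(CGRect(origin: .zero, size: size))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 24),
            .foregroundColor: UIColor.black
        ]
        (text as NSString).draw(in: CGRect(origin: .zero, size: size).insetBy(dx: 8, dy: 8),
                                withAttributes: attributes)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // 2. Pull out raw RGBA pixels via a bitmap context.
        guard let cgImage = image?.cgImage else { return 0 }
        let width = cgImage.width, height = cgImage.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        pixels.withUnsafeMutableBytes { buffer in
            let context = CGContext(data: buffer.baseAddress, width: width, height: height,
                                    bitsPerComponent: 8, bytesPerRow: width * 4,
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        }

        // 3. Upload the bitmap as a texture; bind it when drawing the two-triangle quad.
        //    (The bitmap is top-down, so flip the V texture coordinate on the quad.)
        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
        return texture
    }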
The huge advantage of that is that you get to use the installed system fonts (or any fonts available), you don't need to generate a bitmap font, you don't need to think about really ugly things such as hinting and proper spacing, and it's much easier to mix different text styles, etc. OpenGL itself has no built-in support for text, and rolling your own is not terribly efficient or nice either. If the text does not change every millisecond, it's really best to render it with the standard renderer that the operating system provides (yes, that probably won't be hardware accelerated, but so what... since the user must read the text, it likely won't change every millisecond anyway).
Now it gets more complicated if your "piece of paper" should bend and twist too, or do a page-peel effect rather than being just a flat rectangle. In that case you need to tessellate it, which can be harder than it sounds. Not all tessellations look good for all bends/twists, or they look good but don't have the optimal (read: minimum) number of vertices.
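Tessellating the flat case just means generating a grid instead of 4 vertices, so that a vertex shader (or the CPU) has something to displace. A rough Swift sketch; the vertex layout is my own assumption:

    // Subdivide a unit rectangle into a (columns x rows) grid of vertices,
    // so the "paper" can later be bent or peeled by displacing z.
    struct GridVertex {
        var x: Float, y: Float, z: Float   // position (z stays 0 while the paper is flat)
        var u: Float, v: Float             // texture coordinate
    }

    func makeGrid(columns: Int, rows: Int) -> (vertices: [GridVertex], indices: [UInt16]) {
        var vertices: [GridVertex] = []
        for row in 0...rows {
            for col in 0...columns {
                let u = Float(col) / Float(columns)
                let v = Float(row) / Float(rows)
                vertices.append(GridVertex(x: u, y: v, z: 0, u: u, v: v))
            }
        }
        var indices: [UInt16] = []
        let rowStride = UInt16(columns + 1)
        for row in 0..<rows {
            for col in 0..<columns {
                let i = UInt16(row * (columns + 1) + col)
                // Two triangles per grid cell.
                indices += [i, i + 1, i + rowStride,
                            i + 1, i + rowStride + 1, i + rowStride]
            }
        }
        return (vertices, indices)
    }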
There is an article on "page peel" and such tesselation in one of the GPU Gems or GPU Pro books, let me search...
There: Andreas Bizzotto, "A Shader-Based eBook Reader - Page Peeling Effect", GPU Pro 2, pp. 278-299.
Maybe you can get hold of a copy or are lucky enough to find it on Google Books or something.

Related

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all the drawing-app example code I've come across seems to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate, say, an oil brush or charcoal.
Here is an example of a brush tip texture: Brush tip
The result when painting with the same brush tip: Result
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush tip PNG and the canvas (or the image I'm painting onto) into an array of integers. Then, I simply copied the values from the brush tip to the canvas texture at the appropriate index. At the end of the cycle, I displayed the image for that time-step. Repeat this dozens of times in between each point returned by the mouse.
How would I approach this in iOS, and in real-time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (on a higher level than Metal, though it can integrate with Metal). Unfortunately, the documentation is a bit outdated, but I'm sure you can translate it into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.
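I don't know the internals of that sample project, but the stamping idea itself (interpolate points between two touch locations and draw the brush tip at each) is compact in any framework. Below is a hedged Core Graphics sketch with made-up names; the speed win over the per-pixel Swift approach you linked is that all pixel work stays inside one persistent bitmap context, and Core Image or Metal would move the same compositing onto the GPU:

    import UIKit

    // Sketch: keep one persistent bitmap context as the canvas and stamp the
    // brush-tip image at small steps along each stroke segment.
    // Note: CGContext's origin is bottom-left, so flip touch points if needed.
    final class StampingCanvas {
        private let context: CGContext
        private let brushTip: CGImage
        private let brushSize: CGSize

        init?(canvasSize: CGSize, brushTip: CGImage) {
            guard let context = CGContext(data: nil,
                                          width: Int(canvasSize.width),
                                          height: Int(canvasSize.height),
                                          bitsPerComponent: 8,
                                          bytesPerRow: 0,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return nil }
            context.setFillColor(UIColor.white.cgColor)
            context.fill(CGRect(origin: .zero, size: canvasSize))
            self.context = context
            self.brushTip = brushTip
            self.brushSize = CGSize(width: brushTip.width, height: brushTip.height)
        }

        /// Stamps the brush tip at `spacing`-sized steps between two touch points.
        func stroke(from start: CGPoint, to end: CGPoint, spacing: CGFloat = 2) {
            let distance = hypot(end.x - start.x, end.y - start.y)
            let steps = max(Int(distance / spacing), 1)
            for step in 0...steps {
                let t = CGFloat(step) / CGFloat(steps)
                let center = CGPoint(x: start.x + (end.x - start.x) * t,
                                     y: start.y + (end.y - start.y) * t)
                context.draw(brushTip, in: CGRect(x: center.x - brushSize.width / 2,
                                                  y: center.y - brushSize.height / 2,
                                                  width: brushSize.width,
                                                  height: brushSize.height))
            }
        }

        /// Current canvas for display, e.g. in a UIImageView.
        var image: UIImage? {
            context.makeImage().map { UIImage(cgImage: $0) }
        }
    }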

iOS - Displaying text with OpenGL ES 2.0

I'm surprisingly struggling a lot to display text with OpenGL ES 2.0. There are a ton of posts on Stack Overflow debating the subject, showing a few lines of code, or linking to articles from 2010 that use OpenGL ES 1.x (not compatible).
But they are quite vague, and to my knowledge there is no complete code or convenient way to display text with version 2.
Do you know if there is a modern way to display text? Like just adding a pod and writing something like this?
[font drawText:@"This is a text" size:@12];
Thanks a lot in advance; any help would be very much appreciated.
EDIT 1: I have to use OpenGL ES 2.0 and can't use anything else, for internal reasons.
EDIT 2: I've found two libraries that do it:
FTGLES: it crashes at runtime when I try to use it.
https://github.com/chandl34/public/tree/master/personal/c%2B%2B/Font
This one is simple but written for ES 1, so I need to port the code to ES 2.
EDIT 3: ES 1 and ES 2 are different: ES 2 works with shaders.
Since you are targeting ES 2, where everything goes through shaders, it is hard for a tool to do this for you. Even if you find one, it may conflict with your rendering flow in many ways, such as binding its own shaders and buffers or changing blending and depth-buffer state. And even then you may not have good control over how and where the text is drawn (on a rotating 3D box, for instance).
But from your question (its snippet) it seems more like all you want is a 2D overlay with text, which would look exactly like using a UILabel. If that is the case, I suggest you actually use UILabel to draw these texts. You can easily add such views on top of the view that shows your OpenGL content.
In the other case, where you still need to draw text on a 3D object and want to do it easily, I still suggest UILabel: take a snapshot of it and push it into a new texture (or a texture atlas). Then you can draw it like any other object. UILabel will handle all the fonts, alignment, colors, multiline text and wrapping, font size adjustments... So if you already have a system for drawing a texture in the scene, you are not far from a tool that draws text on screen, since you transfer the data through a texture.
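Getting that snapshot is only a few lines. A minimal sketch (the label configuration is just an example); the resulting UIImage can be uploaded as a texture like any other bitmap:

    import UIKit

    // Sketch: render a configured UILabel into a UIImage so its pixels can be
    // pushed into an OpenGL ES texture.
    func snapshot(of label: UILabel) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(label.bounds.size, false, UIScreen.main.scale)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        label.layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }

    // Usage: configure the label off-screen, snapshot it, upload the pixels.
    let label = UILabel(frame: CGRect(x: 0, y: 0, width: 256, height: 64))
    label.text = "This is a text"
    label.textColor = .white
    label.textAlignment = .center
    let labelImage = snapshot(of: label)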
Nothing has changed since OpenGL ES 1. Text is usually displayed in a planar projection by creating quads that are textured from a font texture. There are many tutorials on this topic.
One example of how to do it.
However, this is quite a lot of work when you're starting from scratch.
There might be a better way which, depending on what you plan to do, may or may not be suitable: mix UIKit with your OpenGL view and have the text drawn with UIKit as an overlay (UILabel, UIButton, etc.).

Composed animations, sprites in iOS

Let's say I want to display a customizable (2D, cartoon-like) character, where some properties, e.g. eye color, hair style, clothing etc., can be chosen from a predefined set of options. Now I want to animate the character. What's the best way to deal with the customization?
1) For example, I could make a sprite sheet for each combination of properties. That's not very memory efficient and not very flexible, but probably gives the best performance.
2) I could compose the character from various layers, where each property only affects one layer. Thus, I could make a sprite-sheet for the body, a collection of sprite-sheets for the eyes (one for each eye color) etc.
2a) In that case, I could merge the selected sprite-sheets in order to generate a single sprite-sheet containing the animation of the customized character.
2b) Alternatively, I could keep the sprite-sheets separate and try to animate them simultaneously as layers. I fear that this might become a problem performance-wise.
3) I could try to modify the layers programmatically, e.g. use a sprite-sheet for the eyes as a mask and map some texture onto it before merging it down to a single sprite-sheet. I would think this is a very flexible approach when it comes to simple properties like eye color, but it might become difficult for things like hair style. I am aware that this depends a lot on the character, and a general answer is probably difficult.
I assume that my problem is not new, so there is probably a standard approach to it.
Concerning the platform, I'm particularly interested in iOS and try to avoid OpenGL (well, I'm open-minded). Maybe there is a nice framework that can help me here?
Thanks!
Depending on what you're working on, you might want to create part or all of the animations in another tool, such as Flash. It is much easier to work in a visual environment.
There are tools that take SWF files and create sprite sheets that you would then animate in cocos2d.
That is a common game creation workflow.
You probably want to take a look at how to create sprites in cocos2d.
Cocos2d offers abstractions to animate single parts and compose them (like CCBatchNode or CCNode). There are also companion tools that help you pack sprites into sprite sheets (e.g. TexturePacker) and build levels (e.g. LevelHelper).
Cocos2d is an open source framework and it is widely used. You also have cocos3d but I never used it :).
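If it helps, composing the character from layered sprites (your option 2b) looks roughly like this. A hedged sketch only: it assumes cocos2d 3.x-style classes exposed to Swift, and the image names are made up:

    import UIKit  // for CGPoint; cocos2d itself is assumed to come in via a bridging header

    // Sketch: one parent node, one child sprite per customizable property,
    // so swapping eye color or hair style only swaps one child.
    func makeCharacter(eyeColor: String, hairStyle: String) -> CCNode {
        let character = CCNode()
        let body = CCSprite(imageNamed: "body.png")
        let eyes = CCSprite(imageNamed: "eyes_\(eyeColor).png")   // one sheet per eye color
        let hair = CCSprite(imageNamed: "hair_\(hairStyle).png")  // one sheet per hair style

        // All layers share the same origin so their animation frames stay aligned.
        for layer in [body, eyes, hair] {
            layer.position = CGPoint.zero
            character.addChild(layer)
        }
        return character
    }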

Complex Number App - graphing with core-plot, power-plot or else?

I'm coding an iOS app that will explain complex numbers to the user. Complex numbers can be displayed in Cartesian coordinates, and that's what I want to do: draw one or more vectors on the screen.
I am looking for the easiest way to draw 3 vectors in a coordinate system that adjusts itself to the vector size (if the x coordinate is greater than the y coordinate, scale both axes to the x coordinate, and vice versa).
I tried using Core Plot, which I think is way too multifunctional for my purpose.
Right now I am working with PowerPlot and my coordinate system looks okay already, but I still encounter some problems (the x- and y-axes are set to the x and y values, which results in a 45-degree line no matter what the user inputs).
The functionality of the examples in CorePlot and PowerPlot don't seem to meet my needs.
My last two approaches were using HTML in a web view, and doing it all myself with Quartz (not the simple way...).
Do you have any advice how to do this the simple way, as it is a simple problem, I guess?
If you don't want to do much actual graphing and plotting, then using Core Plot or similar sounds like overkill to me. The extra bloat of adding Core Plot to your project, not to mention the time taken to understand how to use it, might not be worth it for some simple graphics.
Quartz is well equipped for the job of showing a few vectors on the screen, assuming you're not interested in fancy 3D graphics. There are plenty of tutorials and examples of using Core Graphics (AKA Quartz) to draw lines etc. If you're going the Quartz route, perhaps get some simple line drawing going in Quartz, then ask more questions if you need help with the maths aspect of it.
The typical technique used when rendering with Quartz is to override drawRect in a subclass of UIView and place calls to Core Graphics drawing functions in there.
A decent question and example of Quartz line drawing is here:
How do I draw a line on the iPhone?
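A minimal sketch of that approach: a UIView subclass whose draw(_:) scales both axes to the largest vector component and strokes each vector from the origin (class and property names are mine):

    import UIKit

    // Sketch: a plain UIView that draws a few vectors from the origin,
    // scaling both axes to the largest component so everything stays visible.
    final class VectorPlotView: UIView {
        var vectors: [CGPoint] = [CGPoint(x: 3, y: 2), CGPoint(x: -1, y: 4)] {
            didSet { setNeedsDisplay() }
        }

        override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext() else { return }
            let center = CGPoint(x: bounds.midX, y: bounds.midY)
            // Same scale on both axes, based on the largest |x| or |y| of any vector.
            let maxComponent = vectors.map { max(abs($0.x), abs($0.y)) }.max() ?? 1
            let scale = (min(bounds.width, bounds.height) / 2 - 10) / maxComponent

            // Axes.
            context.setStrokeColor(UIColor.lightGray.cgColor)
            context.setLineWidth(1)
            context.strokeLineSegments(between: [
                CGPoint(x: 0, y: center.y), CGPoint(x: bounds.maxX, y: center.y),
                CGPoint(x: center.x, y: 0), CGPoint(x: center.x, y: bounds.maxY)
            ])

            // Vectors (y is flipped because UIKit's y axis points down).
            context.setStrokeColor(UIColor.blue.cgColor)
            context.setLineWidth(2)
            for v in vectors {
                context.move(to: center)
                context.addLine(to: CGPoint(x: center.x + v.x * scale,
                                            y: center.y - v.y * scale))
                context.strokePath()
            }
        }
    }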
If you aren't averse to using Google Chart Image, you can load reasonably complex data sets in a simple manner by calling the appropriate URL and then putting the resulting image in a UIImageView. It takes very little code: here is a blog post explanation with sample code.
The limitations are:
the length of the data set is restricted by the maximum URL length you can request from Google (2048 characters, which, with encoding, is quite a lot), though I've plotted 120 data points in 4 series;
a net connection is required (at least to get the initial chart);
and perhaps the biggest problem: the API is deprecated and will be discontinued at some point in 2015. You would then have to switch to the UIWebView/JavaScript Google Chart API implementation...

Procedurally animating the growing of a 2D plant

I'm trying to figure out the best way to procedurally animate the growing of a 2D plant in iOS.
I want the plant to animate to give an encroaching feeling to the user.
Basically, to animate the growing of a branch, with little buds that will eventually animate into full grown leaves.
To breathe a little life into it, I'd also like the plant to sway a bit as it grows, rather than feeling hand painted on the screen.
One way I've thought of is to use CGPaths and Bezier curves to create the shape of the stalk and the leaves, but I'm not entirely sure how to animate the drawing of the paths. Once I have the "drawing" of the stalk, I'd like to "plant" little buds at certain points on the stalk as the line is growing/animating, and these buds will also start to grow outwards from the plant.
Any suggestions on what route to take to accomplish this task? I'd prefer to procedurally animate as opposed to hand drawing each frame and animating that way. My reasoning is that I imagine procedurally animating will be less time consuming, give me more control over different aspects of the animation, and be reusable in other projects (not to mention, it will be fun to program!)
I've come across this blog posting for the drawing of animated lines:
http://oleb.net/blog/2010/12/animating-drawing-of-cgpath-with-cashapelayer/
Perhaps this would be a starting approach for achieving the results I want; I need to sit down and go through the code he posted.
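From skimming it, the core of that technique seems to be animating a CAShapeLayer's strokeEnd from 0 to 1; here is a minimal sketch of that idea (the path and durations are just placeholders):

    import UIKit

    // Sketch: reveal a "stalk" path over time by animating strokeEnd.
    func addGrowingStalk(to view: UIView) -> CAShapeLayer {
        let path = UIBezierPath()
        path.move(to: CGPoint(x: view.bounds.midX, y: view.bounds.maxY))
        path.addCurve(to: CGPoint(x: view.bounds.midX, y: view.bounds.midY),
                      controlPoint1: CGPoint(x: view.bounds.midX - 60, y: view.bounds.maxY - 80),
                      controlPoint2: CGPoint(x: view.bounds.midX + 60, y: view.bounds.midY + 80))

        let stalk = CAShapeLayer()
        stalk.path = path.cgPath
        stalk.strokeColor = UIColor.green.cgColor
        stalk.fillColor = nil
        stalk.lineWidth = 3
        view.layer.addSublayer(stalk)

        // Animate the reveal, as if the stalk were growing; buds could be added
        // at intermediate times along the same path.
        let grow = CABasicAnimation(keyPath: "strokeEnd")
        grow.fromValue = 0
        grow.toValue = 1
        grow.duration = 4
        stalk.strokeEnd = 1            // final model value
        stalk.add(grow, forKey: "grow")
        return stalk
    }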
Also, maybe this is something that would be easier to do using cocos2d or something similar? Or perhaps QuartzCore and Core Animation will work fine.
Thanks for any suggestions you might have, any information is helpful at this point.
(Great question! Posting this as a "community wiki" since it is not an answer but just some references and I didn't want the links to get screwed up in comments. Perhaps people want to add to this?)
I did a simple search on "procedural tree branching code" and there were lots of interesting hits - really rich area.
A post on gamedev.stackexchange pointed to this great resource: Algorithmic Botany
Also Snappy Tree is pretty amazing and the source code is available.
These two also sound interesting:
TReal is a program capable of generating realistic 3D tree models.
Arbaro is an implementation of the tree generating algorithm described in Jason Weber & Joseph Penn: "Creation and Rendering of Realistic Trees" written in Java.
Perhaps more accessible to the OP, and with a less complex result, are these ActionScript tutorials on fractal trees. ActionScript drawing code is pretty easily translated to Core Graphics.
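As a taste of how small such a fractal-branch sketch becomes once translated to Core Graphics (the angles, lengths, and depth are arbitrary):

    import UIKit

    // Sketch: a recursive fractal branch drawn with Core Graphics,
    // in the spirit of the ActionScript tutorials above.
    func drawBranch(in context: CGContext, from start: CGPoint,
                    length: CGFloat, angle: CGFloat, depth: Int) {
        guard depth > 0 else { return }
        let end = CGPoint(x: start.x + length * cos(angle),
                          y: start.y - length * sin(angle))   // y flipped for UIKit
        context.setLineWidth(CGFloat(depth))
        context.move(to: start)
        context.addLine(to: end)
        context.strokePath()

        // Two child branches, shorter and splayed outward.
        drawBranch(in: context, from: end, length: length * 0.7,
                   angle: angle + .pi / 7, depth: depth - 1)
        drawBranch(in: context, from: end, length: length * 0.7,
                   angle: angle - .pi / 7, depth: depth - 1)
    }

    // Usage inside a UIView's draw(_:):
    //     if let ctx = UIGraphicsGetCurrentContext() {
    //         ctx.setStrokeColor(UIColor.brown.cgColor)
    //         drawBranch(in: ctx, from: CGPoint(x: bounds.midX, y: bounds.maxY),
    //                    length: 80, angle: .pi / 2, depth: 7)
    //     }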
