Non-destructive filter in OpenGL - iOS

I am making a photography app for the iPhone and will be using OpenGL to apply effects to the images. Now I'm a bit of an OpenGL noob and was wondering: is there a way to build a filter (saturation & blur) that can be easily reversed?
To explain: the user takes a picture and applies a blur of 5 and a saturation of 3 (arbitrary values), but then comes back and turns it down to a blur of 3 and a saturation of 2. Would the result be the same as if he had applied a blur of 3 and a saturation of 2 to the original image?

Save the original image and store the filter changes as an array of instructions that you can replay at a later date. This will also give you selective undo ability.
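For example, a minimal sketch of that instruction-replay idea in Swift. The FilterStep/EditSession names are made up here, and the applyBlur/applySaturation closures stand in for whatever GPU filters you end up using:

```swift
import UIKit

// One entry per edit the user makes. The cases are placeholders; add
// whatever filters your app supports.
enum FilterStep {
    case blur(radius: Double)
    case saturation(amount: Double)
}

struct EditSession {
    let original: UIImage          // never overwritten
    var steps: [FilterStep] = []   // the "recipe" applied on top of it

    // Re-render the whole stack from the untouched original, so changing or
    // removing a step never accumulates error from earlier renders.
    func render(applyBlur: (UIImage, Double) -> UIImage,
                applySaturation: (UIImage, Double) -> UIImage) -> UIImage {
        var image = original
        for step in steps {
            switch step {
            case .blur(let radius):
                image = applyBlur(image, radius)
            case .saturation(let amount):
                image = applySaturation(image, amount)
            }
        }
        return image
    }

    // Selective undo: drop one instruction and re-render from the original.
    mutating func removeStep(at index: Int) {
        steps.remove(at: index)
    }
}
```

Turning a slider down then just means editing `steps` and calling `render` again on the untouched original, which is exactly the behavior asked about in the question.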

You cannot reverse filters like blur. Such filters lose some of the information in the image, so it is hard to get it back. See the discussion here.
Using OpenGL (or any other API) you can easily apply filters as "postprocessing" effects. Just render a quad with your texture to some render target, and the transformed image will be your output.
Here is a link to an oZone3D tutorial on how to do that.
You can save the created output (but under a different filename!).
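For reference, a rough sketch of such an offscreen pass with OpenGL ES from Swift. `drawFullscreenQuad` is a hypothetical helper that binds your filter shader and source texture and issues the draw call; error checking is omitted:

```swift
import OpenGLES

// Create an offscreen render target: while its framebuffer is bound, drawing
// goes into `colorTexture` instead of the screen, so a filtered result can be
// saved or fed into the next pass while the original texture stays untouched.
func makeRenderTarget(width: GLsizei, height: GLsizei) -> (fbo: GLuint, colorTexture: GLuint) {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), texture, 0)
    return (fbo, texture)
}

// One filter pass: bind the target, draw a fullscreen quad textured with the
// source image through the filter shader, and the result lands in the
// attached texture.
func runFilterPass(source: GLuint,
                   target: (fbo: GLuint, colorTexture: GLuint),
                   width: GLsizei, height: GLsizei,
                   drawFullscreenQuad: (GLuint) -> Void) {
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), target.fbo)
    glViewport(0, 0, width, height)
    drawFullscreenQuad(source)
    // Rebind your view's framebuffer afterwards before drawing to the screen.
}
```

Because the source texture is never modified, re-running the passes with new parameters always starts from the original image.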

Non-destructive editing is API-agnostic; you can implement it with OpenGL, in software, or with anything else. All you really need to do is keep the source data aside instead of overwriting it. You can even push the "history" back to disk to avoid bloating RAM and GPU memory.
From the context of your question I assume you are using some of the out-of-the-box, ready-to-use functions Apple provides in their API. In this case you rely on a stock implementation, so you are stuck with its destructive behavior until you come up with something better yourself.

Related

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all the drawing-app example code I've come across seems to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate, say, an oil brush or charcoal.
Here is an example of a brush tip texture: Brush tip
The result when painting with the same brush tip: Result
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush tip PNG and the canvas (or the image I'm painting onto) into an array of integers. Then, I simply copied the values from the brush tip to the canvas texture at the appropriate indices. At the end of the cycle, I displayed the image for that time-step. Repeat this dozens of times in between each point returned by the mouse.
How would I approach this in iOS, and in real-time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (on a higher level than Metal, though it can integrate with Metal). Unfortunately, the documentation is a bit outdated and mostly in Objective-C, but I'm sure you can translate it into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.
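Until you dig into that sample, here is a rough sketch (not Apple's code) of the stamping idea your Processing prototype used, expressed with Core Image compositing. The function name and the spacing value are arbitrary choices:

```swift
import CoreImage
import CoreGraphics

// Stamp a brush-tip image at small, evenly spaced steps between two touch
// points, compositing each stamp over the accumulated canvas image.
func stamp(brushTip: CIImage,
           from start: CGPoint, to end: CGPoint,
           onto canvas: CIImage,
           spacing: CGFloat = 2.0) -> CIImage {
    var result = canvas
    let dx = end.x - start.x
    let dy = end.y - start.y
    let distance = (dx * dx + dy * dy).squareRoot()
    let steps = max(Int(distance / spacing), 1)

    for i in 0...steps {
        let t = CGFloat(i) / CGFloat(steps)
        let position = CGPoint(x: start.x + t * dx, y: start.y + t * dy)
        // Center the brush tip on the interpolated point, then composite it
        // over everything painted so far.
        let placed = brushTip.transformed(by: CGAffineTransform(
            translationX: position.x - brushTip.extent.width / 2,
            y: position.y - brushTip.extent.height / 2))
        result = placed.composited(over: result)
    }
    return result
}
```

Render the resulting CIImage with a Metal-backed CIContext (CIContext(mtlDevice:)) and flatten the accumulated image into a bitmap periodically; otherwise the filter graph keeps growing with every stamp.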

How to detect text in a photo

I am researching the best way to detect text in a photo using open-source libraries.
I think the standard way is as follows (note: steps 1-3 use OpenCV, step 4 uses Tesseract):
1) Detect the outline of the document
2) Transform the document so it's flat and cropped, using that outline
3) Make the background of the document white, using a filter
4) Feed the resulting image to Tesseract
Is this the optimal process, or is there a better way or better tools?
Also, what happens in the case where the photo doesn't have a document outline (it's possible that steps 1 & 2 are redundant)?
Is there any way to automatically detect the document orientation (i.e. portrait / landscape)?
I think your process is fine. I've used a similar process for an Android project.
I think that the only way you can discover whether a document is portrait or landscape is to reason about the lengths of the sides of the bounding box of your outline.
I don't think there's an automatic way to do this; maybe you can find the most external contour that can be approximated with a 4-segment polyline (all doable in OpenCV). To get this you'll have to work with the contour hierarchy and contour approximation (see cv2.approxPolyDP).
This is how I would go about automatic outline detection. As I said, the rest of your algorithm seems just fine to me.
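For the orientation part, a minimal Swift sketch of that bounding-box check; the corner order is an assumption about what your outline step returns:

```swift
import CoreGraphics

enum DocumentOrientation { case portrait, landscape }

private func sideLength(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
    let dx = b.x - a.x
    let dy = b.y - a.y
    return (dx * dx + dy * dy).squareRoot()
}

// Compare the average lengths of the "horizontal" and "vertical" sides of the
// detected quadrilateral, as suggested above.
func orientation(topLeft: CGPoint, topRight: CGPoint,
                 bottomRight: CGPoint, bottomLeft: CGPoint) -> DocumentOrientation {
    let width = (sideLength(topLeft, topRight) + sideLength(bottomLeft, bottomRight)) / 2
    let height = (sideLength(topLeft, bottomLeft) + sideLength(topRight, bottomRight)) / 2
    return width > height ? .landscape : .portrait
}
```

Note that this only tells you whether the page is wider than tall; it cannot distinguish upright from upside-down text. Tesseract's orientation and script detection can help with that part.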
PS. I'll leave my Android project's GitHub link. I don't know if it will be useful to you, but there I specify the outline by dragging some handles, then transform the image and feed it to Tesseract, using Java and OpenCV. Yeah, it's a very bad idea to do that on the main thread of an Android app, and yeah, the app is not finished. I just wanted to experiment with OCR, so I didn't care much about performance or usability, since this wasn't intended for real use, just for studying.
Look up the stroke width transform.
What this does is detect edges which have more or less the same width with respect to their opposite edge, so it picks up things like drainpipes (which can be eliminated in a later pass) but also the majority of text. Whilst conceptually it's similar to a distance transform, the published method uses rather ad hoc normal-projection methods and Canny edge detection.

How to implement Megatextures

I am interested in roughly how megatextures are/could be implemented on iOS.
In particular, I am making a 2D platformer with a large (non-tiled) background, and I would like to have one (precalculated, unreasonably large) image that is mapped to the background. One option I have gone with is to chop the precalculated image into tiles and load/unload them in the background.
I am, however, curious about megatextures. It would be far more convenient to map these all to one surface. Are megatextures simply another way of phrasing what I am doing right now, or is something more cunning going on? Is there one super-large texture on the graphics card with multiple glTexSubImage2D calls going on?
Megatexture is a well-developed and advanced implementation of the clip-mapping technique: http://en.wikipedia.org/wiki/Clipmap
So yes, basically it is a continuous background loading of content to be displayed and unloading of currently unused content.
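A minimal sketch of the streaming part, from Swift with OpenGL ES: one resident texture acts as a rolling cache, and only the tiles that come into view are uploaded with glTexSubImage2D. `loadTilePixels` is a hypothetical loader for your pre-chopped tiles:

```swift
import OpenGLES

// Overwrite one tile-sized region of an already-allocated texture with fresh
// pixels, instead of re-uploading the whole background every frame.
// `loadTilePixels` should return tileSize * tileSize RGBA bytes for the tile
// at (tileX, tileY) of the huge source image.
func streamTile(into texture: GLuint,
                tileX: Int, tileY: Int, tileSize: Int,
                loadTilePixels: (Int, Int) -> [UInt8]) {
    let pixels = loadTilePixels(tileX, tileY)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    pixels.withUnsafeBytes { buffer in
        glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0,
                        GLint(tileX * tileSize), GLint(tileY * tileSize),
                        GLsizei(tileSize), GLsizei(tileSize),
                        GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE),
                        buffer.baseAddress)
    }
}
```

A real clipmap additionally wraps the offsets toroidally (so only the strips that scrolled into view are touched) and remaps texture coordinates in the shader, but the upload mechanism is the same.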

iOS - Interface design, images or custom drawing?

I've been looking at a lot of iOS user interfaces that have been customized. I wonder, is it better to customize the UI using images or using libraries like Core Graphics and Quartz, or is it a per-case decision, as in using libraries for some elements and images for others?
It is very hard to guess your particular situation, but I can say that iOS gives us a lot of flexibility to build any custom interface. I would use:
images for complicated graphic elements, buttons, icons, arrows, etc.
images + stretching to get complicated backgrounds/elements
custom drawing for anything made of lines, ellipses, squares, linear and/or circular gradients, simple image preprocessing, etc.
The key idea is to find a balance between memory usage and processing time. Note: in my experience, interfaces based on images created by a professional designer look awesome.
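To make the image-stretching and custom-drawing options concrete, a small sketch of each in Swift. The asset name is a placeholder, and UIGraphicsImageRenderer is simply a modern convenience over the Quartz drawing the other answers discuss:

```swift
import UIKit

// 1) "Images + stretching": a small PNG with fixed corners, stretched to any
//    size. The asset name is just a placeholder.
func stretchableBackground() -> UIImage? {
    let insets = UIEdgeInsets(top: 12, left: 12, bottom: 12, right: 12)
    return UIImage(named: "panel_background")?
        .resizableImage(withCapInsets: insets, resizingMode: .stretch)
}

// 2) "Custom drawing": a simple linear gradient rendered with Core Graphics;
//    it scales to any size with no stored asset and no scaling artifacts.
func gradientImage(size: CGSize, from top: UIColor, to bottom: UIColor) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        let colors = [top.cgColor, bottom.cgColor] as CFArray
        let space = CGColorSpaceCreateDeviceRGB()
        guard let gradient = CGGradient(colorsSpace: space, colors: colors,
                                        locations: [0, 1]) else { return }
        context.cgContext.drawLinearGradient(
            gradient,
            start: .zero,
            end: CGPoint(x: 0, y: size.height),
            options: [])
    }
}
```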
Case-by-case basis. Images can be drawn more quickly but use more memory; custom drawing with Core Graphics (Quartz) uses less memory but takes more time.
Case by case. If you want a lot of complex graphics that aren't lines and don't change much, use images. If you just need lines/gradients, or if you want things to move and morph, you'll need to use Quartz.
It depends on you, as well. Would you rather write Quartz code for an hour and debug it, or would you rather spend an hour in Photoshop? How fast are you at Photoshop? Do you already know Quartz?
It depends on a lot of things, so "case-by-case".
Determine the complexity of each approach: non-trivial icons are a good example of where an image makes sense, while large gradients are a good case for drawing. Drawing can take some time and experience to get right compared to graphic assets, but you can reuse that implementation later and, in many cases, use less memory (images can also use less memory, depending on what you're drawing). Complex static images can take time to render if drawn, so there are a number of things to consider in order to achieve the best balance. Using the gradient vs. image example, quality and time are also factors: resizing or scaling a simple image can take a lot of CPU or produce artifacts that a rendered gradient would not have. Much of it comes down to experience, knowing the implementations you use well, and a lot of sampling and profiling to determine what is simple, what is complex, what consumes a lot of memory, and so on.

Making a "piece of paper with text on it" in OpenGL (Specifically on iOS 5)

I've never done OpenGL, but I'm looking for some pointers on this particular question for an AR app I'm practicing with.
I'd like to make an app with a "flat rectangle" along with text written on the surface of the rectangle. Visually, I'm imagining something along the lines of a piece of paper with text written on it. Each time the app starts, the text would be something different (the text is pulled from a plist file).
The user would be able to view the paper from all sides, much as if there was a piece of paper hanging in front of him.
Is this trivial to do in OpenGL? How could I get started?
Sorry for the really open-ended question, but I wanted to get a feel for how this kind of thing is done.
Looking at the OpenGL template source code in the Xcode sample projects, I see that there is a big array of vertices. I presume that to create a "flat" rectangle, I'd essentially just have to set the z coordinates to zero. And then the dynamic text that will attach to the surface of the flat rectangle... I don't have any idea how to do that.
This question is hard to answer unambiguously. In general, this is trivial, but then again it is not.
Drawing a "flat rectangle with something on it" is a couple of API calls, as simple as it can get. Drawing text in OpenGL in an efficient way, and high quality, and without big preprocessing is an entirely different story.
What I would do is render text using whatever the "normal system-supported" way is under iOS (just like you would draw in any window, I wouldn't know this specific detail), but draw into a bitmap rather than on the screen. This should be supported, pretty much every OS has supported this for at least 10-15 years. Then turn this bitmap into a texture, bind it, and draw your trivial flat quad with OpenGL (set up a vertex buffer with 4 vertices, each vertex a texture coordinate, and draw two triangles - as easy as it gets).
The huge advantage of that is that you get to use the installed system fonts (or any fonts available), you don't need to generate a bitmap font and don't need to think about really ugly things such as hinting and proper spacing, and it's much easier to mix different text styles, etc. OpenGL has built-in support for text too, of course, but it is not terribly efficient or nice either. If the text does not change every millisecond, it's really best to render it using the standard renderer that the operating system provides (yes, that probably won't be hardware accelerated, but so what... since the user must read the text, it likely won't change every millisecond).
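A sketch of that bitmap-to-texture path in Swift. It uses modern UIKit conveniences (on iOS 5 you would reach for UIGraphicsBeginImageContextWithOptions instead), and the font, size, and alpha handling are deliberately simplified:

```swift
import UIKit
import OpenGLES

// Let UIKit/Core Graphics rasterize the string into a bitmap, then hand the
// pixels to OpenGL as a texture to map onto the paper quad.
func makeTextTexture(text: String, size: CGSize) -> GLuint {
    // 1) Render the text into an offscreen bitmap with the system renderer.
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    let image = renderer.image { context in
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 24),
            .foregroundColor: UIColor.black
        ]
        (text as NSString).draw(in: CGRect(origin: .zero, size: size).insetBy(dx: 8, dy: 8),
                                withAttributes: attributes)
    }

    // 2) Pull raw RGBA pixels out of the rendered image.
    guard let cgImage = image.cgImage else { return 0 }
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    pixels.withUnsafeMutableBytes { buffer in
        if let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                               bitsPerComponent: 8, bytesPerRow: width * 4,
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) {
            ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        }
    }

    // 3) Upload the bitmap as a texture; bind it when drawing the textured quad.
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    return texture
}
```

Bind the returned texture when drawing the two-triangle quad, and regenerate it only when the text actually changes.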
Now it gets more complicated if your "piece of paper" should bend and twist too, or do a page-peel effect rather than being just a flat rectangle. In that case you need to tessellate it, which can be harder than it sounds, too. Not all tessellations look good for all bends/twists, or they do but don't have the optimal (read: minimum) number of vertices.
There is an article on "page peel" and such tessellation in one of the GPU Gems or GPU Pro books, let me search...
There: Andreas Bizzotto: "A Shader-Based eBook Reader - Page peeling effect", GPU Pro2 pp. 278-299
Maybe you can get hold of a copy or are lucky enough to find it on Google Books or something.
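If it helps as a starting point for the bending case, here is a small, self-contained sketch that just subdivides the paper quad into a regular grid (positions and UVs in [0, 1], two triangles per cell) so a vertex shader can later bend or peel it. The names are made up for illustration:

```swift
// CPU-side generation of a tessellated quad, independent of any particular
// OpenGL version; upload positions/texCoords/indices to buffers as usual.
struct PaperMesh {
    var positions: [Float] = []   // x, y pairs in [0, 1]
    var texCoords: [Float] = []   // same layout, reused as UVs
    var indices: [UInt16] = []
}

func makePaperMesh(columns: Int, rows: Int) -> PaperMesh {
    var mesh = PaperMesh()
    for row in 0...rows {
        for column in 0...columns {
            let u = Float(column) / Float(columns)
            let v = Float(row) / Float(rows)
            mesh.positions += [u, v]
            mesh.texCoords += [u, v]
        }
    }
    let stride = columns + 1
    for row in 0..<rows {
        for column in 0..<columns {
            let i = UInt16(row * stride + column)
            let right = i + 1
            let below = i + UInt16(stride)
            let belowRight = below + 1
            // Two triangles per grid cell.
            mesh.indices += [i, below, right, right, below, belowRight]
        }
    }
    return mesh
}
```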
