Japanese Woodblock Effect using CoreImage or GPUImage - iOS

I am trying to create an app that simulates a woodblock print effect similar to the app Moku Hanga. I have tried many combinations of the built-in CoreImage and GPUImage filters but have not had success. I don't have any experience with OpenGL and GLSL, but I understand that it is possible to write custom CoreImage kernels in iOS 8 and custom fragment shaders in GPUImage. I am learning more about the iOS graphics pipeline and OpenGL ES shaders, but I'll still need to understand more about image manipulation before I can mimic this effect.
Does anyone have recommendations on how I could simulate the Moku Hanga effect using one of these frameworks or approaches (filter composition or custom shader)?
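For what it's worth, here is a minimal sketch of the iOS 8 custom-kernel route using a Core Image color kernel. Posterization is only an assumption about one building block of a Moku Hanga-style look; edge/outline and paper-texture passes would still have to be composed on top of it:

```swift
import CoreImage

// Flatten colors into a few ink-like levels with a custom CIColorKernel
// (iOS 8+). The kernel source and the default level count are illustrative.
let posterizeSource = """
kernel vec4 posterize(__sample s, float levels) {
    vec3 flattened = floor(s.rgb * levels + 0.5) / levels;
    return vec4(flattened, s.a);
}
"""

func woodblockFlatten(_ image: CIImage, levels: CGFloat = 4) -> CIImage? {
    guard let kernel = CIColorKernel(source: posterizeSource) else { return nil }
    return kernel.apply(extent: image.extent, arguments: [image, levels])
}
```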

Related

How to use the stencil and cover method to draw SVGs with Metal on iOS?

I did some research on how to render vector graphics with Metal, but unfortunately I couldn't find any supporting resources. There is, however, an OpenGL approach published by NVIDIA for rendering SVGs using the stencil and cover method; read more here: http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/opengl/gpupathrender.pdf
I am wondering if someone could help me work out the possibilities of rendering vector graphics with this approach in Metal.
There is no easy way to implement something like this, because Metal has no built-in support for vector graphics. You might have to calculate everything on the CPU and rasterize the result yourself.
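That said, the two depth/stencil states at the heart of the paper's stencil-and-cover passes do map onto Metal's stencil API. A minimal sketch, assuming you tessellate each path into triangles on the CPU, disable color writes on the fill pass's pipeline state, and set the encoder's stencil reference value to 0:

```swift
import Metal

// Pass 1 ("stencil"): draw the path's triangles, inverting the stencil buffer.
// Overlapping regions cancel out, which yields the even-odd fill rule.
// Pass 2 ("cover"): draw the path's bounding quad with the real shading,
// keeping only pixels whose stencil differs from the reference (0), and
// zeroing the stencil so the next path starts clean.
func makeStencilCoverStates(device: MTLDevice)
        -> (fill: MTLDepthStencilState, cover: MTLDepthStencilState)? {
    let fillStencil = MTLStencilDescriptor()
    fillStencil.stencilCompareFunction = .always
    fillStencil.depthStencilPassOperation = .invert
    let fillDesc = MTLDepthStencilDescriptor()
    fillDesc.frontFaceStencil = fillStencil
    fillDesc.backFaceStencil = fillStencil

    let coverStencil = MTLStencilDescriptor()
    coverStencil.stencilCompareFunction = .notEqual
    coverStencil.depthStencilPassOperation = .zero
    let coverDesc = MTLDepthStencilDescriptor()
    coverDesc.frontFaceStencil = coverStencil
    coverDesc.backFaceStencil = coverStencil

    guard let fill = device.makeDepthStencilState(descriptor: fillDesc),
          let cover = device.makeDepthStencilState(descriptor: coverDesc)
    else { return nil }
    return (fill, cover)
}
```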

How to make custom camera lens effects in iOS

I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen some custom libraries like
https://github.com/BradLarson/GPUImage
but cannot find any camera lens customization example.
Any kind of suggestion would be helpful.
Thanks in advance
You can do it with a custom shader written in OpenGL ES (or Metal, which is iOS-only), then apply your shader to do interesting stuff like the images in the link above.
I suggest you take a look at how to use the OpenGL ES framework in iOS.
Basically the flow would look like this:

1. Use whatever framework to capture an image (even in real time).
2. Use some framework to modify the image. (The magic occurs here.)
3. Use another framework to present the image.

You should learn how to obtain an OpenGL context, draw an image onto it, write a custom shader, apply the shader, and get the output in order to "distort the image". Honestly, the hardest part is inventing the "effect" in your mind and describing it with a formula; a sketch of step 2 follows below.
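For step 2, you don't even need raw OpenGL on iOS 8+: Core Image warp kernels accept GLSL-like source. A minimal sketch of a lens-style bulge; the kernel math and the strength/radius values are my own illustrative choices, not taken from the linked app:

```swift
import CoreImage

// A warp kernel returns, for each output pixel, the source coordinate to
// sample; scaling the offset from a center point fakes a lens bulge.
let bulgeSource = """
kernel vec2 bulge(vec2 center, float radius, float strength) {
    vec2 d = destCoord() - center;
    float t = clamp(length(d) / radius, 0.0, 1.0);
    float f = 1.0 - strength * (1.0 - t) * (1.0 - t);
    return center + d * f;
}
"""

func lensEffect(_ image: CIImage, center: CGPoint, radius: CGFloat) -> CIImage? {
    guard let kernel = CIWarpKernel(source: bulgeSource) else { return nil }
    return kernel.apply(extent: image.extent,
                        roiCallback: { _, rect in rect.insetBy(dx: -radius, dy: -radius) },
                        image: image,
                        arguments: [CIVector(cgPoint: center), radius, 0.5])
}
```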
This is quite similar to the Photoshop mesh warp (Edit -> Transform -> Warp). Basically you treat your image as a texture and then render it onto a mesh (a Bézier patch): a grid that has been distorted into Bézier curves, while the texture coordinates are left as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well (see the sketch below).
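A minimal SceneKit sketch of that mesh-warp idea: positions get displaced while texture coordinates stay on the regular grid, so the image "pulls" toward the moved vertices. The displace closure is a stand-in for your Bézier-patch evaluation:

```swift
import SceneKit

func warpedGrid(cols: Int, rows: Int,
                displace: (Float, Float) -> SCNVector3) -> SCNGeometry {
    var positions: [SCNVector3] = []
    var uvs: [CGPoint] = []
    var indices: [Int32] = []
    for r in 0...rows {
        for c in 0...cols {
            let u = Float(c) / Float(cols)
            let v = Float(r) / Float(rows)
            positions.append(displace(u, v))                   // warped position
            uvs.append(CGPoint(x: CGFloat(u), y: CGFloat(v)))  // undistorted UV
        }
    }
    let rowStride = Int32(cols + 1)
    for r in 0..<rows {
        for c in 0..<cols {
            let i = Int32(r) * rowStride + Int32(c)            // two triangles per cell
            indices += [i, i + 1, i + rowStride,
                        i + 1, i + rowStride + 1, i + rowStride]
        }
    }
    return SCNGeometry(sources: [SCNGeometrySource(vertices: positions),
                                 SCNGeometrySource(textureCoordinates: uvs)],
                       elements: [SCNGeometryElement(indices: indices,
                                                     primitiveType: .triangles)])
}
```

Wrap the geometry in an SCNNode and set your image as the material's diffuse contents; the undistorted UVs do the pulling.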
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.
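Core Image's face detector is the piece that supplies those positions; `imageURL` below is a placeholder for your input:

```swift
import CoreImage

// Print eye/mouth positions that could drive the warp mesh's control nodes.
func faceLandmarks(at imageURL: URL) {
    guard let image = CIImage(contentsOf: imageURL),
          let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                                    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    else { return }
    for case let face as CIFaceFeature in detector.features(in: image) {
        if face.hasLeftEyePosition  { print("left eye:", face.leftEyePosition) }
        if face.hasRightEyePosition { print("right eye:", face.rightEyePosition) }
        if face.hasMouthPosition    { print("mouth:", face.mouthPosition) }
    }
}
```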

Developing Shaders With SpriteKit

I've read that one of the downfalls of SpriteKit is that you're unable to develop shaders if you use it.
However, I read a post here on SO that suggests otherwise:
How to apply full-screen SKEffectNode for post-processing in SpriteKit
Can you develop your own shaders if you decide to use SpriteKit?
Thanks
It is not supported in iOS 7, but iOS 8 will support custom shaders. For more information, view the pre-release documentation of SKShader.
An SKShader object holds a custom OpenGL ES fragment shader. Shader objects are used to customize the drawing behavior of many different kinds of nodes in Sprite Kit.
By contrast, the iOS 7 documentation says that Sprite Kit does not provide an interface for using custom OpenGL shaders; the SKEffectNode class lets you use Core Image filters to post-process parts of a Sprite Kit scene instead. Core Image provides a number of built-in filters that might do some of what you're after, and on OS X you can create custom filter kernels using a language similar to GLSL.
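Once you're on iOS 8, attaching a custom fragment shader is a couple of lines per node. A minimal sketch; the shader body (a simple red-channel boost) and the asset name are placeholders, while u_texture and v_tex_coord are SpriteKit's built-in shader symbols:

```swift
import SpriteKit

let shader = SKShader(source: """
void main() {
    vec4 color = texture2D(u_texture, v_tex_coord);   // the node's own texture
    gl_FragColor = vec4(min(color.r * 1.5, 1.0), color.g, color.b, color.a);
}
""")
let sprite = SKSpriteNode(imageNamed: "example")
sprite.shader = shader   // SpriteKit runs this per fragment when drawing the node
```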

OpenGL ES 2.0 plane morph/distortion effect GPUImage iOS

I was playing a bit with the awesome GPUImage framework and was able to reproduce some "convex"-like effects with fragment shaders.
However, I'm wondering if it's possible to get some more complex plane curving in 3D using GPUImage or any other OpenGL render-to-texture approach.
The effect I'm trying to achieve looks like this one: is there any chance I can get something similar using the depth buffer and a vertex shader, or do I just need to develop a more sophisticated fragment shader that emulates the Z coordinate?
This is what I get now using only a fragment shader and some periodic bulging.
Thanks
Well, another thought: maybe it's possible to prototype a curved surface in some 3D modeling app and somehow map the texture onto it?
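Regarding that last thought: you don't necessarily need a modeling app. A densely subdivided plane plus a vertex-stage displacement gives you real Z (and correct depth testing) instead of a fragment-shader fake. A sketch using SceneKit rather than GPUImage, with arbitrary wave constants and a placeholder image name:

```swift
import UIKit
import SceneKit

let plane = SCNPlane(width: 4, height: 3)
plane.widthSegmentCount = 64      // enough vertices for a smooth curve
plane.heightSegmentCount = 64
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "photo")
material.shaderModifiers = [
    // Periodic bulge along X, applied in the vertex stage.
    .geometry: "_geometry.position.z += 0.3 * sin(_geometry.position.x * 3.0 + u_time);"
]
plane.materials = [material]
```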

Photo booth in iOS. Using OpenCV or OpenGL ES?

I want to make an application that filters video like Apple's Photo Booth app.
How can I make that?
Using OpenCV, OpenGL ES, or anything else?
OpenCV and OpenGL have very different purposes:
OpenCV is a cross-platform computer vision library. It allows you to easily work with image and video files and provides several tools and methods for filtering and for many other image processing techniques.
OpenGL is a cross-platform API to produce 2D/3D computer graphics. It is used to draw complex three-dimensional scenes from simple primitives.
If you want to perform cool effects on images, OpenCV is the way to go, since it provides tools/effects that can easily be combined to achieve the result you are looking for. And this approach doesn't stop you from processing the image with OpenCV and then rendering the result in an OpenGL window (if you have to). Remember, they have different purposes, and every now and then somebody uses them together.
The point is that the effects you want to apply to the image should be done with OpenCV or another image processing library.
Actually karlphillip, although what you have said is correct, OpenGL can also be used to perform hardware-accelerated image processing.
Apple even has an OpenGL sample project called GLImageProcessing that does hardware-accelerated brightness, contrast, saturation, hue, and sharpness.
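On iOS specifically there is a third option with the same hardware acceleration: Core Image. A minimal sketch of per-frame, Photo Booth-style filtering; the capture-session setup, delegate wiring, and on-screen rendering are assumed elsewhere, and CIComicEffect (iOS 9+) is just one example of a booth-like built-in filter:

```swift
import AVFoundation
import CoreImage

final class BoothFrameFilter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()   // GPU-backed by default

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
        let filtered = frame.applyingFilter("CIComicEffect", parameters: [:])
        // Hand `filtered` to your preview (e.g. render with `context` into a
        // Metal/GL-backed view); shown here only as a CGImage for brevity.
        _ = context.createCGImage(filtered, from: filtered.extent)
    }
}
```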
