UIImage behind GLKView - iOS

I'm trying to capture a signature from touch input using a GLKView, but I need a UIImage to sit below the signature.
In short: I want to draw lines above a UIImage using a custom GLKView.
The problem is that my lines get drawn below the image every time, no matter whether I set opaque to NO or use insertSubview:belowSubview:.
I also tried working with textures, but I have no idea how to do that.
I'd rather not use a GLKViewController if possible ;)
Thanks in advance!
Update:
I found my problem, and now I get the result I wanted.
I initialize the EAGLContext in the GLKView's constructor, but I had forgotten to assign the context to the view itself:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (context) {
    self.opaque = NO;       // make the view transparent
    self.context = context; // assign the context to the view itself
}
Although setting opaque to NO is not an ideal solution, it is the only efficient one for my task.

There are a couple of approaches worth looking at here.
One is to look at view containment instead of layering — make the GLKView a subview of the view you're drawing a UIImage in.
The other is to draw the image in your GLKView using OpenGL ES. It's a little more work, but not too hard if you look over the documentation and the answers already here on SO. And it has some extra benefits: since both the background image and your drawing are going into the same framebuffer, you can control blending in GL. Here are some tips for getting started:
Use GLKTextureLoader to get your image into an OpenGL ES texture.
Set up a GLKBaseEffect instance for drawing with your texture. Don't forget to tell it to prepareToDraw.
Draw a quad using the texture and effect. This answer has a pretty decent starting point for doing that.
After drawing the background image, draw your signature and it'll be on top of the image. A rough sketch of these steps follows.
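For illustration, here's a minimal sketch of those steps inside glkView:drawInRect: (the backgroundImage property is a placeholder; this assumes the view already has a valid EAGLContext, and in real code you'd load the texture and create the effect once rather than per frame):

// Load the UIImage into an OpenGL ES texture.
NSError *error = nil;
GLKTextureInfo *texture = [GLKTextureLoader textureWithCGImage:self.backgroundImage.CGImage
                                                       options:@{GLKTextureLoaderOriginBottomLeft: @YES}
                                                         error:&error];

// Set up a GLKBaseEffect to draw with that texture.
GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];
effect.texture2d0.name = texture.name;
effect.texture2d0.enabled = GL_TRUE;
[effect prepareToDraw];

// A full-screen quad in normalized device coordinates, interleaved with texture coordinates.
static const GLfloat quad[] = {
    // x,    y,    u,    v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// ...then draw the signature strokes; they land on top of the image.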

Related

Set blendmodes on UIImageViews like Photoshop

I've been trying to apply blend modes to my UIImageViews to replicate a PSD mock-up file (sorry, I can't provide it). The PSD file has 3 layers: a base color with 60% normal blend, an image layer with 55% multiply blend, and a gradient layer with 35% overlay blend.
I've tried several tutorials from around the internet but still could not get the colors/image to match exactly.
One thing I noticed is that the colors on my iPhone look different from my Mac's screen.
I found the Quartz 2D documentation, which I think is the right way to go, but I could not find any sample/tutorial about using blend modes with images.
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBIJEFG
Can anyone provide a good tutorial that does the same as the documentation, so I can at least experiment, in case nobody has a straightforward answer to my question?
This question was asked ages ago, but if someone is looking for the same answer, you can use compositingFilter on the backing layer of your view to get what you want:
overlayView.layer.compositingFilter = "multiplyBlendMode"
Suggested by @SeanA, here: https://stackoverflow.com/a/46676507/1147286
Filters
A complete list of filters is here.
Or you can print out the compositing filter types with
print("Filters:\n",CIFilter.filterNames(inCategory: kCICategoryCompositeOperation))
This is now built into iOS. Historic answer:
You don't want to use UIImageView for this since it doesn't really support blend modes.
Instead, the way to go would be to create a UIView subclass (which will act just like UIImageView). So make a UIView subclass called something like BlendedImageView. It should have an image property, and then in the drawRect: method you can do this:
- (void)drawRect:(CGRect)rect
{
    // Calculate a rect based on self.contentMode to replicate UIImageView behaviour.
    // For example, for UIViewContentModeCenter, center the image's natural size in
    // the view's bounds (the rect may extend outside 'rect' if the image is larger).
    CGSize imageSize = self.image.size;
    CGRect actualRect = CGRectMake((self.bounds.size.width - imageSize.width) / 2.0,
                                   (self.bounds.size.height - imageSize.height) / 2.0,
                                   imageSize.width,
                                   imageSize.height);
    [self.image drawInRect:actualRect blendMode:kCGBlendModeOverlay alpha:0.5];
}
Adjust the alpha and blend mode to fit your needs. Good practice would be to give your BlendedImageView its own blendMode property.
In your case, you may need a slightly more advanced view that draws all 3 of your layers using the same code as above; a sketch follows.
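As a hedged sketch of that three-layer idea, matching the PSD described in the question (the baseColor, image, and gradientImage properties are hypothetical):

- (void)drawRect:(CGRect)rect
{
    // Layer 1: base color at 60% normal blend.
    [[self.baseColor colorWithAlphaComponent:0.6] setFill];
    UIRectFillUsingBlendMode(self.bounds, kCGBlendModeNormal);

    // Layer 2: image at 55% multiply blend.
    [self.image drawInRect:self.bounds blendMode:kCGBlendModeMultiply alpha:0.55];

    // Layer 3: gradient at 35% overlay blend.
    [self.gradientImage drawInRect:self.bounds blendMode:kCGBlendModeOverlay alpha:0.35];
}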

OpenGL transparent background Xamarin

I'm building an application that streams the camera feed to the screen. Now I want to add an OpenGL element to the screen. My OpenGL element is an arrow that I create in another class called TriangleView, which inherits from GLKView. My ViewController is a GLKViewController. I add my TriangleView like this:
View = triangleView = new TriangleView (View.Frame);
That draws my GLKView, but I would like this GLKView to have a transparent background so I can see my camera stream behind it. I'm a beginner in OpenGL, so I don't really know how to do that. I found some posts that say to set an alpha with GL.ClearColor (1.0f, 0f, 0f, 0f); in the Draw function of my GLKView, but this doesn't work.
Any help or comment is welcome.
Thanks in advance!
You may also need to set the GLKView's opaque property to NO so that alpha blending can work; a sketch of the relevant setup follows.
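In Objective-C terms (the Xamarin/C# properties mirror these), a minimal hedged sketch of a transparent GLKView, assuming it sits over the camera preview:

- (void)configureTransparentGLView:(GLKView *)glView
{
    glView.opaque = NO;                            // let the camera preview show through
    glView.backgroundColor = [UIColor clearColor];
    glView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888; // keep an alpha channel
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // clear to transparent, not an opaque color
    glClear(GL_COLOR_BUFFER_BIT);
    // ...draw the arrow here...
}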

Performance UIImageView vs UIView with QuartzCore

So I just discovered QuartzCore, and I am now considering replacing a UIImageView containing a bitmap with a UIView subclass that does things like
CGContextFillEllipseInRect(contextRef, rect);
They would look exactly the same: the bitmap is just a little filled circle.
The very view I'm replacing is playing an important role in my app: it's being dragged around a lot.
Question: performance-wise, should I bother? I can imagine the vector circle being recalculated all the time while the bitmap is simply buffered, or, conversely, the vector being easier to digest than a bitmap.
Can anyone advise?
Thanks ahead!
All UIViews on iOS are layer-backed, so drawRect: will only be called once and you will draw into the CALayer backing the view. You can make it draw again by calling setNeedsDisplay. When you drag the view around, it renders from the layer backing. A UIImageView is also layer-backed, so the end result should be two layer-backed views. The one place where you may see a difference is in low-memory situations when the view is not visible (though I am not sure).
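For reference, a minimal sketch of the UIView subclass under discussion, drawing the filled circle once into its backing layer (the class name and color are illustrative):

@interface CircleView : UIView
@end

@implementation CircleView

- (void)drawRect:(CGRect)rect
{
    // Called once (or after setNeedsDisplay); the result is cached in the backing
    // CALayer, so dragging the view around does not re-run this code.
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(contextRef, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(contextRef, self.bounds);
}

@end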

On iPhone and iPad, can we draw anything without using drawRect?

It seems that the standard way to draw dots, lines, circles, and Bezier paths is to draw them inside drawRect:. We don't call drawRect: directly; we let iOS call it, and we can use [self setNeedsDisplay] to tell iOS to call drawRect: when it can...
It also seems that we cannot rely on
[self setClearsContextBeforeDrawing: NO];
to not clear the background of the view before calling drawRect. Some details are in this question: UIView: how to do non-destructive drawing?
What about drawing directly on the screen, without putting that code in drawRect:? For example, could ViewController.m contain code that directly draws dots, lines, and circles on the screen? Is that possible?
Without having to drop into OpenGL, the closest you can get to working around the erasure is to capture the context as an image using something like CGBitmapContextCreateImage. From there, you can keep the image in memory (or write it to disk if necessary), and then when you redraw the view, you first draw this saved image into the context and then overlay it with the new content. A sketch follows.
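A hedged sketch of that accumulate-and-redraw pattern using UIKit's image-context helpers (savedImage, lastPoint, and currentPoint are hypothetical properties):

- (void)drawRect:(CGRect)rect
{
    // 1. Restore everything drawn so far, so the cleared background doesn't erase it.
    [self.savedImage drawInRect:self.bounds];

    // 2. Draw only the new content on top (a single stroke here as a placeholder).
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextMoveToPoint(context, self.lastPoint.x, self.lastPoint.y);
    CGContextAddLineToPoint(context, self.currentPoint.x, self.currentPoint.y);
    CGContextStrokePath(context);
}

// Elsewhere (e.g. when a stroke ends), snapshot the accumulated drawing:
- (void)snapshotDrawing
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    self.savedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}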

iOS : need inputs in developing efficient ( performance wise ) drawing app

I have an app in which one can draw basic shapes like rectangles, ellipses, circles, text, etc.
I also allow free-form drawing on the canvas, which is stored as a set of points.
A user can also resize and move these objects by operating on the selection handles that appear when an object is selected.
In addition the user should be able to zoom and pan the canvas.
I need some inputs on how to efficiently implement this drawing functionality.
I have following things in mind -
Use UIView's setNeedsDisplayInRect: and drawRect:
Have a UIView for the main canvas; for each inserted object, invalidate the corresponding rect and, in the UIView's drawRect:, redraw all the objects that intersect that rect.
Have a UIView and use CALayer ?
Everyone keeps mentioning CALayer, but I don't know much about it. Before I venture into this, I wanted quick input on whether this route is worth taking.
For example: https://developer.apple.com/library/ios/#qa/qa1708/_index.html
Have a UIImageView as the canvas, and when drawing each object:
i) Draw the object into an offscreen CGContext: create a new context using UIGraphicsBeginImageContext, draw the shape, extract the image out of this context, and use that as the source of the UIImageView's image property. But here, how do I invalidate only a part of the UIImageView so that only that area gets refreshed?
Could you please suggest what is the best approach?
Is there any other efficient way to get this done?
Thanks.
Using a UIImage is more efficient for rendering multiple objects, but using a CALayer is more efficient when moving and modifying a single object because you don't have to redraw the other objects. So I think the best approach is to use a UIImage for general drawing and a CALayer for the shape that is being modified. In other words:
use a CALayer to draw the shape being added or modified, but don't draw it on the UIImage
use a UIImage to draw the other shapes
OpenGL is still the most efficient solution of all, but don't bother with it unless you have a lot of objects to draw. A sketch of the hybrid approach follows.
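A hedged sketch of that hybrid UIImage-plus-CALayer approach (the Shape class, its drawInContext: method, and the canvasImageView/activeShapeLayer properties are all hypothetical):

// While a shape is being moved or resized, update only its CALayer; the
// committed shapes baked into the UIImage are untouched.
- (void)updateActiveShape:(Shape *)shape
{
    self.activeShapeLayer.frame = shape.frame;
    [self.activeShapeLayer setNeedsDisplay];
}

// When editing ends, bake the shape into the canvas image and drop the layer.
- (void)commitShape:(Shape *)shape
{
    UIGraphicsBeginImageContextWithOptions(self.canvasImageView.bounds.size, NO, 0.0);
    [self.canvasImageView.image drawInRect:self.canvasImageView.bounds];
    [shape drawInContext:UIGraphicsGetCurrentContext()];
    self.canvasImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.activeShapeLayer removeFromSuperlayer];
}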
If you want to draw polygons, you'll have to use the Quartz framework and base your drawing methods on CALayer. It doesn't really matter which view you put your CALayers in, UIImageView or UIView; I'd say UIView, since you won't need UIImageView's properties or methods for drawing.
