iOS Swift - How to draw a simple 2D image with Metal?

In my project I need to draw a 2D image in real time, corresponding to UIGestureRecognizer updates.
The image would be the same UIImage, drawn at various positions.
let arrayOfPositions = [pos1,pos2,pos3]
I also need to transfer the result rendered into the MetalLayer into a single UIImage; the result image will have the same size as the device's screen.
Something similar to
let resultImage = UIGraphicsGetImageFromCurrentImageContext()
I'm new to Metal, and after watching Realm's video and reading Apple's documentation my sanity descended into chaos. Most tutorials focus on 3D rendering, which is beyond my needs (and my knowledge).
Would anyone share some simple code showing how to draw a UIImage into a MetalLayer and then convert the whole thing into a single UIImage as a result? Thanks.
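For the second half of the question (getting a single UIImage out of what Metal rendered), a common pattern is to read the bytes back from the rendered texture and wrap them in a CGImage. The sketch below is a minimal, non-authoritative example: it assumes a .bgra8Unorm texture with CPU-readable (.shared) storage, and the helper name makeUIImage(from:) is made up for illustration.
import Metal
import UIKit

// Minimal sketch: copy a CPU-readable, .bgra8Unorm MTLTexture into a UIImage.
// Assumes premultiplied BGRA contents, which is the usual CAMetalLayer layout.
func makeUIImage(from texture: MTLTexture) -> UIImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4
    var bytes = [UInt8](repeating: 0, count: bytesPerRow * height)

    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue
                                  | CGBitmapInfo.byteOrder32Little.rawValue)

    let cgImage: CGImage? = bytes.withUnsafeMutableBytes { buffer in
        guard let base = buffer.baseAddress else { return nil }
        // Pull the pixel data out of the texture...
        texture.getBytes(base,
                         bytesPerRow: bytesPerRow,
                         from: MTLRegionMake2D(0, 0, width, height),
                         mipmapLevel: 0)
        // ...and snapshot it through a bitmap context.
        return CGContext(data: base,
                         width: width,
                         height: height,
                         bitsPerComponent: 8,
                         bytesPerRow: bytesPerRow,
                         space: CGColorSpaceCreateDeviceRGB(),
                         bitmapInfo: bitmapInfo.rawValue)?.makeImage()
    }

    return cgImage.map { UIImage(cgImage: $0) }
}
Note that CAMetalLayer.framebufferOnly must be set to false before its drawable textures can be read back this way. For the drawing half, the related answers below cover the Core Graphics side of compositing images into a single UIImage.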

Related

How to merge two views into one single view in iOS Objective-C (as shown in the image)?

I want to make a play button like the one in the following image.
I managed to make something similar to this.
Questions:
In the first image, the circular play button and the rectangular background of the play label are part of a single image or view. I managed to make a similar one using two views, as shown below, using the corner radius property of the view layers. The issue is that when I apply alpha values to both views, one view appears to overlap the other and that area appears darker, so they look like two different views.
How can I resolve this issue?
You have to merge your two images into one. Assuming that you have your two images already created (let's assume .png-s somewhere in your project), I would try something along these lines:
// "Load" original images
let image1 = UIImage(named: "img1.png")
let image2 = UIImage(named: "img2.png")
// Crate size for the resulting image and create context
let size: CGSize = CGSizeMake(100, 100);
UIGraphicsBeginImageContext(size);
// Bottom image - draw in context
image1?.drawInRect(CGRectMake(0,0,size.width,size.height))
// Top image - draw in context
image2?.drawInRect(CGRectMake(0,0,size.width,size.height))
// Create new image with context
let newImage = UIGraphicsGetImageFromCurrentImageContext()
// End context
UIGraphicsEndImageContext()
After that, adjust the alpha of the resulting image. If the original images already have an alpha of less than 1 this might not work, but it's worth checking.
You will therefore have to rearrange your code, as I don't think that views (or CALayers, for that matter) support blend modes in iOS.
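If the goal is a single translucent button rather than two stacked translucent views, one option is to merge the two images at full opacity and fade the merged result instead. A small, hedged sketch (the image names are placeholders, and it assumes iOS 10+ for UIGraphicsImageRenderer):
import UIKit

// Draw both pieces at full opacity into one context, then fade the merged
// image as a whole, so the overlapping area never darkens.
let size = CGSize(width: 100, height: 100)
let renderer = UIGraphicsImageRenderer(size: size)
let merged = renderer.image { _ in
    let rect = CGRect(origin: .zero, size: size)
    UIImage(named: "img1.png")?.draw(in: rect)
    UIImage(named: "img2.png")?.draw(in: rect)
}

let imageView = UIImageView(image: merged)
imageView.alpha = 0.5   // the whole merged shape fades uniformly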

iOS - CoreImage: composite one image onto another image at a specific location

Sorry if I am posting a duplicate question.
I am a newbie at iOS programming with CoreImage. I want to composite an image over another image.
The problem is that the compositing filters from Core Image don't support composition at a specific location; they composite the two images at (0,0) only.
For example, I need to composite an image at location (100,100) of the other image, not at the top-left corner.
Any suggestions would be appreciated.
What you want to do, I believe, is move the image you want to offset to the correct location before applying the composite filter. You can create a CGAffineTransform to do this like so:
CGAffineTransform transform = CGAffineTransformMakeTranslation(100, 100);
CIImage *adjustedImage = [inputImage imageByApplyingTransform:transform];
Alternatively there is a CIAffineTransform filter which does the same job. Once you've got your transformed image, you can comp it with the other image normally and it should appear offset in the output.
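For reference, the same idea in Swift as a minimal sketch (the helper name, the image variables, and the use of composited(over:) for the source-over step are illustrative assumptions; note that Core Image's origin is the bottom-left corner):
import CoreImage
import UIKit

// Shift `top` by (x, y) in the background's coordinate space, then
// source-over composite it onto `background`.
func composite(_ top: CIImage, over background: CIImage, at point: CGPoint) -> CIImage {
    let shifted = top.transformed(by: CGAffineTransform(translationX: point.x, y: point.y))
    return shifted.composited(over: background)
}

// Rendering the result back to a CGImage/UIImage would then go through a CIContext:
// let output = composite(topImage, over: backgroundImage, at: CGPoint(x: 100, y: 100))
// let cgResult = CIContext().createCGImage(output, from: backgroundImage.extent)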

Fill color on a specific portion of an image?

I want to fill a specific color on a specific area of an image.
EX:
In the Joker image above, if I touch the Joker's hair then I want to fill a specific color on the hair, or if I touch the nose then fill a specific color on the nose, etc. I hope you understand what I am trying to say.
After googling, it seems this may be achieved using UIBezierPath or the CGContext reference, but I am very new to these. I tried to read the documentation but did not understand much (it would take more time), and I have a time limit for this project, so I cannot spend more time on it.
I also found that we can use the flood fill algorithm, but I don't know how to use it in my case.
NOTE: I don't want to split the original image into pieces (such as hair, nose, cap, etc.) because that would put many images in the bundle, and I would have to handle them for both normal and Retina devices, so this option is not helpful for me.
So please give me your suggestions, and tell me which is best for me: UIBezierPath or the CGContext reference? How can I fill a color on a specific portion of an image? And can we fill a color inside the black border of an area? I am new to Quartz 2D programming.
Use the GitHub library below; it uses the flood fill algorithm: UIImageScanlineFloodfill
Objective-C description: ObjFloodFill
If you want a detailed explanation of the algorithm: Recursion Explained with the Flood Fill Algorithm (and Zombies and Cats)
A few tutorials in other languages: Lode's Computer Graphics Tutorial: Flood Fill
Rather than attempting to flood fill an area of a raster-based image, a better approach (and a much smaller amount of data) would be to create vector images. Once you have a vector image, you can stroke the outline to draw it uncolored, or you can fill the outline to draw it colored.
I recommend using Core Graphics calls like CGContextStrokePath() and CGContextFillPath() to do the drawing. This will likely look better than flood filling because you will get nicely anti-aliased edges.
Apple has some good documentation on how to draw with Quartz 2D. In particular, the section on Paths is useful for what you're trying to do.
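As a minimal illustration of the stroke-versus-fill idea in Swift (the oval path, class name, and colors are placeholders, not part of the original answer):
import UIKit

// Stroke a region's outline while it is uncolored; fill the same path once
// the user has picked a color for it.
final class RegionView: UIView {
    /// Set when the user taps the region; triggers a fill instead of a stroke.
    var fillColor: UIColor?

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }

        // A placeholder vector region; in practice each region (hair, nose, ...)
        // would be its own path authored to match the artwork.
        let region = UIBezierPath(ovalIn: CGRect(x: 40, y: 40, width: 120, height: 80))
        context.addPath(region.cgPath)

        if let fillColor = fillColor {
            context.setFillColor(fillColor.cgColor)
            context.fillPath()        // anti-aliased fill (colored state)
        } else {
            context.setStrokeColor(UIColor.black.cgColor)
            context.strokePath()      // outline only (uncolored state)
        }
    }
}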
You can clip the context based on the image's alpha transparency.
I have created a quick image with black color and alpha.
Then, using the code below:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext(); // Get the context
    CGContextSetFillColorWithColor(context, [UIColor blueColor].CGColor); // Set the fill color to be blue
    // Flip the context so that the image is not flipped
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Clip the context by a mask created from the image
    UIImage *image = [UIImage imageNamed:@"image.png"];
    CGImageRef cgImage = image.CGImage;
    CGContextClipToMask(context, rect, cgImage);
    // Finally fill the context with the color and mask you set earlier
    CGContextFillRect(context, rect);
}
The result
This is a quick hint of what you can do. However, you now need to convert your image so that the parts you want to remove are alpha transparent.
After a quick search I found these links
How can I change the color 'white' from a UIImage to transparent
How to make one colour transparent in UIImage
If you create your image as an SVG vector-based image, it will be very light (smaller than a PNG or JPG) and really easy to manage via Quartz 2D using Bézier paths. Bézier paths can be filled (white to start with). UIBezierPath has a method (containsPoint:) that helps determine whether a tap landed inside.
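A minimal Swift sketch of that containment test (the regions dictionary and function name are assumptions used for illustration):
import UIKit

// Decide which authored region a tap landed in, so it can be filled.
// `regions` maps a region name (e.g. "hair", "nose") to its UIBezierPath.
func regionName(at tapPoint: CGPoint, in regions: [String: UIBezierPath]) -> String? {
    return regions.first { _, path in path.contains(tapPoint) }?.key
}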

Capturing a preview image with AVCaptureStillImageOutput

Before Stack Overflow members answer with "You shouldn't. It's a privacy violation," let me counter with why there is a legitimate need for this.
I have a scenario where a user can change the camera device by swiping left and right. In order to make this animation not look like absolute crap, I need to grab a freeze frame before starting it.
The only sane answer I have seen is capturing the buffer of AVCaptureVideoDataOutput, which is fine, but now I can't let the user take the video/photo in kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, which is a nightmare to get a CGImage from with CGBitmapContextCreate. See How to convert a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange buffer to UIImage in iOS.
When capturing a still photo, are there any serious quality considerations when using AVCaptureVideoDataOutput instead of AVCaptureStillImageOutput, given that the user will be taking both video and still photos (not just freeze-frame preview stills)? Also, can someone "explain it to me like I'm five" regarding the differences between kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA, besides the fact that one doesn't work on old hardware?
I don't think there is a way to directly capture a preview image using AVFoundation. You could, however, capture the preview layer by doing the following:
UIGraphicsBeginImageContext(previewView.frame.size);
[previewLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here previewLayer is the AVCaptureVideoPreviewLayer added to previewView; image is rendered from this layer and can be used for your animation.
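On the buffer-conversion pain mentioned in the question, a common workaround is to let Core Image interpret the pixel buffer instead of building a bitmap context by hand. A hedged Swift sketch (the helper name and the throwaway CIContext are illustrative; in real code the context should be reused):
import AVFoundation
import CoreImage
import UIKit

// Turn a captured CMSampleBuffer into a UIImage; CIImage understands the
// 420YpCbCr8BiPlanarFullRange layout, so no manual bitmap work is needed.
func image(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext() // reuse a single context in production code
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}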

How to scale down an image in iOS, anti-aliased but not soft?

I have tried the UIImage+Resize category that's popular, and with varying interpolation settings. I have tried scaling via CG methods, and CIFilters. However, I can never get an image downsized that does not either look slightly soft in focus, nor full of jagged artifacts. Is there another solution, or a third party library, which would let me get a very crisp image?
It must be possible on the iPhone, because for instance the Photos app will show a crisp image even when pinching to scale it down.
You said CG, but did not specify your approach.
Using a drawing or bitmap context:
CGContextSetInterpolationQuality(gtx, kCGInterpolationHigh);
CGContextSetShouldAntialias(gtx, true); // the default varies by context type
CGContextDrawImage(gtx, rect, image);
and make sure your views and their layers are not resizing the image again. I've had good results with this. It's possible other views are affecting your view or the context. If it does not look good, try it in isolation to sanity check whether or not something is distorting your view/image.
If you are drawing to a bitmap, then you create the bitmap with the target dimensions, then draw to that.
Ideally, you will maintain the aspect ratio.
Also note that this can be quite CPU intensive -- repeatedly drawing/scaling in HQ will cost a lot of time, so you may want to create a resized copy instead (using CGBitmapContext).
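A minimal Swift sketch of the "resized copy" idea (the helper name is an assumption; it draws once into a context of the target size with high-quality interpolation):
import UIKit

// Draw the source image once into a context of the target size, with
// high-quality interpolation and antialiasing, and keep the result.
func resizedCopy(of image: UIImage, to targetSize: CGSize) -> UIImage {
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = image.scale                 // match the source's scale
    let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
    return renderer.image { context in
        let cg = context.cgContext
        cg.interpolationQuality = .high        // CGContextSetInterpolationQuality
        cg.setShouldAntialias(true)            // CGContextSetShouldAntialias
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}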
Here is the routine that I wrote to do this. There is a bit of soft focus, though depending on how far you are scaling the original image it's not too bad. I'm scaling programmatic screenshot images.
- (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
CGContextSetInterpolationQuality is what you are looking for.
You should try these category additions:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
When an image is scaled down, it is often a good idea to apply some sharpening.
The problem is, Core Image on iOS does not (yet) implement the sharpening filters (CISharpenLuminance, CIUnsharpMask), so you would have to roll your own. Or nag Apple till they implement these filters on iOS, too.
However, Sharpen luminance and Unsharp mask are fairly advanced filters, and in previous projects I have found that even a simple 3x3 kernel would produce clearly visible and satisfactory results.
Hence, if you feel like working at the pixel level, you could get the image data out of a graphics context, bit mask your way to R, G and B values and code graphics like it is 1999. It will be a bit like re-inventing the wheel, though.
Maybe there are some standard graphics libraries around that can do this, too (ImageMagick?)
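On current iOS versions, Core Image does ship a general 3x3 convolution filter (CIConvolution3X3), so the simple-kernel idea above can be expressed without dropping to raw pixels. A hedged sketch, using a standard sharpening kernel chosen here only as an example:
import CoreImage
import UIKit

// Apply a simple 3x3 sharpening kernel with Core Image's CIConvolution3X3.
// The weights below are a common sharpen kernel, not anything from the answer.
func sharpened(_ input: CIImage) -> CIImage? {
    let weights: [CGFloat] = [ 0, -1,  0,
                              -1,  5, -1,
                               0, -1,  0]
    guard let filter = CIFilter(name: "CIConvolution3X3") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(values: weights, count: weights.count), forKey: "inputWeights")
    filter.setValue(0, forKey: "inputBias")
    // Convolution can expand the extent; crop back to the original image.
    return filter.outputImage?.cropped(to: input.extent)
}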
