iPhone image merging - iOS

I want to merge multiple images into a single image on a button click. My problem is: if the user touches an image in a table view, the corresponding image has to be merged with the main image on the next page. How can this be done?
This is the main image on the page, and this is the upper cloth. I have to fix the upper cloth onto the caricature.

Use UIGraphicsBeginImageContext, then call the drawAtPoint: method on each image, then use UIGraphicsGetImageFromCurrentImageContext to get the merged image. Something like this (not checked, written from memory; syntax errors are possible, so correct as needed):
UIGraphicsBeginImageContext(yourFirstImage.size); // context sized to the base image
[yourFirstImage drawAtPoint:CGPointMake(0, 0)];   // draw the base image first
[yourSecondImage drawAtPoint:CGPointMake(0, 0)];  // draw the second image on top
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
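To tie this to the table-view part of the question, a minimal sketch of the selection handler might look like the following (mainImage, mainImageView, and clothImages are hypothetical properties, not from the original post):
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    UIImage *upperCloth = [self.clothImages objectAtIndex:indexPath.row]; // image matching the tapped row
    UIGraphicsBeginImageContext(self.mainImage.size);
    [self.mainImage drawAtPoint:CGPointMake(0, 0)]; // the caricature
    [upperCloth drawAtPoint:CGPointMake(0, 0)];     // adjust the point to position the cloth
    self.mainImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}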

Related

iOS Drawing image with objects (arrows) and saving to disk

I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The three drag-able parts will be UIImages with cap insets to keep the arrows looking normal when stretched. The canvas UIImageView will have the same aspect ratio as the actual image, ensuring no black space is visible.
My question is this: after a drag-able part has been stretched, how do I save the result of the UIImageView (the smaller drag-able part) to disk/memory for later modification of the background image (the background UIImageView canvas)? It is important that the image I use to modify the canvas looks exactly the same as the UIImageView drag-able part that represented it. That means it must retain exactly how the image is displayed, including the cap insets.
I am ignoring rotation of the drag-able parts for now because I suspect that would be the easy part.
Try using this code to get a layer screenshot from a desired view:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);       // context sized to the layer
    [layer renderInContext:UIGraphicsGetCurrentContext()]; // render exactly what is on screen
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
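To then bake a captured part back into the background at the spot where it was dragged, something like this could work (arrowView and backgroundImage are hypothetical names, and it assumes the canvas and the background image share the same coordinate space):
UIImage *arrowImage = [self imageFromLayer:arrowView.layer]; // looks exactly as displayed, cap insets included
UIGraphicsBeginImageContext(backgroundImage.size);
[backgroundImage drawAtPoint:CGPointMake(0, 0)];
[arrowImage drawInRect:arrowView.frame]; // place it where the user dragged it
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();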

iOS - How to properly scale UIButton with varying image sizes (like Facebook News Feed)?

I am attempting to replicate Facebook's News Feed for an app, and will have images that should be displayed. The images will come in a variety of sizes, so how should I scale them to fit the cell without looking weird?
The cell contains a UIButton which then has the image set on it. I am using a button because I want to open the full-size image when it is pressed. If it is better to switch to a UIImageView with a tap gesture, I can do that too; I just thought the button was simpler.
Ideas?
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Scale 0.0 uses the device's screen scale, so the result stays Retina-sharp
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Use this to scale an image to a given size.
I assume that you are going to be using a UITableView to show these images.
This is going to create problems when you scroll the table view, because the images get resized as the rows are loaded; that takes too much processing time and keeps the scrolling from being smooth.
There are two ways around this:
1. Create appropriately sized images when you create the full-sized images, and save them to disk that way. If these are static photos, you would create the thumbnails in an image-editing program (e.g., Photoshop) and include them along with the full-resolution photos. If the photos are downloaded or generated dynamically, you would need to create the smaller version when you save the full-size image. This is actually more complicated than it sounds, and I can provide some code if this is the route you need to go. The key point is that you will not need to resize the images while the table view rows are being loaded, saving a lot of time.
2. Resize the images in a background thread and add them to the table view rows as the processing finishes (see the sketch after this list). There are numerous examples of doing this, including the following example project from Apple: https://developer.apple.com/library/ios/#samplecode/LazyTableImages/Introduction/Intro.html
The best possible performance would be to do both, but from my experience, only one or the other is really needed.
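A minimal sketch of option 2 using GCD (MyClass, MyFeedCell, photoButton, fullSizeImage, and cellImageSize are hypothetical names; imageWithImage:scaledToSize: is the method above):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *scaled = [MyClass imageWithImage:fullSizeImage scaledToSize:cellImageSize];
    dispatch_async(dispatch_get_main_queue(), ^{
        // cellForRowAtIndexPath: returns nil if the row has scrolled off screen
        MyFeedCell *cell = (MyFeedCell *)[tableView cellForRowAtIndexPath:indexPath];
        [cell.photoButton setImage:scaled forState:UIControlStateNormal];
    });
});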
Why not just handle all the image sizing through the built-in content-mode scaling of a UIImageView, and then put a transparent button over it that is the same size (frame) as the image view? (To make a UIButton transparent, just set its background color to clear and its button type to Custom; I think the default is rounded rect or something.)
On a different note, if you want to detect taps on a UIImageView without putting a transparent UIButton over the images, just use touchesBegan: and check whether the touch landed on one of the image views in your array.
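A rough sketch of the touchesBegan: approach (imageViews is a hypothetical array holding the feed's image views, and showFullSizeImage: a hypothetical handler):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self.view];
    for (UIImageView *imageView in self.imageViews) {
        if (CGRectContainsPoint(imageView.frame, point)) {
            [self showFullSizeImage:imageView.image]; // open the full-size image
            break;
        }
    }
}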
You can use CGAffineTransformMakeScale from the Quartz 2D framework to scale a view very easily.
#define X_SCALE_FACTOR 2.5
#define Y_SCALE_FACTOR 2.5
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myimage.png"]];
imageView.transform = CGAffineTransformMakeScale(X_SCALE_FACTOR, Y_SCALE_FACTOR);

How to replace a region of an image with another image in iOS

I want to process a subregion of a UIImage in an iOS app. Following this question, I now have a routine to extract the region in question as a UIImage that I can now manipulate. Is there a similarly convenient method for placing the region back into the original image? The alternative I'm considering is a bytewise copy, which seems extremely low-level to me.
You could draw the two images on top of each other and then combine them into one image.
Assuming you have the original image and the modified part:
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawAtPoint:CGPointMake(0, 0)];
[modifiedPart drawAtPoint:/* Upper left corner of the modified part */];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
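Assuming the region was originally extracted with CGImageCreateWithImageInRect and a CGRect (regionRect below is a hypothetical name), the full round trip could look like this; note that CGImageCreateWithImageInRect works in pixels while drawAtPoint: works in points, so on Retina screens the rect would need scaling:
CGImageRef regionRef = CGImageCreateWithImageInRect(originalImage.CGImage, regionRect);
UIImage *region = [UIImage imageWithCGImage:regionRef]; // the subregion to manipulate
CGImageRelease(regionRef);
// ... process region into modifiedPart ...
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawAtPoint:CGPointMake(0, 0)];
[modifiedPart drawAtPoint:regionRect.origin]; // put it back where it came from
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();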

Image Drawing on UIView

I'm trying to create an application where I can draw a lot of pictures, each at a specific point, on one view. For each image I have the coordinates where I need to draw it, plus its width and height.
For example:
I have 2 billion JPEG images. For each image I have a specific origin point and size.
Each second I need to draw 20-50 images on the view at specific points.
I have already tried to solve this in the following way:
UIGraphicsBeginImageContextWithOptions(self.previewScreen.bounds.size, YES, 0);
[self.previewScreen.image drawAtPoint:CGPointMake(0, 0)];
[image drawAtPoint:CGPointMake(nRect.left, nRect.top)];
UIImage *imagew = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.previewScreen setImage:imagew];
but with this solution I get very high latency displaying the images and heavy CPU usage.
So I guess your question is: how do you make it faster?
Why draw the images using an image context at all? You could just add UIImageViews containing your images to your main view and position them as needed.
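For instance, a sketch along these lines (imageInfos is a hypothetical array pairing each UIImage with its frame):
for (NSDictionary *info in self.imageInfos) {
    UIImageView *imageView = [[UIImageView alloc] initWithImage:info[@"image"]];
    imageView.frame = [info[@"frame"] CGRectValue]; // the origin point and size for this picture
    [self.view addSubview:imageView];
}
Moving or removing a picture is then just a frame change or removeFromSuperview, with no re-rendering of the whole canvas.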

How to take a screenshot using code on iOS?

How to take a screenshot programmatically?
You can use UIGraphicsBeginImageContext for this purpose.
For example :
UIGraphicsBeginImageContextWithOptions(self.myView.bounds.size, self.myView.opaque, 0.0);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0); // you can use PNG too
NSString *thePath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0] stringByAppendingPathComponent:@"example.jpeg"]; // must be a writable path
[theImageData writeToFile:thePath atomically:YES];
Here:
1. First I create the image context; I've used myView, which is a web view, but you can use whatever you wish. This captures the web view as it appears on the screen.
2. Using UIGraphicsGetImageFromCurrentImageContext() I convert the rendered context into an image.
3. With UIGraphicsEndImageContext() I end the context.
4. I'm saving the image into an NSData because I had to mail the screenshot; keeping it in NSData seemed a good option for sending or saving.
EDIT: To add it to the camera roll, call UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL) after UIGraphicsEndImageContext().
Have a look at this answer. It also takes care of the Retina display.
Actually, to explain the process:
1. Choose an image context size (probably the size of the layer you need the screenshot of).
2. Render the layer you want to capture into the created context.
3. Obtain the image from the context, and you are done!
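Put together as a reusable helper, it might look like this (a sketch; passing 0.0 as the scale makes the context match the device's screen scale, so Retina screens come out sharp):
- (UIImage *)screenshotOfView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}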
