iOS 5: adding a caption to a camera image

I am building a simple app that allows the user to take photos that will be saved in the photo album. Because I expect users will be taking a lot of images of objects that look similar but not the same (for example, details of pieces of furniture in a house), I would like to add a small text caption on the image itself to describe the content of the image.
Not worrying about how the user will enter this text at the moment, can anyone give me some advice as to how I can add some text to an image taken with the UIImagePickerController?

Create a new graphics context.
Draw the image into the context.
Draw the text over it.
Create a new image from the context.
The code should be something like this:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, YES, sourceImage.scale);
[sourceImage drawAtPoint:CGPointZero];
NSString *caption = @"Hello World";
UIFont *captionFont = [UIFont boldSystemFontOfSize:24.0];
[[UIColor whiteColor] setFill]; // setFill is the right call here: text glyphs are drawn with the fill color
[caption drawAtPoint:CGPointMake(10.0f, 10.0f) withFont:captionFont];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
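For illustration, here is how the snippet above might be wrapped in a reusable method and called from the image picker delegate. This is only a sketch; the method name imageByAddingCaption:toImage: and the example caption are made up.
- (UIImage *)imageByAddingCaption:(NSString *)caption toImage:(UIImage *)sourceImage {
    UIGraphicsBeginImageContextWithOptions(sourceImage.size, YES, sourceImage.scale);
    [sourceImage drawAtPoint:CGPointZero];
    UIFont *captionFont = [UIFont boldSystemFontOfSize:24.0];
    [[UIColor whiteColor] setFill];
    [caption drawAtPoint:CGPointMake(10.0f, 10.0f) withFont:captionFont];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}

// Hypothetical use from the UIImagePickerController delegate:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *photo = [info objectForKey:UIImagePickerControllerOriginalImage];
    UIImage *captioned = [self imageByAddingCaption:@"Dining table, left leg detail" toImage:photo];
    UIImageWriteToSavedPhotosAlbum(captioned, nil, nil, nil); // save to the photo album
    [picker dismissViewControllerAnimated:YES completion:nil];
}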

Related

Merge two images in Swift

I am showing users on a map view with custom markers. Each marker will contain the user's image, like in the image below:
There might be multiple users displayed as markers on the map. I get the users' data and images through an API. The image I receive from the API is just a rectangular image, but I have to show it very much like the image above. So I thought of two solutions.
Get the marker image from the API itself so it can be displayed directly on the map.
I have the outer ellipse as an image. I can place a round image in that ellipse and create a new image, which can then be used as a marker. But for this I'll have to merge two photos. I am able to merge them, but the user's image is always rectangular; I am not able to make it round.
Can anyone help me with a better solution, or just complete my solution?
The first option will be the easiest. If you're going with the second option, then here's something:
- (UIImage *)makeRoundedImage:(UIImage *)image radius:(float)radius {
    CALayer *imageLayer = [CALayer layer];
    imageLayer.backgroundColor = [UIColor whiteColor].CGColor;
    imageLayer.frame = CGRectMake(0, 0, image.size.width, image.size.height);
    imageLayer.contents = (id)image.CGImage;
    imageLayer.masksToBounds = YES;
    imageLayer.cornerRadius = radius;

    UIGraphicsBeginImageContext(image.size);
    [imageLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
This will create a round UIImage with a white background. Just use the resulting UIImage as the marker icon.
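If you go down this road, the rounded photo still has to be composited onto the ellipse. A minimal sketch, assuming the ellipse artwork is available as markerBackground, the user photo has already been cropped square, and the circular area sits at the top of the marker (names and the inset are placeholders):
- (UIImage *)markerImageWithBackground:(UIImage *)markerBackground userImage:(UIImage *)userImage {
    // Round the square-cropped user photo with the method above.
    UIImage *rounded = [self makeRoundedImage:userImage radius:userImage.size.width / 2.0f];

    UIGraphicsBeginImageContextWithOptions(markerBackground.size, NO, markerBackground.scale);
    [markerBackground drawAtPoint:CGPointZero];

    // Center the rounded photo in the circular part of the marker; the 10 pt inset is arbitrary.
    CGFloat inset = 10.0f;
    CGRect photoRect = CGRectInset(CGRectMake(0, 0, markerBackground.size.width, markerBackground.size.width), inset, inset);
    [rounded drawInRect:photoRect];

    UIImage *marker = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return marker;
}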
This won't answer your question completely, but to round the image view, use this (the code is Objective-C and "profilePic" is an example UIImageView):
profilePic.layer.cornerRadius = profilePic.frame.size.width / 2;
profilePic.clipsToBounds = YES; // needed so the image is actually clipped to the rounded corners

Merge multiple views in a UIView

In my app I have to take a picture and add the following information over it:
Weather forecast
Temperature
GPS location
So far I have obtained this information using GPS and a weather web service (OpenWeatherMap). Here is what I do:
I take the picture with the standard UIImagePicker
I put a button on my interface to show the picture to the user
When the user presses the button, the app opens a new view controller in which I show the picture just taken, and I added 2 UILabels (one for the temperature and one for the location) and a UIImageView (to show an icon for the weather forecast). I laid out the UILabels and the UIImageView directly in the storyboard.
Now I need to merge the picture with the 2 UILabels and the UIImageView. Is there a way to merge them into a single UIImage?
I have to do this to save the picture together with the weather forecast and location.
UPDATE
I created a button to save the picture with the labels and the image view; the code I wrote is this:
- (IBAction)buttonSavePicture:(UIButton *)sender {
    [self.imageView addSubview:self.labelPlace];
    [self.imageView addSubview:self.labelTemperature];
    [self.imageView addSubview:self.imageViewWeather];

    UIGraphicsBeginImageContext(self.imageView.bounds.size);
    [self.imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Note: NSDocumentationDirectory is almost certainly a typo for NSDocumentDirectory,
    // which would explain why the file never shows up in Documents.
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentationDirectory, NSUserDomainMask, YES);
    NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:self.filename];
    [UIImageJPEGRepresentation(finalImage, 1) writeToFile:filePath atomically:YES];
}
But when I look in the Documents directory to check whether the picture was saved correctly, I don't find it.
Yes, you can easily do it by capturing them. Follow these steps.
Create a small parent view in the storyboard and put all the controls you want to capture inside it. Create an outlet, say captureView.
Call the following method when you need it.
- (void)capture {
    UIGraphicsBeginImageContext(self.captureView.bounds.size);
    [self.captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // FINAL OUTPUT
    self.imageView.image = capturedImage;
}
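If you also want to write capturedImage to disk, as in the update to the question, a minimal sketch follows; the method name and file name are placeholders, and note that it uses NSDocumentDirectory, the writable Documents folder.
- (void)saveCapturedImage:(UIImage *)capturedImage {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"weatherPhoto.jpg"];
    BOOL ok = [UIImageJPEGRepresentation(capturedImage, 1.0) writeToFile:filePath atomically:YES];
    NSLog(@"Saved to %@: %d", filePath, ok); // check the console if the file does not appear
}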
Cheers.
If you are using iOS 7, have a look at snapshotViewAfterScreenUpdates: and drawViewHierarchyInRect:afterScreenUpdates:. The latter is the one that captures subviews such as labels; it draws the whole view hierarchy into the current graphics context, from which you can then grab a single UIImage.
CGSize imgSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height);
UIGraphicsBeginImageContextWithOptions(imgSize, NO , 0.0f);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *weatherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(weatherImage, nil, nil, nil); // save to the Saved Photos album
If all went right, you should have your "screenshot" in the photo album.
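For completeness, a variant of the same capture that actually uses the iOS 7 API mentioned above; this is a sketch, and only the renderInContext: line changes.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0f);
// drawViewHierarchyInRect:afterScreenUpdates: renders the view and all of its subviews (labels, image views, ...)
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *weatherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(weatherImage, nil, nil, nil); // save to the photo album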

Is it possible to use the built-in iOS crop tool in my app?

In the photo albums app there's a built-in edit -> crop tool. Is it possible to use that tool in an app instead of writing it on my own? Is it part of the framework?
No, there is no built-in crop tool. However, it would not be that hard to write such a tool.
You'd need to create a control that lets the user drag an image around in a scroll view, and collect the coordinates.
Then you'd create a graphics context and use the UIImage method drawInRect: to draw the image into a rect that's larger than the graphics context. The result would be to draw a cropped portion of the image into the context. Then you'd extract an image from the graphics context and discard the graphics context.
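A minimal sketch of that approach (the method name is illustrative; cropRect is the region the user selected, in the image's coordinate space):
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)cropRect {
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, image.scale);
    // Drawing the full image shifted by -origin leaves only cropRect inside the context.
    [image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}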
No, that is not part of the SDK, but you can easily crop images in iOS.
- (UIImage *)resizeImage:(UIImage *)image width:(float)w height:(float)h {
    UIImage *resizedImage = image;
    CGSize size = CGSizeMake(w, h);
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    [image drawInRect:rect]; // drawInRect: scales the whole image to the target size
    resizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resizedImage;
}
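Note that the method above scales the whole image to w x h rather than cutting a region out of it. If you want a true crop, one common alternative (a sketch; the method name is made up, and rect is in the pixel coordinates of the backing CGImage) is CGImageCreateWithImageInRect:
- (UIImage *)imageByCroppingImage:(UIImage *)image toPixelRect:(CGRect)rect {
    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}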
I created a crop tool that might fit your needs. It's not based on a scroll view, but instead lets the user choose a frame around their image.
https://github.com/nicholjs/BFCropInterface

How do I "combine" two UIImageViews into one image? Like Photoshop Layer merging?

I have a drawing app where you have one UIImageView that serves as the "drawing layer." You have another UIImageView beneath it that is the "image layer," containing the image you are drawing on. I like having this separation. However, I want the user to be able to "save and email" the drawing they have made on top of the image as one unified image. How do I do this?
Your UIImageView instances must be part of a UIView hierarchy, so all you need to do is paint the top containing UIView into a context:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
[container.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Or, if that gives you trouble, successively paint the two image views' layers into a context:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
[image1.layer renderInContext:UIGraphicsGetCurrentContext()];
[image2.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
From there you can write the data wherever you choose:
NSData *data = UIImagePNGRepresentation(fimage);
Pass that data into the MailComposer setup.
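A minimal sketch of that mail setup, assuming the presenting view controller adopts MFMailComposeViewControllerDelegate (the subject and file name are placeholders):
#import <MessageUI/MessageUI.h>

if ([MFMailComposeViewController canSendMail]) {
    MFMailComposeViewController *mailer = [[MFMailComposeViewController alloc] init];
    mailer.mailComposeDelegate = self;
    [mailer setSubject:@"My drawing"];
    [mailer addAttachmentData:data mimeType:@"image/png" fileName:@"drawing.png"];
    [self presentViewController:mailer animated:YES completion:nil];
}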

Draw text and add to a UIImage iOS 5/6

I have a text box, and I want the written text to be added to a UIImage.
How can I draw an NSString onto a UIImage?
I've searched and found lots of examples, but none of them work; Xcode just gives me lots of errors.
Simply put, I want to draw an NSString onto a UIImage. The UIImage should be the same size as a predefined UIImageView, and be placed in the center.
Any ideas?
I think the solution is here; I have used this method in one of my applications.
Here lblText is my label and iv is my image view. This method creates a new image with the given text and returns the new image object.
- (UIImage *)drawText:(NSString *)text inImage:(UIImage *)image atPoint:(CGPoint)point
{
    UIFont *font = lblText.font;                 // lblText is the UILabel mentioned above
    UIGraphicsBeginImageContext(iv.frame.size);  // iv is the target UIImageView
    [image drawInRect:CGRectMake(0, 0, iv.frame.size.width, iv.frame.size.height)];
    CGRect rect = CGRectMake(point.x, point.y, lblText.frame.size.width, lblText.frame.size.height);
    [lblText.textColor set];
    [text drawInRect:CGRectIntegral(rect) withFont:font];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
In your comment below you passed iv.center as the point; try setting it manually, like this:
CGPoint point;
point.x = 30; // or dynamically, e.g. lblText.frame.origin.x
point.y = 40; // lblText.frame.origin.y
You have to call the above method like this:
UIImage *img = [self drawText:@"Test String" inImage:originalImage atPoint:point];
Here, img is another UIImage instance that stores the new image with the text; after calling this method, use img for your further work. For example:
[newimageviewObj setImage:img];
I hope this will help you.
UIImage is not a subclass of UIView, so you can't add a subview to it. Neither is NSString. If you want to show things on the screen, they should inherit from UIView.
So try this:
Create a UIImageView - set its image property to be your UIImage instance.
Create a UILabel - set its text property to your NSString.
Add the UILabel as a subview of your UIImageView.
and finally add your UIImageView to your current view controller's view, as sketched below.
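A minimal sketch of those steps (the frames, myImage and myString are placeholders):
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 300, 200)];
imageView.image = myImage; // your UIImage instance

UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(10, 10, 280, 30)];
label.text = myString;     // your NSString
label.textColor = [UIColor whiteColor];
label.backgroundColor = [UIColor clearColor];

[imageView addSubview:label];     // the label sits on top of the image
[self.view addSubview:imageView]; // show it in the current view controller's view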
Here are some thoughts. I think one simple way is this:
Put aUIImageView on aView.
Add aUITextView on aView.
Get a screenshot from aView.
This may work fine, but it comes with the problem that the screenshot may not be sharp.
So, after steps 1 and 2, we can instead render a new image with UIGraphics.
(At this point we already know the position where the text should sit on the image, so this should be OK.)
PS: Some images may not be backed by a CGImage, and a CGImage is needed for this, so convert the image first.
