I am creating an app that takes an image and saves it to the camera roll while also adding a string to the image's metadata. The string would mostly be a combination of user-entered data from a couple of text fields. I capture the image with the iPhone's camera, accessed through UIImagePickerController, and now I want to edit the metadata before saving. I have looked for a solution but could not find a proper one for Swift.
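For reference, here is a minimal sketch of one way to do this with ImageIO and the Photos framework: re-encode the picked UIImage with the custom string written into the EXIF UserComment field, then save the resulting data with PHAssetCreationRequest. The 0.9 compression quality and the choice of the UserComment key are just example assumptions, and photo-library add permission is assumed to have been granted already.

```swift
import UIKit
import ImageIO
import Photos

// Sketch: embed a custom string in the image's EXIF UserComment, then save
// the resulting JPEG data to the photo library.
func saveImage(_ image: UIImage, comment: String) {
    guard let jpegData = image.jpegData(compressionQuality: 0.9),
          let source = CGImageSourceCreateWithData(jpegData as CFData, nil),
          let type = CGImageSourceGetType(source) else { return }

    let outputData = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(outputData as CFMutableData, type, 1, nil) else { return }

    // Properties passed here override/augment the source image's properties.
    let exif: [CFString: Any] = [kCGImagePropertyExifUserComment: comment]
    let properties: [CFString: Any] = [kCGImagePropertyExifDictionary: exif]
    CGImageDestinationAddImageFromSource(destination, source, 0, properties as CFDictionary)
    CGImageDestinationFinalize(destination)

    // Assumes the user has already granted permission to add to the library.
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, data: outputData as Data, options: nil)
    }, completionHandler: { success, error in
        print("Saved: \(success), error: \(String(describing: error))")
    })
}
```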
Related
In my app I'm taking a picture and saving it to the camera roll with:
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
Later in another screen, I want to display it to the user.
As I don't want to store the whole picture, is there any way to load an image from the camera roll using the image name or an identifier? (And, of course, to set the image name or save the identifier when saving the picture.)
Any help would be appreciated!
As far as I know, it is not possible to load an image directly from the camera roll using a name or identifier.
For your requirement it is better to save the image with a name in the documents directory instead of saving it to the camera roll, then load it by the same name wherever you want to use it.
For sample code, refer to @Dharmesh Kheni's answer in the link below.
Get image name UIImagePickerController in Swift
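As a rough illustration of the documents-directory approach, assuming a JPEG and a caller-supplied file name (both are just example choices):

```swift
import UIKit

// Save an image under a known name in the app's documents directory.
func saveToDocuments(_ image: UIImage, named name: String) -> URL? {
    guard let data = image.jpegData(compressionQuality: 0.9) else { return nil }
    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(name)
    do {
        try data.write(to: url)
        return url
    } catch {
        print("Save failed: \(error)")
        return nil
    }
}

// Load it back later by the same name.
func loadFromDocuments(named name: String) -> UIImage? {
    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(name)
    return UIImage(contentsOfFile: url.path)
}
```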
In my app users can take and save photos, but before I save them to disk I have to compress and downscale them. Is it possible to show the automatically edited image in the standard preview screen right after the user captures it with UIImagePickerController? Or should I build my own camera with AVFoundation? If so, could anyone suggest a lightweight open-source camera for my purposes?
You're going to have to build your own solution with AVCaptureSession, which is not hard: you will most likely want to keep the original photo in a temp file, compress it, show it in a custom view containing an image view, and then ask the user whether they want to save it.
Here are Apple's docs, but there are plenty of tutorials on how to do this.
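A minimal sketch of that approach, assuming AVCapturePhotoOutput and an arbitrary 1024-pixel target width; the class and property names are placeholders, and camera permission handling is omitted:

```swift
import AVFoundation
import UIKit

final class CameraController: NSObject, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()
    var onCapture: ((UIImage) -> Void)?   // hand the downscaled image to your preview UI

    func configure() throws {
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input), session.canAddOutput(photoOutput) else {
            throw NSError(domain: "Camera", code: -1)
        }
        session.addInput(input)
        session.addOutput(photoOutput)
        session.commitConfiguration()
        session.startRunning()   // in a real app, start on a background queue
    }

    func capture() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let original = UIImage(data: data) else { return }
        // Downscale before showing the preview / saving; 1024 px wide is arbitrary.
        let targetSize = CGSize(width: 1024,
                                height: 1024 * original.size.height / original.size.width)
        let scaled = UIGraphicsImageRenderer(size: targetSize).image { _ in
            original.draw(in: CGRect(origin: .zero, size: targetSize))
        }
        onCapture?(scaled)
    }
}
```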
I want to be able to convert a given image uploaded in the app into map tiles to be overlaid on a MapView. This could be a photo downloaded from the web or simply a photo taken with the device's camera.
I have so far come across GDAL2Tiles and MapTiler, but I want to be able to do this in the app instead of preparing the tiles beforehand. These are command-line and desktop applications that will do the conversion, but they are all I can find so far.
Is there a built-in feature for iOS that allows me to do this? If not, is there a third-party library that does, or is it just not possible?
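For what it's worth, one possible starting point, assuming you only need to cut a single zoom level into 256-pixel tiles yourself, is to crop the image's CGImage into a grid; georeferencing and serving the tiles (for example through an MKTileOverlay subclass) are left out of this sketch:

```swift
import UIKit

// Sketch: slice a UIImage into tileSize-by-tileSize pieces for one zoom level.
func makeTiles(from image: UIImage, tileSize: Int = 256) -> [(x: Int, y: Int, tile: UIImage)] {
    guard let cgImage = image.cgImage else { return [] }
    var tiles: [(Int, Int, UIImage)] = []
    let columns = Int(ceil(Double(cgImage.width) / Double(tileSize)))
    let rows = Int(ceil(Double(cgImage.height) / Double(tileSize)))
    for y in 0..<rows {
        for x in 0..<columns {
            // Edge tiles may be smaller than tileSize.
            let rect = CGRect(x: x * tileSize, y: y * tileSize,
                              width: min(tileSize, cgImage.width - x * tileSize),
                              height: min(tileSize, cgImage.height - y * tileSize))
            if let cropped = cgImage.cropping(to: rect) {
                tiles.append((x, y, UIImage(cgImage: cropped)))
            }
        }
    }
    return tiles
}
```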
Hi, I was wondering if there's a way to extract the ALAsset of an image taken with the camera, but without saving it.
I've come across various examples that use writeImageToSavedPhotosAlbum and then fetch the ALAsset, but I don't think it's necessary to save the image to the camera roll; I was just wondering if this could be done another way.
No ALAsset exists until the image has been successfully saved to the photo library. Until then you just have a UIImage that came from the picker. There is no requirement that the image be saved to the library; any decision about whether to save it should be based on what the app tells the user and on whether the user would naturally expect to find the image in the library after taking or saving it in the app.
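To illustrate, a minimal sketch of working directly with the in-memory image from the picker's delegate callback; the class and closure names are placeholders:

```swift
import UIKit

// Sketch: grab the UIImage straight from the picker's delegate callback and
// hand it to your own code, without writing anything to the photo library.
class PickerHandler: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    var onImage: ((UIImage) -> Void)?

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            onImage?(image)   // use the in-memory image; nothing is saved
        }
        picker.dismiss(animated: true)
    }
}
```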
I am looking at building a simple PDF generator. The user will have several text fields and a couple of image fields to enter information, and that information will appear in specific spots on the PDF. I know how to add an image that is already bundled with the app to the PDF, but I'm not sure how to take an image chosen from the camera roll on one view and place it in the PDF on the next view. Suggestions?
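One common approach, sketched below under the assumption that you pass the picked UIImage to the next view controller as a plain property, is to draw both the text and the image into a UIGraphicsPDFRenderer context; the page size, coordinates, and the selectedImage name are just examples:

```swift
import UIKit

// Sketch: render user text and a picked image into a one-page PDF.
func makePDF(with selectedImage: UIImage, text: String) -> Data {
    let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792) // US Letter at 72 dpi
    let renderer = UIGraphicsPDFRenderer(bounds: pageRect)
    return renderer.pdfData { context in
        context.beginPage()
        // Draw the user-entered text at a fixed spot on the page.
        (text as NSString).draw(at: CGPoint(x: 72, y: 72),
                                withAttributes: [.font: UIFont.systemFont(ofSize: 14)])
        // Draw the image chosen from the camera roll in a fixed frame.
        selectedImage.draw(in: CGRect(x: 72, y: 120, width: 200, height: 200))
    }
}
```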