We're experiencing a strange problem with a UIImagePickerController. In our application, users fill out a series of forms and can attach images and videos to them.
Users can add multiple photos/videos, either from the camera roll or captured while filling out the form.
We're using UIImagePickerController to do this. The problem occurs once one or two images/videos have been taken with the camera.
After one or two captures, when the camera screen is entered for a third time the preview is static and doesn't update; the view is stuck on the last frame of whatever was captured last.
If the capture button is pressed, the image/video suddenly updates and captures whatever the camera was pointing at, and from then on the picker behaves normally for another go. Additionally, selecting a picture/video from the camera roll appears to make everything behave again for another picture/video. Finally, when the screen isn't responding and the user has chosen to take a picture, the view shrinks to a small rectangle within the view. The controller is set up as follows:
private void SourceChosen(EventHandler<UIImagePickerMediaPickedEventArgs> captureEvent, int buttonIndex, string[] mediaTypes)
{
    var picker = ConfigurePicker(mediaTypes, captureEvent);

    if (CameraAvailable && buttonIndex == 0)
    {
        picker.SourceType = UIImagePickerControllerSourceType.Camera;
        picker.CameraDevice = UIImagePickerControllerCameraDevice.Rear;
        this.NavigationController.PresentViewController(picker, true, () => { });
    }

    if ((!CameraAvailable && buttonIndex == 0) || (CameraAvailable && buttonIndex == 1))
    {
        picker.SourceType = UIImagePickerControllerSourceType.PhotoLibrary;
        this.NavigationController.PresentViewController(picker, false, () => { });
    }
}
private UIImagePickerController ConfigurePicker(string[] mediaTypes, EventHandler<UIImagePickerMediaPickedEventArgs> captureEvent)
{
    var mediaPicker = new UIImagePickerController();
    mediaPicker.FinishedPickingMedia += captureEvent;
    mediaPicker.Canceled += (sender, args) => mediaPicker.DismissViewController(true, () => { });
    mediaPicker.SetBarDefaults();
    mediaPicker.MediaTypes = mediaTypes;
    return mediaPicker;
}
An example of a captureEvent is as follows:
void PhotoChosen(object sender, UIImagePickerMediaPickedEventArgs e)
{
    UIImage item = e.OriginalImage;
    string fileName = string.Format("{0}.{1}", Guid.NewGuid(), "png");
    string path = Path.Combine(IosConstants.UserPersonalFolder, fileName);
    NSData imageData = item.AsPNG();
    CopyData(imageData, path, fileName, ViewModel.Images, ((UIImagePickerController)sender));
}
private void CopyData(NSData imageData, string path, string fileName, List<AssociatedItem> collectionToAddTo, UIImagePickerController picker)
{
    byte[] imageBytes = new byte[imageData.Length];
    System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, imageBytes, 0, Convert.ToInt32(imageData.Length));
    File.WriteAllBytes(path, imageBytes);

    AssociatedItem item = new AssociatedItem
    {
        StorageKey = fileName
    };

    collectionToAddTo.Add(item);
    picker.DismissViewController(true, ReloadTables);
}
At the moment, as you can see, we're not holding a reference to the picker, but we have tried variations of this code: storing a reference to the picker and disposing it after the CopyData method, adding picker.Release() after CopyData and before the Dispose (which results in subsequent pickers crashing the application when displayed), and pretty much every other variation on the theme.
Does anyone have any idea why this might be occurring and how to fix it? My assumption was that we might be running low on memory, but neither disposing of the picker each time nor only ever creating one instance and switching its mode between pictures and videos has any effect; we always see the same behaviour.
EDIT
Thanks to Kento and the answer below, what we needed to get it all working as intended was something along the lines of:
public class PickerDelegate : UIImagePickerControllerDelegate
{
    private readonly Action<UIImagePickerController, NSDictionary> _captureEvent;

    public PickerDelegate(Action<UIImagePickerController, NSDictionary> captureEvent)
    {
        _captureEvent = captureEvent;
    }

    public override void FinishedPickingMedia(UIImagePickerController picker, NSDictionary info)
    {
        _captureEvent(picker, info);
    }
}
Then, to get an image:
void PhotoChosen(UIImagePickerController picker, NSDictionary info)
{
    UIImage item = (UIImage)info.ObjectForKey(UIImagePickerController.OriginalImage);
    string fileName = string.Format("{0}.{1}", Guid.NewGuid(), "png");
    string path = Path.Combine(IosConstants.UserPersonalFolder, fileName);
    NSData imageData = item.AsPNG();
    CopyData(imageData, path, fileName, ViewModel.Images, picker);
}
Or, to get a video:
void VideoChosen(UIImagePickerController picker, NSDictionary info)
{
    var videoURL = (NSUrl)info.ObjectForKey(UIImagePickerController.MediaURL);
    NSData videoData = NSData.FromUrl(videoURL);
    string fileName = string.Format("{0}.{1}", Guid.NewGuid(), "mov");
    string path = Path.Combine(IosConstants.UserPersonalFolder, fileName);
    CopyData(videoData, path, fileName, ViewModel.Videos, picker);
}
I had this same problem.
The post here is not marked as the answer but it did solve it for me: https://stackoverflow.com/a/20035698/2514318
I'm guessing this is a bug with MonoTouch when using the FinishedPickingMedia event. I have read that there are leaks when using UIImagePickerController (regardless of whether you use Objective-C or Mono), so I prefer to keep the instance around and re-use it. If you do re-create it each time, I would recommend disposing the previous instance.
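A minimal sketch of what keeping and re-using one picker instance could look like in Xamarin.iOS, wired up to the delegate class from the question's edit. The _picker field, the ShowCameraPicker method, and the way the delegate is attached are my own illustration under those assumptions, not a confirmed API pattern:

private UIImagePickerController _picker; // assumed field on the presenting view controller

private void ShowCameraPicker()
{
    if (_picker == null)
    {
        // Create the picker once and keep it for the lifetime of the screen.
        _picker = new UIImagePickerController();
        _picker.SourceType = UIImagePickerControllerSourceType.Camera;
        // PickerDelegate is the UIImagePickerControllerDelegate subclass shown in the edit above;
        // attach it however your project normally wires up delegates.
        _picker.Delegate = new PickerDelegate(PhotoChosen);
    }

    NavigationController.PresentViewController(_picker, true, null);
}

// If you prefer to re-create the picker per capture instead, dispose the old instance first:
// _picker.Dispose();
// _picker = new UIImagePickerController();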
Can anyone from Xamarin weigh in on if this is a bug or not?
This post helped me a lot, so I decided to make a very simple sample and post it on GitHub for anyone who may need it: https://github.com/GiusepeCasagrande/XamarinSimpleCameraSample
I want to use UIActivityViewController to share files from my iOS app. The main question for me is how do I handle different file types.
What I've got so far:
Images
public void OpenInExternalApp(string filepath)
{
    if (!File.Exists(filepath))
        return;

    UIImage uiImage = UIImage.FromFile(filepath);

    // Define the content to share
    var activityItems = new NSObject[] { uiImage };
    UIActivity[] applicationActivities = null;
    var activityController = new UIActivityViewController(activityItems, applicationActivities);

    if (UIDevice.CurrentDevice.UserInterfaceIdiom == UIUserInterfaceIdiom.Phone)
    {
        // Phone
        UIApplication.SharedApplication.KeyWindow.RootViewController.PresentViewController(activityController, true, null);
    }
    else
    {
        // Tablet
        var popup = new UIPopoverController(activityController);
        UIView view = UIApplication.SharedApplication.KeyWindow.RootViewController.View;
        CGRect rect = new CGRect(view.Frame.Width / 2, view.Frame.Height, 50, 50);
        popup.PresentFromRect(rect, view, UIPopoverArrowDirection.Any, true);
    }
}
I don't know whether, from a memory-management point of view, it is a good idea to load the whole image at once. What happens if the image is too big to hold completely in RAM? See here for example.
Strings
var activityItems = new NSObject[] { UIActivity.FromObject(new NSString(text)) };
Only text.
NSUrl
NSUrl url = NSUrl.CreateFileUrl(filepath, false, null);
Here, in most cases, the same apps appear. But the PDF reader, for example, doesn't appear for a PDF file. The preview in Mail, on the other hand, shows Adobe Acrobat.
Everything
var activityItems = new NSObject[] { NSData.FromFile(filepath) };
The last approach has the disadvantage that not all apps that could open a PDF file, for example, are displayed. This also applies.
I want this to work for all types of files. I don't think a subclass of UIActivity would help here. Perhaps a subclass of UIActivityItemProvider?
Side note: You can also post your solutions in Objective-C/Swift.
I tried to implement UIActivityItemProvider, but here again not all apps were shown for the corresponding file type; e.g. for a .docx document, Word was not shown.
I have now switched to UIDocumentInteractionController, and now many apps are available.
UIDocumentInteractionController documentController = new UIDocumentInteractionController();
documentController.Url = new NSUrl(filepath, false);
string fileExtension = Path.GetExtension(filepath).Substring(1);
string uti = UTType.CreatePreferredIdentifier(UTType.TagClassFilenameExtension.ToString(), fileExtension, null);
documentController.Uti = uti;
UIView presentingView = UIApplication.SharedApplication.KeyWindow.RootViewController.View;
documentController.PresentOpenInMenu(CGRect.Empty, presentingView, true);
IMHO there are too many apps, because the XML file type should not really be supported by a PDF reader, but it is. Nevertheless, it seems to work now, thanks to this post:
In general if you’re sharing an image or url, you might want to use a UIActivityViewController. If you’re sharing a document, you might want to use a UIDocumentInteractionController.
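Putting that guidance together, a rough dispatcher might look like the sketch below. The Share method, the presenter parameter, and the extension-based image check are my own illustration, not something from the quoted post or a library API:

// Sketch only: choose the presenter based on a simple file-extension check.
public void Share(string filepath, UIViewController presenter)
{
    string extension = Path.GetExtension(filepath).ToLowerInvariant();
    bool isImage = extension == ".png" || extension == ".jpg" || extension == ".jpeg";

    if (isImage)
    {
        // Images: UIActivityViewController, as in the code further up.
        var activityItems = new NSObject[] { UIImage.FromFile(filepath) };
        var activityController = new UIActivityViewController(activityItems, null);
        presenter.PresentViewController(activityController, true, null);
    }
    else
    {
        // Documents: UIDocumentInteractionController, as in the code just above.
        var documentController = new UIDocumentInteractionController();
        documentController.Url = NSUrl.FromFilename(filepath);
        documentController.Uti = UTType.CreatePreferredIdentifier(
            UTType.TagClassFilenameExtension.ToString(), extension.TrimStart('.'), null);
        documentController.PresentOpenInMenu(CGRect.Empty, presenter.View, true);
    }
}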
I am saving images to a custom album after either selection or camera completion. Obviously, after camera completion there is only one image, but when a user selects images in the gallery picker and I save that image to the custom album in the completion handler, a duplicate is ALWAYS created, both in the gallery and in the root photo album. Everywhere, it seems. I cannot reference the ID to see if it was created before, because the ID is newly created with the placeholder.
Is there a way to get the base image's reference ID so that I can associate EVERY image with the original? As I understand it, iOS (I hate iOS, btw) saves only one actual image and the rest are just pointers to the original image object. If that is the case, I would expect there to be a way to get a solid reference to the original image, and from there I can easily manage assets created from that base image.
public static func addNewImage(_ image: UIImage, toAlbum albumName: String, imageID: String?, onSuccess success: @escaping (String) -> Void, onFailure failure: @escaping (Error?) -> Void) {
    guard let album = self.getAlbum(withName: albumName) else {
        failure(SDPhotosHelper.albumNotFoundError)
        return
    }

    var localIdentifier = String()

    if imageID != nil {
        if self.hasImageInAlbum(withIdentifier: imageID!, fromAlbum: albumName) {
            failure(SDPhotosHelper.albumNotFoundError)
            return
        }
    }

    PHPhotoLibrary.shared().performChanges({
        let albumChangeRequest = PHAssetCollectionChangeRequest(for: album)
        let assetCreationRequest = PHAssetChangeRequest.creationRequestForAsset(from: image)
        //assetCreationRequest.location = "";
        if let placeHolder = assetCreationRequest.placeholderForCreatedAsset {
            albumChangeRequest?.addAssets([placeHolder] as NSArray)
            localIdentifier = placeHolder.localIdentifier
        }
    }) { (didSucceed, error) in
        OperationQueue.main.addOperation({
            didSucceed ? success(localIdentifier) : failure(error)
        })
    }
}
No one bothered to assist with this. Luckily, I was able to find the solution. For anyone who comes across this, or the similar question that was also sitting around with the crickets (Choosing a picture causes resave to camera roll), here is a solution.
The code I have above is to CREATE A NEW ASSET. It is useful only for saving the image to your custom album after the user has taken a picture with the camera; it is for brand-new assets.
However, for existing assets, you do not want to create a new asset. Instead, you want to add the existing asset to the custom album. To do this, you need a different method. Here is the code I created and it seems to be working. Keep in mind that you will have to get the asset ID FIRST, so that you can send it to your method and access the existing asset.
So, in your imagePickerController, you have to determine whether the user chose an existing image or whether the method is being called from a new camera action.
let pickerSource = picker.sourceType

switch pickerSource {
case .savedPhotosAlbum, .photoLibrary:
    if let refURL = info[UIImagePickerControllerReferenceURL] as? NSURL {
        let refURLString = refURL.absoluteString
        /* value for refURLString looks something like
           assets-library://asset/asset.JPG?id=82A6E75C-EA55-4C3A-A988-4BF8C7F3F8F5&ext=JPG */
        let refID = {function here to extract the id query param from the url string}
        /* the above gets you the asset ID; you can also fetch the asset directly,
           but that is only available in iOS 11+ */
        MYPHOTOHELPERCLASS.transferImage(toAlbum: "myalbumname", withID: refID!, ...)
    }
case .camera:
    ...
}
Now, in your photo helper class (or in any function anywhere, whatever), to EDIT the asset instead of creating a new one, this is what I have. I am assuming the changeRequest variable can be omitted; I was just playing around until I got this right. Going through the completely ridiculous Apple docs, I was at least able to notice that there were other methods to play with. I found that the NSFastEnumeration parameter can be an NSArray of PHAssets, and not just placeholder PHObjectPlaceholder objects.
public static func transferImage(toAlbum albumName: String, withID imageID: String, onSuccess success: @escaping (String) -> Void, onFailure failure: @escaping (Error?) -> Void) {
    guard let album = self.getAlbum(withName: albumName) else {
        ... failure here, albumNotFoundError
        return
    }

    if self.hasImageInAlbum(withIdentifier: imageID, fromAlbum: albumName) {
        ... failure here, image already exists in the album, do not make another
        return
    }

    let theAsset = self.getExistingAsset(withLocalIdentifier: imageID)
    if theAsset == nil {
        ... failure, no asset for asset id
        return
    }

    PHPhotoLibrary.shared().performChanges({
        let albumChangeRequest = PHAssetCollectionChangeRequest(for: album)
        let changeRequest = PHAssetChangeRequest.init(for: theAsset!)
        let enumeration: NSArray = [theAsset!]
        let cnt = album.estimatedAssetCount

        if cnt == 0 {
            albumChangeRequest?.addAssets(enumeration)
        } else {
            albumChangeRequest?.insertAssets(enumeration, at: [0])
        }
    }) { (didSucceed, error) in
        OperationQueue.main.addOperation({
            didSucceed ? success(imageID) : failure(error)
        })
    }
}
So it is pretty much the same, except that instead of creating an asset creation request and generating a placeholder for the created asset, you use the existing asset ID to fetch the existing asset, and pass that existing asset in the addAssets/insertAssets NSArray parameter instead of a newly created one.
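The getExistingAsset(withLocalIdentifier:) helper used above isn't shown in the post; a minimal sketch of what it could look like, assuming the Photos framework call PHAsset.fetchAssets(withLocalIdentifiers:options:), might be:

import Photos

// Hypothetical helper (not from the original post): fetch an existing PHAsset
// by its local identifier, or return nil if no such asset exists any more.
public static func getExistingAsset(withLocalIdentifier identifier: String) -> PHAsset? {
    let result = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil)
    return result.firstObject
}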
The documentation is not really clear to me. So far I reckon I need to set up a CGPDFOperatorTable and then create a content stream (CGPDFContentStreamCreateWithPage) and a scanner (CGPDFScannerCreate) per PDF page.
The documentation refers to setting up callbacks, but it's unclear to me how. How do I actually obtain the content from a page?
This is my code so far.
let pdfURL = NSBundle.mainBundle().URLForResource("titleofdocument", withExtension: "pdf")

// Create the PDF document
let pdfDoc = CGPDFDocumentCreateWithURL(pdfURL)

// Number of pages in this PDF
let numberOfPages = CGPDFDocumentGetNumberOfPages(pdfDoc) as Int
if numberOfPages <= 0 {
    // The number of pages is zero
    return
}

let myTable = CGPDFOperatorTableCreate()

// Let's go through every page
for pageNr in 1...numberOfPages {
    let thisPage = CGPDFDocumentGetPage(pdfDoc, pageNr)
    let myContentStream = CGPDFContentStreamCreateWithPage(thisPage)
    let myScanner = CGPDFScannerCreate(myContentStream, myTable, nil)
    CGPDFScannerScan(myScanner)

    // Search for content here?
    // ??

    CGPDFScannerRelease(myScanner)
    CGPDFContentStreamRelease(myContentStream)
}

// Release the table
CGPDFOperatorTableRelease(myTable)
It's a similar question to: PDF Parsing with SWIFT but has no answers yet.
Here is an example of the callbacks implemented in Swift:
let operatorTableRef = CGPDFOperatorTableCreate()

CGPDFOperatorTableSetCallback(operatorTableRef, "BT") { (scanner, info) in
    print("Begin text object")
}
CGPDFOperatorTableSetCallback(operatorTableRef, "ET") { (scanner, info) in
    print("End text object")
}
CGPDFOperatorTableSetCallback(operatorTableRef, "Tf") { (scanner, info) in
    print("Select font")
}
CGPDFOperatorTableSetCallback(operatorTableRef, "Tj") { (scanner, info) in
    print("Show text")
}
CGPDFOperatorTableSetCallback(operatorTableRef, "TJ") { (scanner, info) in
    print("Show text, allowing individual glyph positioning")
}

let numPages = CGPDFDocumentGetNumberOfPages(pdfDocument)
for pageNum in 1...numPages {
    let page = CGPDFDocumentGetPage(pdfDocument, pageNum)
    let stream = CGPDFContentStreamCreateWithPage(page)
    let scanner = CGPDFScannerCreate(stream, operatorTableRef, nil)
    CGPDFScannerScan(scanner)
    CGPDFScannerRelease(scanner)
    CGPDFContentStreamRelease(stream)
}
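To actually pull an operator's operand off the scanner inside one of these callbacks, something along these lines should work. This is a sketch in more recent Swift syntax, assuming the Core Graphics functions CGPDFScannerPopString and CGPDFStringCopyTextString; treat it as illustration rather than tested code:

CGPDFOperatorTableSetCallback(operatorTableRef, "Tj") { (scanner, info) in
    // The Tj operator takes a single string operand; pop it off the scanner's stack.
    var pdfString: CGPDFStringRef? = nil
    if CGPDFScannerPopString(scanner, &pdfString) {
        if let pdfString = pdfString, let text = CGPDFStringCopyTextString(pdfString) {
            print("Show text: \(text)")
        }
    }
}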
You've actually specified exactly how to do it; all you need to do is put it together and try it until it works.
First of all, you need to set up a table with callbacks, as you state yourself at the beginning of your question (all code in Objective-C, NOT Swift):
CGPDFOperatorTableRef operatorTable = CGPDFOperatorTableCreate();
CGPDFOperatorTableSetCallback(operatorTable, "q", &op_q);
CGPDFOperatorTableSetCallback(operatorTable, "Q", &op_Q);
This table contains a list of the PDF operators you want to get called for and associates a callback with them. Those callbacks are simply functions you define elsewhere:
static void op_q(CGPDFScannerRef s, void *info) {
    // Do whatever you have to do in here
    // info is whatever you passed to CGPDFScannerCreate
}

static void op_Q(CGPDFScannerRef s, void *info) {
    // Do whatever you have to do in here
    // info is whatever you passed to CGPDFScannerCreate
}
And then you create the scanner and get it going, while passing it the information you just defined.
// Passing "self" is just an example, you can pass whatever you want and it will be provided to your callback whenever it is called by the scanner.
CGPDFScannerRef contentStreamScanner = CGPDFScannerCreate(contentStream, operatorTable, self);
CGPDFScannerScan(contentStreamScanner);
If you want to see a complete example with sourcecode on how to find and process images, check this website.
To understand why a parser works this way, you need to read the PDF specification a bit more closely. A PDF file contains something close to printing instructions, such as "move to this coordinate, print this character, move there, change the color, print character number 23 from font #23", and so on.
The parser gives you callbacks for each instruction, with the possibility to retrieve the instruction's parameters. That's all.
So, in order to get the content from a file, you need to rebuild its state manually, which means recomputing the frames for all characters and trying to reverse-engineer the page layout. This is clearly not an easy task, and that's why people have created libraries to do so.
You may want to have a look at PDFKitten, or at PDFParser, which is a Swift port with some improvements that I did.
Titanium, for an iOS app:
How can I take a photo and then use it later in a different function, for example to show the photo and put a slightly larger, semi-transparent duplicate of it on top of itself?
Note that I tried to edit the answer from Mitul Bhalia with the following, but the edit got knocked back. So here's how you do it:
After taking the image, you can store it as a variable in the global object, Alloy.Globals. You can then access it elsewhere or later on in your app.
For example:
takePhotoButton.addEventListener('click', function() {
    Titanium.Media.showCamera({
        success: function(event) {
            if (event.mediaType === Ti.Media.MEDIA_TYPE_PHOTO) {
                // Store the file in a variable
                var image = event.media;
                // Store the image in the global object
                Alloy.Globals.temporaryImage = image;
            } else {
                alert("got the wrong type back =" + event.mediaType);
            }
        },
        ...
And somewhere else in your app, after the image has been stored, for example:
var anImage = Ti.UI.createImageView({ image: Alloy.Globals.temporaryImage })
Also note that extensive use of the global object can cause memory issues, so try not to overdo it.
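To get back to the original question's overlay idea, a rough, untested sketch built on the same stored image might look like this; the win variable and the opacity/scale values are my own illustration, not part of the answer above:

// Sketch only: show the stored photo with a slightly larger,
// semi-transparent duplicate layered on top of it.
var win = Ti.UI.createWindow({ backgroundColor: 'white' });

var baseImage = Ti.UI.createImageView({
    image: Alloy.Globals.temporaryImage
});

var overlayImage = Ti.UI.createImageView({
    image: Alloy.Globals.temporaryImage,
    opacity: 0.4,                                  // transparency
    transform: Ti.UI.create2DMatrix().scale(1.1)   // slightly larger
});

win.add(baseImage);
win.add(overlayImage);
win.open();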
If you want to save the photo, you can use Alloy.Globals to store data globally so you can use it later.
Alloy.Globals.photo = blob object;
Here is how I am currently solving my problem:
takePhotoButton.addEventListener('click', function() {
    Titanium.Media.showCamera({
        success: function(event) {
            if (event.mediaType === Ti.Media.MEDIA_TYPE_PHOTO) {
                // Store the file in a variable
                var image = event.media;
                var filename = 'myPhoto.jpg';
                takenPhoto = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, filename);
                takenPhoto.write(image);
            } else {
                alert("got the wrong type back =" + event.mediaType);
            }
        },
        ...
and then later I can use takenPhoto to show the picture I have taken with the camera.
But I do not really know if this is the best way, or whether it is even correct, as I do not use 'var' to declare takenPhoto. But if I use 'var' inside the function, I cannot use takenPhoto outside of it.
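Regarding the 'var' concern: one way to keep it correct (a sketch, not necessarily best practice) is to declare the variable once in the outer scope and only assign to it inside the success callback, so it stays accessible elsewhere:

// takenPhoto is declared in the outer scope, so the success callback can assign
// to it and other functions can still read it later.
var takenPhoto;

takePhotoButton.addEventListener('click', function() {
    Titanium.Media.showCamera({
        success: function(event) {
            if (event.mediaType === Ti.Media.MEDIA_TYPE_PHOTO) {
                takenPhoto = Titanium.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, 'myPhoto.jpg');
                takenPhoto.write(event.media);
            }
        }
    });
});

// Later, anywhere else in the app:
// var photoView = Ti.UI.createImageView({ image: takenPhoto });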