Is it possible to take a screenshot of the currently visible zone of the web view in Safari from a Share Extension? I could use windows, but UIApplication isn't supported in extensions, so I can't access that window.
You can't do it directly, since UIApplication can't be reached from an extension and you cannot get the first UIWindow, which is the Safari layer. Instead, use the JavaScript preprocessing file that extensions support: create a JavaScript file that, when run in Safari, generates a base64 string with the image data of the current visible zone. Receive that string through the kUTTypePropertyList identifier in your extension, decode it into NSData, and build the UIImage from it with +imageWithData:. That gives you what you're looking for without loading the page again, which avoids a second load and a broken image if the page requires a login.
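On the extension side, the consuming code might look roughly like this (a sketch only; it assumes your preprocessing JavaScript returned the base64 string under a hypothetical "screenshotBase64" key via arguments.completionFunction):

import UIKit
import MobileCoreServices

// Sketch: reads the dictionary produced by the JS preprocessing file and decodes the image.
func loadScreenshot(fromContext context: NSExtensionContext, completion: (UIImage?) -> Void) {
    let item = context.inputItems.first as? NSExtensionItem
    let provider = item?.attachments?.first as? NSItemProvider

    provider?.loadItemForTypeIdentifier(kUTTypePropertyList as String, options: nil) { result, _ in
        // Whatever the JS file passed to completionFunction is wrapped under this key.
        let results = (result as? NSDictionary)?[NSExtensionJavaScriptPreprocessingResultsKey] as? NSDictionary
        let base64 = results?["screenshotBase64"] as? String   // key defined by your JS file
        var image: UIImage?
        if let base64 = base64, data = NSData(base64EncodedString: base64, options: []) {
            image = UIImage(data: data)
        }
        dispatch_async(dispatch_get_main_queue()) { completion(image) }
    }
}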
As far as I know, you can't, unless you invoke the API you need dynamically, and even then you might run into context-permission issues and App Store approval issues.
An alternative might be to pass the current Safari URL to your extension, load it in a hidden UIWebView and render that view into a UIImage, but you will lose the current visible-zone information...
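A minimal sketch of that alternative, assuming you have already extracted pageURL from the extension item (and keeping in mind the caveat above about losing the visible zone):

import UIKit

// Loads the page in an off-screen UIWebView and renders it once loading finishes.
class OffscreenRenderer: NSObject, UIWebViewDelegate {
    private let webView = UIWebView(frame: UIScreen.mainScreen().bounds)
    private var completion: ((UIImage?) -> Void)?

    func render(pageURL: NSURL, completion: (UIImage?) -> Void) {
        self.completion = completion
        webView.delegate = self
        webView.loadRequest(NSURLRequest(URL: pageURL))
    }

    func webViewDidFinishLoad(webView: UIWebView) {
        // Render the web view's layer into a bitmap context once the page has loaded.
        UIGraphicsBeginImageContextWithOptions(webView.bounds.size, false, 0)
        webView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        completion?(image)
    }
}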
Edit: So the below works in the Simulator but does not work on the device. I'm presently looking for a solution as well.
You can't get just the visible area of Safari, but you can get a screenshot with a little ingenuity. The following method captures a screenshot from a ShareViewController.
func captureScreen() -> UIImage
{
    // Get the "screenshot" view.
    let view = UIScreen.mainScreen().snapshotViewAfterScreenUpdates(false)

    // Add the screenshot view as a subview of the ShareViewController's view.
    self.view.addSubview(view)

    // Now screenshot *this* view.
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, false, 0)
    self.view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
    let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Finally, remove the subview.
    view.removeFromSuperview()
    return image
}
This is the approved way to capture the screenshot of a webpage in a share extension:
for (NSExtensionItem *item in self.extensionContext.inputItems) {
    for (NSItemProvider *itemProvider in item.attachments) {
        [itemProvider loadPreviewImageWithOptions:@{NSItemProviderPreferredImageSizeKey: [NSValue valueWithCGSize:CGSizeMake(60.0f, 60.0f)]} completionHandler:^(UIImage *image, NSError * _Null_unspecified error) {
            // Request the size you want here; note, however, that the returned image will not
            // necessarily be the size you requested, so your code should handle that case.
            // Use the UIImage however you wish here.
        }];
    }
}
My app has one ViewController, one UIWebView and 4 UIImageViews.
The control flow in my app:
-> Capture an image from the camera and store it in a UIImage variable; this happens in the didFinishPickingMediaWithInfo function. Do some processing on the image.
-> From the didFinishPickingMediaWithInfo function, load the UIWebView with inline HTML/CSS, i.e.
    webView.hidden = false
    webView.loadHTMLString(htmlString, baseURL: nil)
-> After the above HTML is loaded into the UIWebView, the delegate function webViewDidFinishLoad is called. From webViewDidFinishLoad, take a snapshot of whatever is loaded in the UIWebView into a UIImage with the code below:
    var image: UIImage?
    autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(webView.frame.size, false, 1.0)
        webView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
-> Store the captured image into a UIImage variable.
-> Load a different HTML/CSS string into the same UIWebView (I am still in the webViewDidFinishLoad function); the HTML loading code is the same as above, i.e.
    webView.loadHTMLString(htmlString1, baseURL: nil)
-> webViewDidFinishLoad is called again because of the new HTML loaded in the step above, and I do the same thing, i.e. take a snapshot of the UIWebView content into a UIImage variable and load the next HTML pattern.
-> I do this 4 times, i.e. in the end I have 4 images captured in 4 UIImage variables. I load these 4 images into the 4 UIImageViews on my storyboard.
-> Then I dismiss the image picker:
    imagePicker.dismissViewControllerAnimated(true, completion: nil)
Here is the issue I am seeing in the end:
-> Sometimes Image 1 is the same as Image 2, and sometimes all the images are the same. This happens randomly. I know for sure that all 4 images should be unique, because I load different HTML code in each step.
What am I doing wrong in the sequence above?
A couple things you might try:
1) Don't reload the webView in webViewDidFinishLoad. Instead, wait until the next run loop on the main thread (allowing iOS to fully finish). In Objective-C, my code would look like:
dispatch_async(dispatch_get_main_queue(), ^{
    // call method to load new html
});
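Since the question's code is in Swift, a rough Swift equivalent of the same idea (a sketch, using the question's webView and htmlString1 names) would be:

dispatch_async(dispatch_get_main_queue()) {
    // Defer loading the next HTML string until the current run loop has finished.
    self.webView.loadHTMLString(htmlString1, baseURL: nil)
}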
2) I've had issues with the webView not being refreshed by the time webViewDidFinishLoad was called. I solved this by adjusting the webView's frame. Goofy, yes. This was back in iOS 6, so I have no idea whether it makes any difference anymore or would affect what you are doing.
[webView loadHTMLString: html baseURL: nil];
// Play with frame to fix refresh problem
CGRect frame = webView.frame;
webView.frame = CGRectZero;
webView.frame = frame;
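The same trick translated to Swift (again just a sketch with the question's variable names):

webView.loadHTMLString(htmlString, baseURL: nil)
// Play with the frame to work around the refresh problem
let frame = webView.frame
webView.frame = CGRectZero
webView.frame = frame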
So here's a nice one. I'm creating an imageView by doing this:
var tagView = Titanium.UI.createImageView({
    backgroundImage: 'http://www.travelandtourworld.com/wp-content/uploads/2013/07/google-logo.jpg',
    height: 150,
    width: 365,
    zIndex: 10000
});
The problem is that any time I use a remote URL as a background image, it doesn't show up. Has anyone run into this, and is there a good workaround for it?
It's just a rough guess, but does it work when you use a normal View instead of an ImageView? Or try the image property instead of the backgroundImage property on the ImageView. I just think a background image is not best practice on an ImageView, even though the docs say it's possible.
I've done some testing with this as well and also found that backgroundImage doesn't work for remote URLs.
I've sort of fixed it by hacking this code into TiUtils.m of the Appcelerator core (tested with 3.5.0.GA).
if (resultImage == nil) {
    if ([image isKindOfClass:[NSString class]]) {
        NSURL *imageURL = [TiUtils toURL:image relativeToURL:nil];
        resultImage = [[ImageLoader sharedLoader] loadRemote:imageURL];
    }
}
I'm trying to take a screenshot of a UIWebView and send it via an observer to a UIImageView in another class.
I'm using this method to take the screenshot:
- (UIImage *)takeScreenshoot {
    @autoreleasepool {
        UIGraphicsBeginImageContext(CGSizeMake(self.view.frame.size.width, self.view.frame.size.height));
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.webPage.layer renderInContext:context];
        UIImage *__weak screenShot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return screenShot;
    }
}
But then I have a problem: every time I take a screenshot with this method, memory usage grows by about 10-15 MB and is never released. And if I take a screenshot in every webViewDidFinishLoad, you can imagine how much memory it can consume!
How can I fix that issue?
If possible, try to use -[UIScreen snapshotViewAfterScreenUpdates:], which returns a UIView.
This is a snapshot of the currently displayed content (a snapshot of the app).
Apple even says "this method is faster than trying to render the contents of the screen into a bitmap image yourself."
According to your code, you are passing this bitmap image just to display it in some other UIImageView, so I think using the UIScreen method is appropriate here.
To display only the UIWebView part:
Create another UIView instance and set its frame to the frame of your webView.
Now add the snapshot view as a subview of that created view, and set its frame so that only the webView portion is displayed.
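A rough Swift sketch of that idea (assuming webPage is the web view from the question, its frame is expressed in screen coordinates, and view is the presenting controller's view):

let snapshot = UIScreen.mainScreen().snapshotViewAfterScreenUpdates(false)
let containerView = UIView(frame: webPage.frame)
containerView.clipsToBounds = true
// Shift the full-screen snapshot so that only the web view's region shows through.
snapshot.frame = CGRect(x: -webPage.frame.origin.x,
                        y: -webPage.frame.origin.y,
                        width: UIScreen.mainScreen().bounds.width,
                        height: UIScreen.mainScreen().bounds.height)
containerView.addSubview(snapshot)
view.addSubview(containerView)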
Try calling CGContextRelease(context); after you have got your screenshot.
Or, as @Greg said, remove that line and use UIGraphicsGetCurrentContext() directly.
Is anyone using this delegate method? I get callbacks on
- (BOOL)textViewShouldBeginEditing:(UITextView *)textView
But not on this one. The documentation seems a bit ambiguous about what this is intended for:
- (BOOL)textView:(UITextView *)textView shouldInteractWithTextAttachment:(NSTextAttachment *)textAttachment inRange:(NSRange)characterRange
According to the documentation on the Web, this is what it's intended for:
Discussion
The text view calls this method if the user taps or long-presses the text attachment and its image property is not nil. Implementation of this method is optional. You can use this method to trigger an action in addition to displaying the text attachment inline with the text.
And here is Xcode 5 documentation:
Asks the delegate if the specified text view should display the provided text attachment in the given range of text.
The text view calls this method when a text attachment character is recognized in its text container by a data detector. Implementation of this method is optional. You can use this method to trigger an alternative action besides displaying the text attachment inline with the text in the given range.
EDIT:
Mmm, OK, I figured out the problem. If I paste an image in from iOS, it works; however, if the image was pasted in from OS X, it does not. It seems that the actual attachment formats used are not quite the same on both platforms, despite the fact that the image appears to show up correctly in the text views. On closer inspection, the NSTextAttachment classes don't appear to be the same on iOS as on OS X.
If anyone can shed any light on the cross platform compatibility here please do.
Also, if I save the attributed string after pasting the image in on iOS and then retrieve it and display it in the UITextView, interaction with the attachment is no longer possible. It would appear that when the attachment is stored, the image is actually placed in contents if contents is nil. So maybe I am going to have to iterate through all attachments to check what data is stored where, particularly to figure out any differences in behaviour across the OS X and iOS platforms.
FURTHER EDIT:
The method only gets called if the attachment's image is NOT nil, and despite the fact that an image is displayed, the actual image attribute can be nil. Silly me! Anyway, the fix seems to be to check all the attachments in the attributed string and set their image attribute to something, usually the contents of the fileWrapper. The default NSTextAttachment behaviour seems to be to store the image in the fileWrapper when it's archived, but it does not do the reverse when it's unarchived. Anyway, I want to retain the original image in the attachment but, depending on the device, display a suitably scaled version of the original!
The chief thing is that the text view's editable property must be NO and its selectable property must be YES. Those are the only circumstances under which this delegate method is called. If you are getting shouldBeginEditing then your text view is editable, which is exactly what it must not be.
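In other words, the setup has to look roughly like this (a small Swift sketch):

textView.editable = false     // must NOT be editable
textView.selectable = true    // must be selectable
textView.delegate = self      // otherwise the callback is never delivered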
Here is what I do to ensure the NSTextAttachment's image attribute gets set when restoring the UITextView's attributed string from archived data (in this case, whenever the user selects a record from a Core Data store).
I set the UITextView up as the delegate of its textStorage, and in didProcessEditing I look for any attachments that may have been added and then check that their image attribute is set. I am also setting the scaling factor on the image to make sure the image scales appropriately for the device.
This way I don't lose the original resolution of the image, and if the user wants to view it in more detail I provide the option to open it in an image browser window from a popup menu.
Hope this helps someone else.
EDIT:
Check here for more details on NSTextView and UITextView http://ossh.com.au/design-and-technology/software-development/
- (void)textStorage:(NSTextStorage *)textStorage didProcessEditing:(NSTextStorageEditActions)editedMask range:(NSRange)editedRange changeInLength:(NSInteger)delta {
    //FLOG(@"textStorage:didProcessEditing:range:changeInLength: called");
    [textStorage enumerateAttributesInRange:editedRange options:NSAttributedStringEnumerationLongestEffectiveRangeNotRequired usingBlock:
     ^(NSDictionary *attributes, NSRange range, BOOL *stop) {
         // Iterate over each attribute and look for a text attachment
         [attributes enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
             if ([[key description] isEqualToString:NSAttachmentAttributeName]) {
                 NSTextAttachment *attachment = obj;
                 // Reset the image attribute and scale for the device size if necessary
                 [self resetAttachmentImage:attachment];
             }
         }];
     }];
}
- (void)resetAttachmentImage:(NSTextAttachment *)attachment {
    UIImage *image = [attachment image];
    float factor = 2;
    if (image == nil) {
        if (attachment.fileWrapper == nil || !attachment.fileWrapper.isRegularFile) {
            attachment.image = [UIImage imageNamed:@"unknown_attachment.png"];
            return;
        }
        // Usually retrieved from the store
        image = [UIImage imageWithData:attachment.fileWrapper.regularFileContents];
    } else {
        // Convert any pasted image
        image = [UIImage imageWithData:UIImagePNGRepresentation(image)];
    }
    float imgWidth = image.size.width;
    // If it's wider than the view, scale it to fit
    if (imgWidth > _viewWidth) {
        factor = imgWidth / _viewWidth + 0.5;
        attachment.image = [UIImage imageWithData:UIImagePNGRepresentation(image) scale:factor];
    } else {
        attachment.image = [UIImage imageWithData:UIImagePNGRepresentation(image) scale:_scale];
    }
}
I'm working on my first iOS app. The app downloads and displays data from a database via a PHP web page. That's all working fine. I also grab an image from the same web server to display in a UIImageView. This all works fine in the Simulator.
On my test device (a 3GS), everything works except that I cannot get the downloaded image to display in my UIImageView.
If there is no internet connection, I am able to display the alternate image included in the app's bundle on both the simulator and the device.
// Inside my data class implementation
- (void)setUpTheData
{
    // --- other code ---
    NSURL *myImageURL = [NSURL URLWithString:myImageURLString];
    NSData *myImageRawData = [NSData dataWithContentsOfURL:myImageURL];
    self.myImageData = myImageRawData;
}

- (NSData *)getTheImageData
{
    return _myImageData;
}
// Inside my viewDidLoad
UIImage *theImage = [UIImage imageWithData:[theRemoteData getTheImageData]];
_testImage.image = theImage;
I've compared the image data from both the simulator and the device and they are the same.
Try viewDidAppear instead of viewDidLoad maybe?
I had set a weak pointer on the imageData property in my data class. Changing it to strong fixed my problem.
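For anyone hitting the same thing, here is the shape of that fix as a Swift sketch (property name assumed); with a weak reference, nothing keeps the downloaded data alive, so it can already be nil by the time the view controller reads it:

// weak var myImageData: NSData?    // before: the data was released almost immediately
var myImageData: NSData?            // after: a strong (default) reference keeps the bytes alive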