In Apple's Pages app, you can add an image, text box, or shape layer to the page and then resize it by tapping on it so that handles appear. Something similar happens in the Pixelmator app and a few others. Is this something Apple provides that I can use in my app, or would I have to build it myself?
As far as I know there is no system support for resize handles, so you will need to build them yourself. That's what I've done when I needed them: I added views on top of the thing I wanted to resize, with pan gesture recognizers attached.
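A minimal sketch of that approach (assuming a hypothetical targetView property for the view being resized; none of these names are system API, and a real implementation would want one handle per corner):

```
// Add a draggable handle pinned to the bottom-right corner of targetView.
- (void)addResizeHandle {
    UIView *handle = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 24, 24)];
    handle.backgroundColor = [UIColor blueColor];
    handle.center = CGPointMake(CGRectGetMaxX(self.targetView.frame),
                                CGRectGetMaxY(self.targetView.frame));
    [self.targetView.superview addSubview:handle];

    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handlePan:)];
    [handle addGestureRecognizer:pan];
}

// Grow or shrink the target as the handle is dragged, then move the
// handle so it stays pinned to the corner.
- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint translation = [pan translationInView:self.targetView.superview];
    CGRect frame = self.targetView.frame;
    frame.size.width  = MAX(44.0, frame.size.width  + translation.x);
    frame.size.height = MAX(44.0, frame.size.height + translation.y);
    self.targetView.frame = frame;
    pan.view.center = CGPointMake(CGRectGetMaxX(frame), CGRectGetMaxY(frame));
    [pan setTranslation:CGPointZero inView:self.targetView.superview];
}
```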
I have an app called CIFilterTest on GitHub (written in Objective-C, unfortunately) that uses resize handles to let the user move around the points and rects that the various Core Image filters need. Even though it's written in Objective-C, it should give you the idea.
Note that most Core Image filters run VERY slowly on the Simulator, making the app seem extremely laggy; that's just an artifact of the Simulator, and they run much faster on an actual iOS device.
Related
I have been researching for a while, and I have found a lot of questions and answers on how to use a gesture recognizer on a UIImageView in iOS to make the image interactive. However, I am hoping to make an interactive image that reacts differently depending on the part of the image that is tapped (think of an interactive map or an interactive image of the human body).
I am still in the planning phase. My first idea was to overlay UIButtons on the image, but I cannot see how to set up constraints so the buttons stay fixed over the right parts of the image on different devices. I am looking for ideas: either another way of doing this, or a way to make the constraints work.
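For instance, one idea I've considered instead of buttons is hit-testing the tap in normalized coordinates, so the regions scale with the image on any device. A rough sketch (the regions are invented, and it assumes the image fills the image view, so aspect-fit letterboxing would need an extra adjustment):

```
// One tap recognizer on the image view; the tap is converted to
// coordinates in the 0-1 range so regions work at any size.
- (void)viewDidLoad {
    [super viewDidLoad];
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(imageTapped:)];
    self.imageView.userInteractionEnabled = YES;
    [self.imageView addGestureRecognizer:tap];
}

- (void)imageTapped:(UITapGestureRecognizer *)tap {
    CGPoint p = [tap locationInView:self.imageView];
    CGSize size = self.imageView.bounds.size;
    CGPoint normalized = CGPointMake(p.x / size.width, p.y / size.height);

    // Invented example regions for a human-body image.
    if (normalized.y < 0.2) {
        NSLog(@"Tapped the head region");
    } else if (normalized.y < 0.55) {
        NSLog(@"Tapped the torso region");
    } else {
        NSLog(@"Tapped the legs region");
    }
}
```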
I have lots of views, subclasses of UILabel, UIView, UIButton, etc., that use Core Graphics (drawing within drawRect:).
These show fine in normal Interface Builder.
However, they do not show in the preview assistant editor for the storyboard (where you can see how views should look on actual devices).
I've been doing some research and found posts suggesting prepareForInterfaceBuilder should be used; however, that doesn't show anything in the preview for me.
I asked this question some years ago (IB_DESIGNABLE, having views show in preview?), but I can't reproduce it now, and I'm not convinced it worked back then either.
I'm still following the same approach, with a framework, and that link shows my implementation.
I know that prepareForInterfaceBuilder is meant for doing something different, i.e. showing something basic, so I believe Core Graphics won't work.
However, I can't even get a simple change in background color to work in the preview, although it does work in normal Interface Builder.
I would like to know whether this is a bug (that Core Graphics cannot be used in the preview) or still a limitation of Xcode.
At the very least I'd like to do something simple (like a change in background color); I have a lot of views, and making Auto Layout changes otherwise is an impossible task.
Previously I was producing lots of screenshots for different devices, languages, etc. (via automation) just so I could see my Auto Layout changes, which is a really slow way to work.
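For reference, this is the kind of minimal test case I mean (TestDesignableView is just an example name; the class lives in a framework target, as in the linked question). It renders on the storyboard canvas but not in the preview:

```
// An IB_DESIGNABLE view that only changes its background color for
// Interface Builder; the color shows on the canvas but not in preview.
IB_DESIGNABLE
@interface TestDesignableView : UIView
@end

@implementation TestDesignableView

- (void)prepareForInterfaceBuilder {
    [super prepareForInterfaceBuilder];
    self.backgroundColor = [UIColor redColor];
}

@end
```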
I can verify this is a long-standing bug. Apple doesn't care. Custom views be damned. All those dollars spent for more broken Xcode functionality.
Android?
Daniel
I would like to know when the screen is being drawn on iOS. In particular, I'd like to know whether any visible changes are being drawn on screen. This is handy for knowing how long a page took to render, for example (assuming the user is not interacting with the page). I would like to be able to capture this information in a regular production build, not in a developer build, and I'd like a general solution applicable to almost any page in my app, not just a specific one.
For example, I have a page that 1) asynchronously queries an API for data, 2) displays that data in a UITableView where some of the entries may be offscreen, and then 3) asynchronously downloads the images for each of the visible items on the screen. I want to get callbacks when the UITableView is rendered and when all of the images are rendered. The total time to render the page can be determined by looking at the timestamp of the last call to the callback (again, assuming no user interaction).
On Android, this is fairly simple. You can use ViewTreeObserver.addOnPreDrawListener to get a callback whenever the screen is about to be drawn. If there's no visible change to the screen, the callback is not called.
On iOS, it looks like CADisplayLink might potentially serve a similar purpose. However, when I hook up my CADisplayLink, it appears to be called over and over forever, whether or not there are visible changes on the screen.
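For reference, this is roughly how I'm hooking it up (a sketch). As far as I can tell, once added to a run loop the link simply fires on every display refresh, tied to the display rather than to view changes:

```
// CADisplayLink fires once per display refresh after being added to a
// run loop, whether or not anything visible changed.
- (void)startDisplayLink {
    CADisplayLink *link =
        [CADisplayLink displayLinkWithTarget:self
                                    selector:@selector(displayTick:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop]
               forMode:NSRunLoopCommonModes];
}

- (void)displayTick:(CADisplayLink *)link {
    NSLog(@"frame timestamp: %f", link.timestamp);
}
```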
Is there a way to know when there are visible changes to the screen being drawn in iOS?
In iOS 9, Apple made it impossible to access things drawn onto the screen outside of your app. Before that, it was possible to use an API called IOSurface to do it, but Apple closed it down in iOS 9 (to prevent apps from snooping on each other).
So if you're talking about ANYTHING being drawn to the screen, the answer is no. If you're only looking for changes within your own app, there's probably a way to do it.
My iOS app (Objective-C) handles photos. I'd like it to be able to offer the user a way to automatically "adjust" an image, like iOS itself does in the Photos app (the little magic-wand icon), or like Facebook does. This basically means auto-brightness and auto-contrast adjustment.
So far I've found "filtrr" (which seems more concerned with adding color) and OpenCV (uhh, feels like using a nuclear missile to swat a fly). Any other hints? Is there some library, or a way of doing this natively in iOS?
thx!
Look into Core Image for info on filters and how to apply them. Apple's programming guide is a good place to start.
Once you're up and running with Core Image, see the autoAdjustmentFilters method for getting a set of filters that's preconfigured for "one touch enhance" kinds of usage.
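Typical usage looks something like this (a sketch; sourceImage stands in for whatever UIImage you're enhancing):

```
// Apply Core Image's "one touch enhance" filter chain to a UIImage.
CIImage *ciImage = [CIImage imageWithCGImage:sourceImage.CGImage];

// Ask Core Image which adjustment filters suit this image
// (skipping red-eye correction here, as an example).
NSArray *filters = [ciImage autoAdjustmentFiltersWithOptions:
                        @{kCIImageAutoAdjustRedEye : @NO}];
for (CIFilter *filter in filters) {
    [filter setValue:ciImage forKey:kCIInputImageKey];
    ciImage = filter.outputImage;
}

// Render the result back to a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
UIImage *enhanced = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
```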
I am working on a small camera app for a client, and I have now finished all of its functionality. In the standard camera controls I need to modify one thing: the cancel button should say "Gallery" instead.
But unless I am missing something, I will need to remove the overlay by setting showsCameraControls to NO and then build my entire overlay view from scratch.
I have found this solution, but I am afraid to go this route because of the warning at the beginning of the post.
So is there any valid way of making simple, small modifications to the existing camera overlay control UI, or do you have to build it from scratch if you need to change one tiiiiiiny thing?
Unfortunately, having been in this situation, I can safely say you need to build the controls from scratch. You really only have two options: create your own camera overlay or use the default one.
Now, you could use the techniques described in the link you cite and iterate through the various subviews, modifying them 'blind'. The rather large danger is that every time Apple changes the internal structure of the image picker, it could break your solution. So I'd definitely steer clear of it.
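For what it's worth, the from-scratch route starts out roughly like this (a sketch; showGallery: is a hypothetical action, and you'd add your own shutter button and other controls, plus call takePicture yourself):

```
// Hide the default controls and supply a custom overlay with a
// "Gallery" button where the cancel button would normally be.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;

UIView *overlay =
    [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];
UIButton *galleryButton = [UIButton buttonWithType:UIButtonTypeSystem];
[galleryButton setTitle:@"Gallery" forState:UIControlStateNormal];
galleryButton.frame = CGRectMake(20, CGRectGetMaxY(overlay.bounds) - 64,
                                 100, 44);
[galleryButton addTarget:self
                  action:@selector(showGallery:)  // hypothetical action
        forControlEvents:UIControlEventTouchUpInside];
[overlay addSubview:galleryButton];
// ...add a shutter button, flash toggle, etc. here...

picker.cameraOverlayView = overlay;
[self presentViewController:picker animated:YES completion:nil];
```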