I have a GameViewController with several subviews representing game objects, such as scores, players, history, etc.
The user can drag around custom shapes on the GameBoard (CAShapeLayer).
I want to add a blur to the custom shapes. However, blurring the background with all the game elements first and then masking the image to the custom shape is horribly slow.
Is this possible to do with good performance, and how would you do it in that case?
Thanks
You should use the GPUImage library if you are concerned about performance. https://github.com/BradLarson/GPUImage
This tutorial walks through the blur functionality you need:
http://www.raywenderlich.com/60968/ios-7-blur-effects-gpuimage
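A rough sketch of blurring a single snapshot with GPUImage (Swift; boardSnapshot stands in for whatever UIImage you capture of the board, and exact property names can vary between GPUImage versions):

    import GPUImage

    let picture = GPUImagePicture(image: boardSnapshot)   // boardSnapshot: a UIImage of the game board
    let blur = GPUImageGaussianBlurFilter()
    blur.blurRadiusInPixels = 8.0                         // example value, tune to taste

    picture.addTarget(blur)
    blur.useNextFrameForImageCapture()
    picture.processImage()

    // Use this as the contents you mask to the custom shape
    let blurredImage = blur.imageFromCurrentFramebuffer()

Masking that single blurred image to the shape's path should be much cheaper than re-blurring on every frame.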
I'm considering the following usage scenario:
The user adds a new shape to a page
He or she places the shape freely on the current view (e.g. using a pan gesture)
Also, the user can resize the shape (e.g. using a pinch gesture on it)
The shape snaps to already existing shapes on the drawing
Implementation details: A page is a custom UIView. The user can add as many shapes as he or she likes. Shapes are instances of different custom UIViews, which need to adapt their content to the new size (see step 3). Overlapping shapes are not possible. And the user must be able to select shapes for operations like "remove".
Question:
I could simply instantiate the shape UIViews and add them as subview to the page UIView. But then I would need to implement all the details like tracking touches to move UIViews to new positions, running the collision detection, handle resize and so on.
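For context, the move/resize part of the manual approach is roughly what I have in mind (a rough sketch, nothing final; snapping and collision detection would still sit on top of this):

    func attachGestures(to shapeView: UIView) {
        shapeView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        shapeView.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let view = gesture.view else { return }
        let translation = gesture.translation(in: view.superview)
        view.center = CGPoint(x: view.center.x + translation.x, y: view.center.y + translation.y)
        gesture.setTranslation(.zero, in: view.superview)
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let view = gesture.view else { return }
        // Resize the bounds so the shape view can redraw its content for the new size
        view.bounds.size.width *= gesture.scale
        view.bounds.size.height *= gesture.scale
        view.setNeedsDisplay()
        gesture.scale = 1
    }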
Are there controls/classes/... in UIKit, CoreGraphics or the like available which would make my life easier?
I found UIDynamicBehaviors in UIKit. Would this be the way to go, or would you suggest a different approach?
Thanks a lot in advance!
Cheers,
Chris
So I'm looking into UIKit Dynamics, and the one problem I have run into is that if I create a UIView with a custom drawRect: (for instance, a triangle), there seems to be no way to specify the path of the UIView (or rather, of the UIDynamicItem) to be used for a UICollisionBehavior.
My goal really is to have polygons on the screen that collide with one another exactly how one would expect.
I came up with a solution of stitching multiple views together but this seems like overkill for what I want.
Is there some easy way to do this, or do I really have to stitch views together?
Dan
Watch the WWDC 2013 videos on this topic. They are very clear: for the sake of efficiency and speed, only the (rectangular) bounds of the view matter during collisions.
EDIT In iOS 9, a dynamic item can have a customized collision boundary. You can have a rectangle dictated by the frame, an ellipse dictated by the frame, or a custom shape — a convex counterclockwise simple closed UIBezierPath. The relevant properties, collisionBoundsType and (for a custom shape) collisionBoundingPath, are read-only, so you will have to subclass in order to set them.
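A minimal subclass sketch (iOS 9+; the triangle geometry is just an example, and the path must be convex, expressed relative to the item's center):

    import UIKit

    class TriangleView: UIView {
        override var collisionBoundsType: UIDynamicItemCollisionBoundsType {
            return .path
        }

        override var collisionBoundingPath: UIBezierPath {
            // Coordinates are relative to the view's center; depending on UIKit's
            // flipped coordinate space you may need to reverse the vertex order
            // to satisfy the counterclockwise winding requirement.
            let path = UIBezierPath()
            let w = bounds.width, h = bounds.height
            path.move(to: CGPoint(x: 0, y: -h / 2))         // top vertex
            path.addLine(to: CGPoint(x: -w / 2, y: h / 2))  // bottom-left
            path.addLine(to: CGPoint(x: w / 2, y: h / 2))   // bottom-right
            path.close()
            return path
        }
    }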
If you really want to collide polygons, you might consider SpriteKit and its physics engine (it seems to share a lot in common with UIDynamics). It can mix with UIKit, although maybe not as smoothly as you'd like.
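For reference, the SpriteKit equivalent of a polygon collider is a one-liner on a physics body (a sketch; the triangle points are illustrative and must form a convex, counterclockwise path):

    import SpriteKit

    let trianglePath = CGMutablePath()
    trianglePath.move(to: CGPoint(x: 0, y: 40))
    trianglePath.addLine(to: CGPoint(x: -40, y: -40))
    trianglePath.addLine(to: CGPoint(x: 40, y: -40))
    trianglePath.closeSubpath()

    let node = SKShapeNode(path: trianglePath)
    node.physicsBody = SKPhysicsBody(polygonFrom: trianglePath)   // collides as a real triangle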
I am trying to capture an image from the camera and show it in a UIImageView.
After that I have some buttons, e.g. "Paint Brush", "Eraser", "Undo", and "Save".
Using the brush, I want to mark some items on the captured image.
What is the best way to accomplish the annotation and then save the image?
I am not sure what should be used. Should I use touchesBegan/touchesEnded etc., or some other alternative?
Regards,
Nirav
You need to understand UIBezierPath basics. Search for it on Google or in the Apple documentation.
The UIBezierPath class is an Objective-C wrapper for the path-related features in the Core Graphics framework. You can use this class to define simple shapes, such as ovals and rectangles, as well as complex shapes that incorporate multiple straight and curved line segments.
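To make that concrete, here is a minimal sketch of a drawing overlay (class and property names are just illustrative, not from any library): record each stroke as a UIBezierPath in touchesBegan/touchesMoved, redraw, and flatten everything into one image when saving.

    import UIKit

    class DrawingView: UIView {
        var baseImage: UIImage?                    // the captured photo behind the strokes
        private var paths: [UIBezierPath] = []
        private var currentPath: UIBezierPath?

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            let path = UIBezierPath()
            path.lineWidth = 4
            path.move(to: point)
            currentPath = path
            paths.append(path)
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            currentPath?.addLine(to: point)
            setNeedsDisplay()
        }

        override func draw(_ rect: CGRect) {
            baseImage?.draw(in: bounds)
            UIColor.red.setStroke()
            paths.forEach { $0.stroke() }
        }

        // "Undo" simply drops the last stroke
        func undo() {
            if !paths.isEmpty { paths.removeLast(); setNeedsDisplay() }
        }

        // Flatten the photo plus annotations into one UIImage for saving
        func renderAnnotatedImage() -> UIImage {
            let renderer = UIGraphicsImageRenderer(bounds: bounds)
            return renderer.image { _ in
                baseImage?.draw(in: bounds)
                UIColor.red.setStroke()
                paths.forEach { $0.stroke() }
            }
        }
    }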
I have implemented the ability to blur images in my iOS app using the pinch gesture; however, I would like to add a circular white overlay, commonly used as a reference point with the pinch gesture, so that the user can adjust the amount of blur. Just like the image below:
The image above was from: https://media.tumblr.com/tumblr_lutwauVUW31qm4rc3.png
How can I implement this feature?
Thanks!
The GPUImageGaussianSelectiveBlurFilter in the GPUImage library may help you a lot; the GitHub source is at https://github.com/BradLarson/GPUImage.
I think it is not hard to use; I hope you will enjoy it.
You can use a GPUImageVignetteFilter, and set the vignette color to white.
I'm guessing you're implementing the blur with GPUImageGaussianSelectiveBlurFilter within GPUImage (because I see you tagged GPUImage in your question). If you are, you'll notice that the properties on GPUImageGaussianSelectiveBlurFilter don't exactly translate over to GPUImageVignetteFilter, so you'll have to do a bit of math to translate to a new "coordinate" system, but it's fairly trivial.
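A rough sketch of driving both filters from the same pinch value (the property names are GPUImage's; the mapping from pinch scale to radius is just a placeholder you would tune, and the two filters use slightly different normalized coordinate conventions, so expect to adjust it):

    let selectiveBlur = GPUImageGaussianSelectiveBlurFilter()
    let vignette = GPUImageVignetteFilter()
    vignette.vignetteColor = GPUVector3(one: 1.0, two: 1.0, three: 1.0)   // white ring

    func pinchChanged(scale: CGFloat) {
        // Placeholder mapping: grow/shrink the sharp circle with the pinch
        let radius = min(max(0.3 * scale, 0.1), 0.6)          // normalized (0...1) coordinates
        selectiveBlur.excludeCircleRadius = radius
        vignette.vignetteStart = radius                       // white overlay begins where the sharp area ends
        vignette.vignetteEnd = radius + 0.05                  // thin falloff
    }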
I am looking into making an iOS app that has little creatures. I plan on having these creatures grow and change shapes based on user interaction. So the creatures could end up looking very different based on what the user does.
My problem is animating these creatures. I have dealt with simple animations in the past with cocos2d, but nothing like this.
How can I animate these creatures at different sizes and shapes without having my graphic designer draw every possible image that could be used? In the game Spore, a user can create an animal of whatever shape or size they want, and these animals animate. My question is: how can I do something similar in 2D? I know this can't be a simple answer, but a point in the right direction is all I am looking for.
You could use CGAffineTransforms to scale your drawing, and perhaps custom Core Image filters to change the color.
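A small sketch of both ideas, assuming the creature is drawn in a UIImageView called creatureView (that name and the hue value are just for illustration):

    // Scale the creature with a transform
    creatureView.transform = CGAffineTransform(scaleX: 1.5, y: 1.5)

    // Recolor its image with Core Image
    if let cgImage = creatureView.image?.cgImage {
        let input = CIImage(cgImage: cgImage)
        let filter = CIFilter(name: "CIHueAdjust")!
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(Float.pi / 2, forKey: kCIInputAngleKey)   // rotate the hue 90 degrees
        if let output = filter.outputImage,
           let rendered = CIContext().createCGImage(output, from: output.extent) {
            creatureView.image = UIImage(cgImage: rendered)
        }
    }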