I've got some data that I would like to render on top of an MKMapView using OpenGL. Currently I can sort of achieve this by placing a transparent OpenGL layer on top of the MKMapView and drawing to it with OpenGL commands.
However, the problem becomes synchronizing the drawing of the OpenGL layer with the drawing that MKMapView does. I can partly get around this by drawing on touch events, which works well until you "flick" the map: the flick triggers a continuous series of draws for the deceleration animation that I don't detect.
Another idea was to use an MKOverlayView and hope that OpenGL drawing could be done with it, but I'm not sure how exactly to approach that.
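One workaround worth sketching (my own, not from the post; mapView, glOverlayView, and the method names are placeholders) is to drive the redraw from a CADisplayLink instead of from touch events, so the overlay repaints every frame, including during the flick deceleration:

    // Hedged sketch: redraw the GL overlay on every screen refresh.
    - (void)startDisplayLink {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(drawFrame)];
        // Common modes keep the link firing while the map is tracking/animating.
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)drawFrame {
        // Re-read the map's visible region each frame and redraw the overlay.
        MKCoordinateRegion region = self.mapView.region;
        [self.glOverlayView drawRegion:region]; // hypothetical drawing method
    }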
I would recommend evaluating BA3's "Altus" mapping engine. It is built entirely in OpenGL, so in the worst case you could simply render to the same context. However, it would probably be better if you could take advantage of its support for geo-located raster, vector, and marker elements.
Full disclosure: I'm friends with the authors, but have no financial interest.
I'm trying to recreate the barrel effect that can be seen on the camera mode picker below:
[image: camera mode picker (source: androidnova.org)]
Do I have to use OpenGL in order to achieve this effect? What is the best approach?
I found a great library on GitHub that can be used to achieve this effect (https://github.com/Ciechan/BCMeshTransformView), but unfortunately it doesn't support animation and is therefore not usable.
I bet Apple used a private mesh-transform API (CAMeshTransform). It's just like BCMeshTransform, except it's private and fully integrated with Core Animation; BCMeshTransformView was born when a developer discovered it.
The only easy option I see is:
Use CALayer.transform, which is a CATransform3D. You can use this to simulate the barrel effect you want by adjusting the z position and y rotation of each item on the barrel. Also add a semi-transparent dark gradient (CAGradientLayer) over the wheel to simulate the choices getting darker towards the edges. This is simple to do, but it won't look as smooth and realistic as an actual 3D barrel; maybe it will look good enough to create a convincing illusion, though. (To enable the 3D perspective, set the transform's m34 component, e.g. t.m34 = -1.0 / 500.0, before applying the rotations.)
http://www.thinkandbuild.it/introduction-to-3d-drawing-in-core-animation-part-1/
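A rough sketch of that option (itemLayers, stepAngle, and kBarrelRadius are names I made up, not an established API): give the transform perspective via m34, then rotate each item around a vertical axis placed behind the screen so the items sit on a virtual cylinder:

    // Hedged sketch: arrange picker items on a virtual barrel.
    CATransform3D perspective = CATransform3DIdentity;
    perspective.m34 = -1.0 / 500.0;         // enable depth; tune 500 to taste
    CGFloat kBarrelRadius = 120.0;          // hypothetical radius in points
    CGFloat stepAngle = M_PI / 12.0;        // hypothetical angular spacing
    NSInteger center = (NSInteger)itemLayers.count / 2;
    for (NSInteger i = 0; i < (NSInteger)itemLayers.count; i++) {
        CGFloat angle = (i - center) * stepAngle;
        // Rotate about an axis kBarrelRadius points behind the layer.
        CATransform3D t = CATransform3DTranslate(perspective, 0, 0, -kBarrelRadius);
        t = CATransform3DRotate(t, angle, 0, 1, 0);
        t = CATransform3DTranslate(t, 0, 0, kBarrelRadius);
        ((CALayer *)itemLayers[i]).transform = t;
    }

The center item keeps the identity transform; items further out rotate away from the viewer and darken under the gradient.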
The hardest option is a custom OpenGL view that builds a barrel shape and applies your content to it as a texture. I would expect you to run into most of the complexity behind creating BCMeshTransformView, and to have difficulty supporting animations, just as BCMeshTransformView did.
You may still be able to use BCMeshTransformView though. BCMeshTransformView is slow at processing content animations such as color changes, but is very fast at processing geometry changes. So you could use it to do a barrel effect, as long as you define the barrel effect entirely in terms of mesh geometry changes (instead of as content changes like using a scroll view or adjusting subview positions). You would need to do gesture handling + scrolling yourself instead of using UIScrollView though, which is tricky and tedious to get right.
Considering the options, I would want to fudge it by using 3D transforms, then move to other options only if I can't create a convincing illusion using 3D transforms.
I'm considering building an app that would make heavy use of a flood-fill / paint-bucket feature. The images I'd be coloring are like coloring-book pages: white background, black borders. I'm debating whether it's better to use UIImage (manipulating pixel data directly) or to draw the images with Core Graphics and change the fill color on touch.
With UIImage, I'm unable to account for Retina images properly; the image gets destroyed when I write the context into a new UIImage, but I can probably figure that out. I'm open to tips, though...
With Core Graphics, I have no idea how to determine which shape to fill when a user touches an area, or how to actually fill that area. I've searched but haven't turned up anything that works.
Overall, I believe the optimal solution is Core Graphics, since it'll be lighter-weight and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or UIBezierPath, and use Core Graphics to stroke and/or fill them. Filling a shape is then as easy as switching the drawing mode from stroke-only to fill-and-stroke.
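For instance, a minimal sketch inside a custom UIView subclass (shapePaths, fillColors, and currentColor are hypothetical properties: an array of UIBezierPath objects, a mutable array of UIColor, and the selected color):

    // Hedged sketch: tap a shape to refill it, then redraw everything.
    - (void)handleTap:(UITapGestureRecognizer *)tap {
        CGPoint point = [tap locationInView:self];
        for (NSUInteger i = 0; i < self.shapePaths.count; i++) {
            if ([self.shapePaths[i] containsPoint:point]) {
                self.fillColors[i] = self.currentColor; // recolor the tapped shape
                [self setNeedsDisplay];
                break;
            }
        }
    }

    - (void)drawRect:(CGRect)rect {
        for (NSUInteger i = 0; i < self.shapePaths.count; i++) {
            UIBezierPath *path = self.shapePaths[i];
            [self.fillColors[i] setFill];
            [path fill];                      // fill first...
            [[UIColor blackColor] setStroke];
            [path stroke];                    // ...then the outline on top
        }
    }

Because paths are resolution-independent, this also sidesteps the Retina problem from the question.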
Creating more complex shapes is made much easier with the PaintCode app, which lets you draw shapes and generates the path code for you.
For a first app, though, I would suggest something with a little less custom graphics fiddling.
My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use a VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method does indeed work. However, there's a bug in iOS 5.x on Retina-display devices that produces an artifact when using single buffering. The problem was fixed in iOS 6; I don't yet have a workaround for iOS 5.
You could simply update part of the texture with glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect set (glScissor) to the "dirty" part. GL will then discard any fragments outside that rect.
For this to work, you must of course use single buffering.
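A minimal sketch of that (dirty, dirtyPixels, textureID, and drawFullScreenQuad are placeholders):

    // Hedged sketch: upload only the changed pixels, then scissor the redraw.
    glBindTexture(GL_TEXTURE_2D, textureID);
    // ES 1.1 has no GL_UNPACK_ROW_LENGTH, so dirtyPixels must point at a
    // tightly packed copy of just the dirty rectangle's rows.
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    dirty.x, dirty.y, dirty.width, dirty.height,
                    GL_RGBA, GL_UNSIGNED_BYTE, dirtyPixels);

    glEnable(GL_SCISSOR_TEST);
    // Scissor uses window coordinates with the origin at the bottom left.
    glScissor(dirty.x, dirty.y, dirty.width, dirty.height);
    drawFullScreenQuad();   // your existing textured quad
    glDisable(GL_SCISSOR_TEST);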
I want to make a 3D metal compass in iOS which will have a movable cover.
That is, when you touch it with three fingers and move them upward, the cover follows your fingers, and after a certain distance it opens. When you pull it down with three fingers again, it closes. I have attached a sketch of what I'm thinking.
Is this possible using Core Animation and CALayers, or would I have to use OpenGL ES?
First you should create a textured 3D model in an app like 3ds Max or Maya, then export it to a suitable format. The simplest is OBJ (there are lots of examples of how to load it). There are two options for the animation:
Do the animation manually by rotating the cover object. This is probably the easiest approach (see the sketch after this list).
Create the animation in your 3D editor and interpolate between frames. This gives a much more realistic result, but the OBJ format won't carry the animation; COLLADA will. To load it, I suggest the Assimp library.
And if you don't need any advanced interaction, another option is pseudo-3D: just pre-render all the compass animation frames and play them back as a 2D texture.
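For option 1, here is a rough fixed-function ES 1.1 sketch (coverAngle, the hinge coordinates, and drawCoverMesh are all placeholders of mine): the cover rotates about its hinge line, and a three-finger pan drives the angle.

    // Hedged sketch: open/close the cover by rotating it about its hinge.
    - (void)drawCover {
        glPushMatrix();
        glTranslatef(hingeX, hingeY, hingeZ);     // move the pivot to the hinge
        glRotatef(coverAngle, 1.0f, 0.0f, 0.0f);  // 0 = closed, ~120 = open
        glTranslatef(-hingeX, -hingeY, -hingeZ);
        [self drawCoverMesh];                     // render the loaded cover model
        glPopMatrix();
    }

    // Configure the recognizer with minimumNumberOfTouches = 3.
    - (void)handlePan:(UIPanGestureRecognizer *)pan {
        CGFloat dy = [pan translationInView:self.view].y;
        coverAngle = MAX(0.0f, MIN(120.0f, -dy * 0.5f));       // hypothetical mapping
        if (pan.state == UIGestureRecognizerStateEnded) {
            coverAngle = (coverAngle > 60.0f) ? 120.0f : 0.0f; // snap open/closed
        }
    }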
I'm struggling with conceptualizing animations with a CALayer as opposed to UIView's own animation methods. Throw "Core Animation" into this and, well, maybe someone can articulate these concepts from a high level so I can better visualize what's happening and why I'd want to migrate UIView animations (which I'm quite familiar with now) to CALayer animations on the iPhone. Every view in Cocoa-Touch automatically gets a layer. And, it seems, you can animate one and/or the other?!? Even mix them together?!? But why? Where's the line? What's the pro/con to each?
The Core Animation Programming Guide immediately jumps into layer and timing classes, and I think I need to take a step back and understand why these varied pieces exist and how they relate to each other.
Use views for control and layers for eye candy. Layers don't receive events so it's easier to use a view for those cases, but when you want to animate a sprite or backgrounds, etc., layers make sense. Events pass right through layers to the backing view so you can have a pretty visual representation without messing up your events. Try to overlay a view that you're just using for visual representation and you'll have to pass tap events through to the underlying view yourself.
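(If the overlay is purely decorative, the minimal fix is a single line; overlayView is a placeholder name:)

    // Sketch: a purely visual overlay should not swallow touches.
    overlayView.userInteractionEnabled = NO; // taps fall through to the views below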
A UIView is always rendered into a CALayer. When you use UIView methods to animate a view, you are effectively manipulating the underlying CALayer.
If you need to do simple things, use the UIView methods. For more complex situations, or if you want layers not associated with any view in particular, use CALayers.
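For example, here is the same fade written both ways (myView is a placeholder). The UIView form covers simple cases; the CABasicAnimation form exposes key paths and timing, and works on layers that have no view:

    // Simple: UIView block animation.
    [UIView animateWithDuration:0.3 animations:^{
        myView.alpha = 0.0;
    }];

    // Explicit: CABasicAnimation on the layer.
    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = @1.0;
    fade.toValue = @0.0;
    fade.duration = 0.3;
    [myView.layer addAnimation:fade forKey:@"fade"];
    myView.layer.opacity = 0.0; // keep the model value in sync so it doesn't snap back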
I've done a bunch of apps in the past year. Here's my rule of thumb:
Use UIView until it doesn't do what you want.
Then move to CoreAnimation. But before you get into it too much...
If you write more than a few animations, use Cocos2D.
UIView transforms are 2D only. Layer transforms, however, can be 3D, and you should use those if you want to do 3D work; UIView animation will pick up changes to either the UIView transform or the CALayer transform. So at a basic level, you can do a lot more manipulation when you work with the layer rather than the view.
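A concrete illustration (myView is a placeholder):

    // view.transform is a 2D CGAffineTransform; layer.transform is a full
    // 4x4 CATransform3D, so only the layer can rotate out of the screen plane.
    myView.transform = CGAffineTransformMakeRotation(M_PI_4);            // flat 2D spin
    myView.layer.transform = CATransform3DMakeRotation(M_PI_4, 0, 1, 0); // 3D flip around y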
I am not sure if I am misunderstanding Chris' response to "What's Cocos2D doing better? Don't you have other problems then, regarding the touch event handling and many other stuff that misses in openGL ES?"
It sounds like the answer suggests Cocos2D is not based on OpenGL ES, when in fact it is. While it is a great 2D game engine, it implements much of its rendering in OpenGL, and paired with a physics library it opens up a lot of very interesting possibilities for animation. And Chris is correct: it is a lot less coding indeed.