Core Graphics coordinate system - iOS

When overriding -drawRect: I've found that the coordinates there use (0,0) as the upper left.
But the Apple UIView Programming Guide says this:
Some iOS technologies define default coordinate systems whose origin point and orientation differ from those used by UIKit. For example, Core Graphics and OpenGL ES use a coordinate system whose origin lies in the lower-left corner of the view or window and whose y-axis points upward relative to the screen.
I'm confused; are they talking about something different than Quartz when they refer to Core Graphics here?

"Core Graphics" in this documentation means "Quartz", yes. It's just an oversimplification.
When you create a CGContext yourself, its coordinate system has the origin in the bottom-left. When UIKit creates the CGContext for drawing into a view, it helpfully flips the coordinate system before calling -drawRect:.
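For illustration, that flip is just a translate-and-scale of the CTM. A minimal sketch, assuming you have created a bitmap context yourself (context and height are assumed names for your context and its height in points) and want it to match UIKit's upper-left origin:

// Flip a self-created Core Graphics context so (0,0) is the upper left,
// matching UIKit. 'context' and 'height' are assumed variables.
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);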

Core Graphics and Quartz on iOS are, as far as coordinates go, the same thing. The iOS Technologies Guide says so:
Core Graphics (also known as Quartz)...
The Core Graphics framework (CoreGraphics.framework) contains the interfaces for the Quartz 2D drawing API. Quartz is the same advanced, vector-based drawing engine that is used in Mac OS X.
The distinction is that, technically, Quartz is the technology or the mechanism, and "Core Graphics" is the name of the framework. (On Mac OS, of course, there's actually a "Quartz" framework, which is just an umbrella.)

For the benefit of others finding this thread:
There is a full explanation of the coordinate systems here: Coordinate Systems in Cocoa
It is not exactly helpful that they differ. There are methods to convert between coordinate systems at the various levels of view in your app! For example, this finds the coordinates of the point that is at (20,20) on the visible screen in a zoomed image. The result is relative to the origin of the zoomed image, which may now be way off in space.
croppingFrame.origin = [self convertPoint:CGPointMake(20.0, 20.0) fromCoordinateSpace:(self.window.screen.fixedCoordinateSpace)];

Is this the iOS coordinate system?

Is this the official iOS coordinate system, or is that only the case when working with Core Graphics?
(Positive x is to the right and positive y is down)
This is the UIKit coordinate space. Core Graphics (also Core Text) puts the origin in the lower left by default. On iOS it is common for the coordinate space to be flipped for Core Graphics so that it matches UIKit.
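For example, when drawing with Core Text inside -drawRect: (where UIKit has already flipped the context for you), the usual trick is to flip it back before drawing glyphs. A minimal sketch, assuming it runs in a UIView's -drawRect::

CGContextRef ctx = UIGraphicsGetCurrentContext();
// Reset the text matrix and undo UIKit's flip so glyphs render upright:
CGContextSetTextMatrix(ctx, CGAffineTransformIdentity);
CGContextTranslateCTM(ctx, 0.0, self.bounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// Core Text calls such as CTFrameDraw() now draw in the
// lower-left-origin coordinate space they expect.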
Yes, it uses modified coordinates
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html#//apple_ref/doc/uid/TP30001066-CH202-TPXREF101

iPhone: Build custom monocolor image with clear area at fixed coords

So what I want to do is build an image the size of the device. The image should be all blue except for a circle at coords (x,y) with a radius of z that should be clear, where x, y, z are variables. I know I should use a CGContext, I just don't know how to get it done.
Any ideas?
You want to use Core Graphics for this. To be more precise, read up on CGContext and the functions used to manipulate them. There are plenty of tutorials for it out there, and Apple provides a lot of sample code as well.
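A minimal sketch of one way to do it (x, y, and z stand for the asker's variables): fill the whole context with blue, then punch out the circle with the clear blend mode.

// Build a device-sized blue image with a transparent circle at (x, y)
// of radius z. The context must be non-opaque (the NO argument) for the
// cleared area to stay transparent.
CGSize size = [UIScreen mainScreen].bounds.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));
// The clear blend mode erases to transparent instead of painting over:
CGContextSetBlendMode(ctx, kCGBlendModeClear);
CGContextFillEllipseInRect(ctx, CGRectMake(x - z, y - z, 2.0 * z, 2.0 * z));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();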

Upgrading from Core Graphics

I've written my first iOS app, Amaziograph, which uses Core Graphics.
My app is a drawing app that draws a lot of lines with CG (up to 30 lines one by one at different locations, plus some shadow to simulate brush blur, and they need to appear as if they were all drawn at the same time), which I find to be slow. In fact, when I switch to Retina and try drawing just a single line with my finger, I need to wait a second or so before it is drawn.
I realised that Core Graphics no longer meets my app's requirements, as I'd like to take advantage of the Retina display and add some Photoshop-style brushes.
My question is: is there a graphics library that is faster and more powerful than Core Graphics, but with a simple interface? All I need is drawing simple lines with size, opacity, softness, and possibly some more advanced brushes. I'm thinking of OpenGL after I saw Apple's GLPaint app, but it seems a bit complicated for me with all those framebuffers, contexts and so on. I am looking for something similar in approach to CG, so it wouldn't take much time to rewrite my code. Also, right now I'm doing all my drawing into UIImage views, so it would be nice to draw on top of UIImages directly.
Here is an extract of the code I'm using to draw right now:
//...Begin image context >> draw the previous image in >> set stroke style >>
CGContextBeginPath(currentContext);
CGContextMoveToPoint(currentContext, lastPoint.x, lastPoint.y - offset);
CGContextAddLineToPoint(currentContext, currentPoint.x, currentPoint.y - offset);
CGContextStrokePath(currentContext);
//...Send to a UIImage and end image context...
You are not going to find another graphics library with better performance than Core Graphics for the iOS platforms. Most likely your application can be significantly optimised; there are many tricks to use. You might be interested in WWDC 2012 video 506:
http://developer.apple.com/videos/wwdc/2012/

Optimizing 2D Graphics and Animation Performance
They demonstrate a paint application using Core Graphics that works at full frame rate.
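One of the central tricks from that session is to stop redrawing everything: invalidate only the small rectangle the new segment touches instead of rebuilding a full-size UIImage per stroke. A hedged sketch of the idea, reusing lastPoint/currentPoint from the question (lineWidth and shadowBlur are assumed variables):

// Compute the bounding box of the new segment, padded for the stroke
// width and shadow blur, and redraw only that region:
CGRect dirty = CGRectMake(MIN(lastPoint.x, currentPoint.x),
                          MIN(lastPoint.y, currentPoint.y),
                          fabs(currentPoint.x - lastPoint.x),
                          fabs(currentPoint.y - lastPoint.y));
dirty = CGRectInset(dirty, -(lineWidth + shadowBlur), -(lineWidth + shadowBlur));
[self setNeedsDisplayInRect:dirty];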

How can I implement the custom drawing search tool used in the Realtor iPad app?

The Realtor iPad app has done a very good job of implementing a custom drawing tool on top of MapKit that they use to query an area for homes. I am familiar with MapKit and its associated classes, but I am unaware of how I could do some custom drawing with my finger and have it translate to a geospatial query. How could I do it?
I'm not sure how far along you've made it with this, but your basic algorithm should look like this:
Draw a polygon on top of your map, then translate the coordinates of that polygon to map coordinates. In order to do that you would probably need to listen for gestures on a view other than the MKMapView instance. With my limited knowledge of MapKit's touch event handling, you might have to overlay a separate transparent view on the map when you want to draw, so touch events won't go through to the map view (if that makes any sense). You use your finger to pinch, zoom, and pan, and you won't want that functionality while you're trying to draw. In that view, you'll draw the shape tracing the user's finger, then translate the points drawn into map points.
The docs indicate that you can translate screen points to map points using the convertPoint:toCoordinateFromView: method on MKMapView.
Check this link for information on that: Trouble converting MapKit user coordinates to screen coordinates
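Putting those pieces together, a rough sketch (DrawOverlayView, its points array, and the overlay/mapView variables are hypothetical names, not MapKit API):

// In a hypothetical transparent DrawOverlayView sitting above the map;
// it records the finger's path, and touches never reach the map below.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    [self.points addObject:[NSValue valueWithCGPoint:p]];
    [self setNeedsDisplay]; // redraw the in-progress polygon
}

// Later, translate the recorded screen points into map coordinates:
NSMutableArray *coords = [NSMutableArray array];
for (NSValue *v in overlay.points) {
    CLLocationCoordinate2D c = [mapView convertPoint:[v CGPointValue]
                                toCoordinateFromView:overlay];
    [coords addObject:[NSValue valueWithMKCoordinate:c]];
}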
This post provides a link that might help you with drawing the polygon:
To draw polygon on google map with MapKit framework
After you've drawn your polygon you'll want to "spatially" query your data. You could do that in several ways: locally on the device, or through a web service. If your data is local to the device you'll have to do the cartographic math on the device. You'll also need to ensure that your point data (the x,y pairs) is in the same projection and coordinate space as your polygon's information. Polygon intersection math is relatively straightforward to do when your projections and coordinate systems line up.
Here's a link that can help you with the math.
https://math.stackexchange.com/questions/237/how-do-you-determine-if-a-point-sits-inside-a-polygon
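For reference, the standard ray-casting test from that answer fits in a few lines of C. A sketch, assuming the polygon's vertices and the test point are already in the same coordinate space:

// Ray casting: count how many polygon edges a horizontal ray from the
// test point crosses; an odd count means the point is inside.
static BOOL PointInPolygon(CGPoint p, const CGPoint *verts, int count) {
    BOOL inside = NO;
    for (int i = 0, j = count - 1; i < count; j = i++) {
        if (((verts[i].y > p.y) != (verts[j].y > p.y)) &&
            (p.x < (verts[j].x - verts[i].x) * (p.y - verts[i].y) /
                   (verts[j].y - verts[i].y) + verts[i].x)) {
            inside = !inside;
        }
    }
    return inside;
}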
Alternatively, you could set up a web service that takes your polygon data, performs the same cartographic math on a server, and returns the results to the device. Either way, the same math needs to be performed: you take that polygon data and determine which records in your data intersect with it.
This is pretty high-level, I know, but it should be all you need to do.
Another consideration is whether your data is spatially enabled, with SpatiaLite compiled for SQLite on your device or SQL Server Spatial on your server. Then you should be able to query the data using that polygon directly. You would have to format the query properly, though.
Lastly, I would encourage you to look into the ESRI SDK for iOS. ESRI provides drawing and sketching tools out of the box. It's not too difficult to use, but one downside is that you would have to learn a new API:
http://resources.arcgis.com/en/communities/runtime-ios-sdk/

Using OpenGL on top of an MKMapView?

I've got some data that I would like to render on top of an MKMapView using OpenGL. Currently I can sort of achieve this by placing a transparent OpenGL layer on top of the MKMapView and drawing to it using OpenGL commands.
However, the problem becomes synchronizing the drawing of the OpenGL layer with the drawing that MKMapView does. I can kind of get around this by drawing on touch events; this works well until you "flick" the map, which causes a continuous series of draws for the animation that I don't detect.
Another idea was to use an MKOverlayView and hope that OpenGL drawing could be done with it, but I'm not sure exactly how to approach that.
I would recommend evaluating BA3's "Altus" mapping engine. It is built entirely in OpenGL, so in the worst case you may simply render to the same context. However, it would probably be better if you could take advantage of their support for geo-located raster, vector, and marker elements.
Full disclosure: I'm friends with the authors, but have no financial interest.
