Is this the official iOS coordinate system, or is it only the case when working with Core Graphics?
(X positive is on the right and Y positive is down)
This is the UIKit coordinate space. Core Graphics (also Core Text) puts the origin in the lower left by default. On iOS it is common for the coordinate space to be flipped for Core Graphics so that it matches UIKit.
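To see the difference concretely, here is a rough Swift sketch (not from the original question): the same fill at (0, 0) lands at the top-left in a UIKit-provided context, but at the bottom-left in a bitmap context created directly with Core Graphics.

import UIKit

// UIKit-provided context (UIGraphicsImageRenderer, or the context handed to
// drawRect:): the origin is at the top-left and y grows downward.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
let uikitImage = renderer.image { ctx in
    UIColor.black.setFill()
    ctx.fill(CGRect(x: 0, y: 0, width: 20, height: 20)) // top-left of the image
}

// Raw Core Graphics bitmap context: the origin is at the bottom-left and
// y grows upward, so the same rect ends up at the bottom-left of the image.
if let cgContext = CGContext(data: nil, width: 100, height: 100,
                             bitsPerComponent: 8, bytesPerRow: 0,
                             space: CGColorSpaceCreateDeviceRGB(),
                             bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) {
    cgContext.setFillColor(red: 0, green: 0, blue: 0, alpha: 1)
    cgContext.fill(CGRect(x: 0, y: 0, width: 20, height: 20)) // bottom-left of the image
}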
Yes, UIKit uses a modified (flipped) coordinate system; see the Quartz 2D Programming Guide:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html#//apple_ref/doc/uid/TP30001066-CH202-TPXREF101
I am getting images just like the one in this question:
Confused about thread_position_in_grid
The dark black color is in the lower-left corner, which means the gid.x and gid.y are both 0 in that part of the image. Yet the Metal docs (as well as answers like this: What is the coordinate system used in metal?) state that the origin is in the upper left corner, which means that should be where you see the dark black corner.
Why does this seem to not be the case?
I view the texture using CoreImage. I make a CIImage like so:
CIImage(mtlTexture: kernelOutputTexture,
        options: [CIImageOption.colorSpace: colorSpace])!
And then I view it in NSImageView like so:
let rep = NSCIImageRep(ciImage: image)
let nsImage = NSImage(size: rep.size)
nsImage.addRepresentation(rep)
Core Image uses a coordinate system whose origin is in the bottom-left of the image. Although you might suspect that it would account for Metal's top-left origin when wrapping a MTLTexture, it appears this is not the case.
To create a CIImage whose orientation matches that of the original texture, you can request a new image that has been vertically flipped:
image = image.transformed(by: image.orientationTransform(for: .downMirrored))
The orientationTransform(for:) method was introduced in iOS 11.0
and macOS 10.13. Using it is likely more robust than composing your own affine transform, and is also quite efficient, since Core Image uses lazy evaluation rather than physically moving or copying the underlying data.
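Putting that together with the code from the question, a minimal sketch might look like this (kernelOutputTexture and colorSpace are assumed to exist in the asker's setup):

import AppKit
import CoreImage
import ImageIO
import Metal

// Wrap the Metal texture, flip it vertically so its top-left origin is
// preserved, and hand it to an NSImageView-friendly NSImage.
func makeImage(from kernelOutputTexture: MTLTexture,
               colorSpace: CGColorSpace) -> NSImage? {
    guard var image = CIImage(mtlTexture: kernelOutputTexture,
                              options: [CIImageOption.colorSpace: colorSpace]) else {
        return nil
    }
    image = image.transformed(by: image.orientationTransform(for: .downMirrored))

    let rep = NSCIImageRep(ciImage: image)
    let nsImage = NSImage(size: rep.size)
    nsImage.addRepresentation(rep)
    return nsImage
}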
I'm trying to use Kudan AR in a project, and I have a couple questions:
1) The marker size's relation to the scene seems pretty weird to me. For example, I'm using a 150x150 px image as a marker, and when I use it in the scene it occupies 150 units! It requires all my objects to be extremely huge, sometimes even extending further than the camera's far plane, which breaks the augmentation. Is this correct, or am I missing something?
2) I'm trying to use a marker to define the starting position of the augmentation, and then switch to markerless tracking to have a broader experience. They have sample code using the native iOS lib (https://wiki.kudan.eu/Marker_to_Markerless), but no reference on how to do it in Unity. That's what I'm trying:
markerlessDriver.localScale = new Vector3(markerDriver.localScale.x, markerDriver.localScale.x, markerDriver.localScale.z);
markerlessDriver.localPosition = markerDriver.localPosition;
markerlessDriver.localRotation = markerDriver.localRotation;
target.SetParent(markerlessDriver);
tracker.ChangeTrackingMethod(markerlessTracking);
// from the floor placer.
Vector3 floorPosition; // The current position in 3D space of the floor
Quaternion floorOrientation; // The current orientation of the floor in 3D space, relative to the device
tracker.FloorPlaceGetPose(out floorPosition, out floorOrientation);
tracker.ArbiTrackStart(floorPosition, floorOrientation);
It switches, but the position/rotation of the model goes off. Any idea on how that can be done?
Thanks in advance!
I use Core Image on iOS for face detection. I already did this using this helpful tutorial. My problem is that I added it in a view controller; I'm able to rotate the image to match the circles drawn over the eyes and mouth, but I can't rotate the whole view controller. Is there a better approach?
My image looks like this.
I want to rotate it upside down.
I'm using storyboards and iOS 7.
Actually, the coordinate systems of Core Image and iOS (UIKit) are different. Core Image uses the exact opposite coordinate system from UIKit (its origin is at the bottom-left), so you must convert the mouth and eye points to the iOS coordinate system.
Refer to the 5th point here:
5) Adjust For The Coordinate System
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5
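For reference, a minimal Swift sketch of that step (standard Core Image face detection; the image is assumed to come from your own code):

import CoreImage
import UIKit

// Detect faces and convert their bounds from Core Image coordinates
// (origin at the bottom-left) to UIKit coordinates (origin at the top-left).
func faceRectsInUIKitCoordinates(for image: CIImage) -> [CGRect] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: image) ?? []

    // Flip the y-axis: mirror vertically, then shift by the image height.
    var transform = CGAffineTransform(scaleX: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -image.extent.height)

    return features
        .compactMap { $0 as? CIFaceFeature }
        .map { $0.bounds.applying(transform) }
}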
When overriding drawRect: I've found that the coordinates there use (0,0) as the upper left.
But the Apple UIView Programming Guide says this:
Some iOS technologies define default coordinate systems whose origin point and orientation differ from those used by UIKit. For example, Core Graphics and OpenGL ES use a coordinate system whose origin lies in the lower-left corner of the view or window and whose y-axis points upward relative to the screen.
I'm confused; are they talking about something different than Quartz when they refer to Core Graphics here?
"Core Graphics" in this documentation means "Quartz", yes. It's just an oversimplification.
When you create a CGContext yourself, its coordinate system has the origin in the bottom-left. When UIKit creates the CGContext for drawing into a view, it helpfully flips the coordinate system before calling -drawRect:.
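If you do need UIKit-style coordinates in a context you created yourself, the usual fix is to flip it before drawing. A minimal sketch, assuming the context and its height come from wherever you built the bitmap context:

import CoreGraphics

// Flip a bottom-left-origin CGContext so that (0, 0) is the top-left and
// y increases downward, matching UIKit.
func flipToUIKitCoordinates(_ context: CGContext, height: CGFloat) {
    context.translateBy(x: 0, y: height)  // move the origin to the top edge
    context.scaleBy(x: 1, y: -1)          // mirror the y-axis
}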
Core Graphics and Quartz on iOS are, as far as coordinates go, the same thing. The iOS Technologies Guide says so:
Core Graphics (also known as Quartz)...
The Core Graphics framework (CoreGraphics.framework) contains the interfaces for the Quartz 2D drawing API. Quartz is the same advanced, vector-based drawing engine that is used in Mac OS X.
The distinction is that, technically, Quartz is the technology or the mechanism, and "Core Graphics" is the name of the framework. (On Mac OS, of course, there's actually a "Quartz" framework, which is just an umbrella.)
For the benefit of others finding this thread:
There is a full explanation of the coordinate systems here: Coordinate Systems in Cocoa
It is not exactly helpful that they differ. There are methods to convert between coordinate systems at the various levels of view in your app. For example, this finds the coordinates of the point that is at (20,20) in the visible screen on a zoomed image. The result is relative to the origin of the zoomed image, which may now be way off in space.
croppingFrame.origin = [self convertPoint:CGPointMake(20.0, 20.0) fromCoordinateSpace:(self.window.screen.fixedCoordinateSpace)];
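In Swift the equivalent conversion looks roughly like this (croppingFrame and the surrounding view are assumed from the original code):

// Convert the point (20, 20) from the screen's fixed coordinate space
// into this view's coordinate space (called from within a UIView subclass).
croppingFrame.origin = convert(CGPoint(x: 20.0, y: 20.0),
                               from: window!.screen.fixedCoordinateSpace)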
Greetings,
I'm working on an application inspired by the "ZoomingPDFViewer" example that comes with the iOS SDK. At some point I found the following bit of code:
// to handle the interaction between CATiledLayer and high resolution
// screens, we need to manually set the tiling view's
// contentScaleFactor to 1.0. (If we omitted this, it would be 2.0
// on high resolution screens, which would cause the CATiledLayer
// to ask us for tiles of the wrong scales.)
pageContentView.contentScaleFactor = 1.0;
I tried to learn more about contentScaleFactor and what it does. After reading everything in Apple's documentation that mentions it, I searched Google and never found a definitive answer to what it actually does.
Here are a few things I'm curious about:
It seems that contentScaleFactor has some kind of effect on the graphics context when a UIView's/CALayer's contents are being drawn. This seems to be relevant to high resolution displays (like the Retina Display). What kind of effect does contentScaleFactor really have and on what?
When using a UIScrollView and setting it up to zoom, say, my contentView, all subviews of contentView are scaled too. How does this work? Which properties does UIScrollView modify so that even video players become blurry and scale up?
TL;DR: How does UIScrollView's zooming feature work "under the hood"? I want to understand how it works so I can write proper code.
Any hints and explanations are highly appreciated! :)
Coordinates are expressed in points, not pixels. contentScaleFactor defines the relation between points and pixels: if it is 1, points and pixels are the same; if it is 2 (as on Retina displays), every point corresponds to two pixels in each direction (four pixels in total).
In normal drawing, working with points means that you don't have to worry about resolution: on an iPhone 3GS (scale factor 1) and an iPhone 4 (scale factor 2 and 2x resolution), you can use the same coordinates and drawing code. However, if you are drawing an image (directly, as a texture, ...) and just using normal coordinates (points), you can't trust that the pixel-to-point mapping is 1 to 1. If you do, every pixel of the image will correspond to 1 point, which is 4 pixels when the scale factor is 2 (2 in the x direction, 2 in y), so images can become a bit blurred.
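As a rough illustration of that point/pixel relationship (the sizes here are arbitrary):

import UIKit

// A 100 x 100 point view on a 2x (Retina) screen is backed by a
// 200 x 200 pixel store.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
view.contentScaleFactor = UIScreen.main.scale  // e.g. 2.0 on Retina

let pixelSize = CGSize(width: view.bounds.width * view.contentScaleFactor,
                       height: view.bounds.height * view.contentScaleFactor)
// pixelSize is (200, 200) when the scale factor is 2.0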
Working with CATiledLayer, you can get some unexpected results with a scale factor of 2. I guess that having the UIView's contentScaleFactor == 2 and the layer's contentsScale == 2 confuses the system and sometimes multiplies the scale. Maybe something similar happens with UIScrollView.
Hope this clarifies it a bit
Apple has a section about this on its "Supporting High-Resolution Screens" page in the iOS developer documentation.
The page says:
Updating Your Custom Drawing Code
When you do any custom drawing in your application, most of the time
you should not need to care about the resolution of the underlying
screen. The native drawing technologies automatically ensure that the
coordinates you specify in the logical coordinate space map correctly
to pixels on the underlying screen. Sometimes, however, you might need
to know what the current scale factor is in order to render your
content correctly. For those situations, UIKit, Core Animation, and
other system frameworks provide the help you need to do your drawing
correctly.
Creating High-Resolution Bitmap Images Programmatically
If you
currently use the UIGraphicsBeginImageContext function to create
bitmaps, you may want to adjust your code to take scale factors into
account. The UIGraphicsBeginImageContext function always creates
images with a scale factor of 1.0. If the underlying device has a
high-resolution screen, an image created with this function might not
appear as smooth when rendered. To create an image with a scale factor
other than 1.0, use the UIGraphicsBeginImageContextWithOptions
instead. The process for using this function is the same as for the
UIGraphicsBeginImageContext function:
Call UIGraphicsBeginImageContextWithOptions to create a bitmap
context (with the appropriate scale factor) and push it on the
graphics stack.
Use UIKit or Core Graphics routines to draw the content of the
image.
Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap’s
contents.
Call UIGraphicsEndImageContext to pop the context from the stack.
For example, the following code snippet
creates a bitmap that is 200 x 200 pixels. (The number of pixels is
determined by multiplying the size of the image by the scale
factor.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0,100.0), NO, 2.0);
See it here: Supporting High-Resolution Screens
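Putting the four quoted steps together, a minimal Swift sketch might look like this (the actual drawing is just a placeholder):

import UIKit

// 1. Create a bitmap context with an explicit scale factor and push it.
//    With scale 2.0, this 100 x 100 point context is 200 x 200 pixels;
//    passing 0.0 would use the device's own scale instead.
UIGraphicsBeginImageContextWithOptions(CGSize(width: 100.0, height: 100.0), false, 2.0)

// 2. Draw the content using UIKit or Core Graphics routines.
UIColor.red.setFill()
UIRectFill(CGRect(x: 0, y: 0, width: 50, height: 50))

// 3. Get the bitmap's contents as a UIImage.
let image = UIGraphicsGetImageFromCurrentImageContext()

// 4. Pop the context from the stack.
UIGraphicsEndImageContext()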