How do we do rectilinear image conversion with Swift and iOS 11+?

How do we use the function Apple provides (below) to perform rectilinear conversion?
Apple provides a reference implementation in 'AVCameraCalibrationData.h' showing how to correct images for lens distortion, i.e. going from an image taken with a wide-angle or telephoto lens to the rectilinear 'real world' image. A pictorial representation is here:
To create a rectilinear image, we begin with an empty destination buffer and iterate through it row by row, calling the sample implementation below for each point in the output image, passing the lensDistortionLookupTable to find the corresponding point in the distorted image, and writing that pixel to the output buffer.
func lensDistortionPoint(for point: CGPoint, lookupTable: Data, distortionOpticalCenter opticalCenter: CGPoint, imageSize: CGSize) -> CGPoint {
    // The lookup table holds the relative radial magnification for n linearly spaced radii.
    // The first position corresponds to radius = 0.
    // The last position corresponds to the largest radius found in the image.

    // Determine the maximum radius.
    let delta_ocx_max = Float(max(opticalCenter.x, imageSize.width - opticalCenter.x))
    let delta_ocy_max = Float(max(opticalCenter.y, imageSize.height - opticalCenter.y))
    let r_max = sqrt(delta_ocx_max * delta_ocx_max + delta_ocy_max * delta_ocy_max)

    // Determine the vector from the optical center to the given point.
    let v_point_x = Float(point.x - opticalCenter.x)
    let v_point_y = Float(point.y - opticalCenter.y)

    // Determine the radius of the given point.
    let r_point = sqrt(v_point_x * v_point_x + v_point_y * v_point_y)

    // Look up the relative radial magnification to apply in the provided lookup table.
    let magnification: Float = lookupTable.withUnsafeBytes { (lookupTableValues: UnsafePointer<Float>) in
        let lookupTableCount = lookupTable.count / MemoryLayout<Float>.size
        if r_point < r_max {
            // Linear interpolation between the two nearest table entries.
            let val = r_point * Float(lookupTableCount - 1) / r_max
            let idx = Int(val)
            let frac = val - Float(idx)
            let mag_1 = lookupTableValues[idx]
            let mag_2 = lookupTableValues[idx + 1]
            return (1.0 - frac) * mag_1 + frac * mag_2
        } else {
            return lookupTableValues[lookupTableCount - 1]
        }
    }

    // Apply radial magnification.
    let new_v_point_x = v_point_x + magnification * v_point_x
    let new_v_point_y = v_point_y + magnification * v_point_y

    // Construct output.
    return CGPoint(x: opticalCenter.x + CGFloat(new_v_point_x), y: opticalCenter.y + CGFloat(new_v_point_y))
}
Additionally, Apple states that the "point", "opticalCenter", and "imageSize" parameters must be in the same coordinate system.
With that in mind, what values do we pass for opticalCenter and imageSize, and why? What exactly is the "radial magnification" step doing?

The opticalCenter parameter is actually named distortionOpticalCenter, so you can pass the lensDistortionCenter from AVCameraCalibrationData.
imageSize is the width and height of the image you want to rectify.
"Applying radial magnification": it moves the given point to where it would be with an ideal lens, free of distortion.
"How do we use the function...": create an empty buffer with the same size as the distorted image. For each pixel of the empty buffer, apply the lensDistortionPoint(for:...) function, and copy the pixel at the corrected coordinates from the distorted image into the empty buffer. After the whole buffer is filled, you should get an undistorted image.

Related

Polar coordinate point generation function upper bound is not 2Pi for theta?

So I wrote the following function to take a frame and a polar coordinate function, and to graph it out by generating the cartesian coordinates within that frame. Here's the code.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient: Double, cosScalar: Double, iPrecision: Double, largestScalar: Double) -> Array<CGPoint> {
    // Frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by in the cos.
    // cosScalar: The number to multiply the cos by.
    // largestScalar: Largest cosScalar used in this frame so that scaling is relative.
    // iPrecision: The step for continuity. 0 < iPrecision <= 2.pi. Defaults to 0.1

    // Clean inputs
    var precision: Double = 0.1 // Default precision
    if iPrecision != 0 { // Can't be 0.
        precision = iPrecision
    }

    // This is the polar function
    // var theta: Double = 0 // 0 <= theta <= 2pi
    // let r = cosScalar * cos(thetaCoefficient * theta)
    var points: Array<CGPoint> = [] // We store the points here
    for theta in stride(from: 0, to: Double.pi * 2, by: precision) { // TODO: Try to recreate continuity. WHY IS IT NOT 2PI
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian
        // newvalue = (max'-min')/(max-min)*(value-max)+max'
        let scaled_x = (Double(frame.width) - 0)/(largestScalar*2)*(x-largestScalar)+Double(frame.width) // Scale to the frame
        let scaled_y = (Double(frame.height) - 0)/(largestScalar*2)*(y-largestScalar)+Double(frame.height) // Scale to the frame
        points.append(CGPoint(x: scaled_x, y: scaled_y)) // Add the result
    }
    print("Done points")
    return points
}
The polar function I'm passing is r = 100*cos(9/4*theta), which looks like this.
I'm wondering why my function returns the following when theta goes from 0 to 2π. (Please note that in this image I'm drawing the flower at different sizes, hence the repetition of the pattern.)
As you can see, it's wrong. The weird thing is that when theta goes from 0 to 2Pi*100 (it also works for other values such as 2Pi*4 and 2Pi*20, but not 2Pi*2 or 2Pi*10) it works and I get this.
Why is this? Is the domain not 0 to 2Pi? I noticed that when going to 2Pi*100 it redraws some petals so there is a limit, but what is it?
PS: Precision here is 0.01 (enough to act like it's continuous). In my images I'm drawing the function in different sizes and overlapping (last image has 2 inner flowers).
No, the domain is not going to be 2π. Set up your code to draw slowly, taking 2 seconds for each 2π, and watch. It makes a whole series of full circles, and each time the local maxima and minima land at different points. That's what your petals are. It looks like your formula repeats after 8π.
It looks like the period is the denominator of the theta coefficient times 2π. Your theta coefficient is 9/4, the denominator is 4, so the period is 4*2π, or 8π.
(That is based on playing in Wolfram Alpha and observing the results. I may be wrong.)
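For what it's worth, that matches the parity rule commonly stated for rose curves r = cos((p/q)·θ) with p/q in lowest terms; here is a small sketch of it (treat the rule itself as an assumption worth double-checking against a plotter):

// Commonly stated closure rule for r = cos((p/q)·θ), p/q in lowest terms:
// the curve closes at θ = π·q when p·q is odd, and at θ = 2π·q when p·q is even.
func closingTheta(p: Int, q: Int) -> Double {
    return (p * q) % 2 == 1 ? Double.pi * Double(q) : 2 * Double.pi * Double(q)
}

closingTheta(p: 9, q: 4) // 8π, matching the observed repeat for r = 100*cos(9/4*theta)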

Get 3D model's height and its transformed screen coordinates

I'm rendering a Collada (*.dae) file with ARKit. As an overlay of my ARSCNView I'm adding an SKScene that simply shows a message bubble (without text yet).
Currently, I know how to modify the position of the bubble so that it looks like it's always at the feet of my 3D model. I'm doing it like this:
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    if let overlay = sceneView.overlaySKScene as? BubbleMessageScene {
        guard let borisNode = sceneView.scene.rootNode.childNode(withName: "boris", recursively: true) else { return }
        let boxWorldCoordinates = sceneView.scene.rootNode.convertPosition(borisNode.position, from: sceneView.scene.rootNode.parent)
        let screenCoordinates = self.sceneView.projectPoint(boxWorldCoordinates)
        let boxY = overlay.size.height - CGFloat(screenCoordinates.y)
        overlay.bubbleNode?.position.x = CGFloat(screenCoordinates.x) - (overlay.bubbleNode?.size.width)!/2
        overlay.bubbleNode?.position.y = boxY
    }
}
However, my bubble is always at the feet of the 3D model, because I can only get the SCNNode position of my model, where it is anchored. I would like it to be at the head of my model.
Is there a way I can get the height of my 3D model, and then its transformed screen coordinates, so no matter where I am with my phone it looks like the bubble message is always next to the head?
Each SCNNode has a boundingBox property which is the:
The minimum and maximum corner points of the object’s bounding box.
So what this means is that:
Scene Kit defines a bounding box in the local coordinate space using two points identifying its corners, which implicitly determine six axis-aligned planes marking its limits. For example, if a geometry’s bounding box has the minimum corner {-1, 0, 2} and the maximum corner {3, 4, 5}, all points in the geometry’s vertex data have an x-coordinate value between -1.0 and 3.0, inclusive.
If you look in SceneKit Editor you will also be able to see the size of your model in meters (I am saying this simply as a point you can refer to in order to check the calculations):
In my example I am using a Pokemon model with the size above.
I scaled the model (which you likely did as well), e.g.:
pokemonModel.scale = SCNVector3(0.01, 0.01, 0.01)
So in order to get the boundingBox of the SCNNode we can do this:
/// Prints The Original Width & Height Of An SCNNode
///
/// - Parameter node: SCNNode
func getSizeOfModel(_ node: SCNNode) {
    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x

    print("""
    UnScaled Height = \(unScaledHeight)
    UnScaled Width = \(unScaledWidth)
    """)
}
Calling it like so:
getSizeOfModel(pokemonModel)
Now, of course, since our SCNNode has been scaled, this doesn't help much, so we need to take the scale into account by rewriting the function:
/// Returns The Original & Scaled With & Height On An SCNNode
///
/// - Parameters:
/// - node: SCNode
/// - scalar: Float
func getOriginalAndScaledSizeOfNode(_ node: SCNNode, scalar: Float){
//1. Get The Size Of The Node Without Scale
let (minVec, maxVec) = node.boundingBox
let unScaledHeight = maxVec.y - minVec.y
let unScaledWidth = maxVec.x - minVec.x
print("""
UnScaled Height = \(unScaledHeight)
UnScaled Width = \(unScaledWidth)
""")
//2. Get The Size Of The Node With Scale
let max = node.boundingBox.max
let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)
let min = node.boundingBox.min
let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)
let heightOfNodeScaled = maxScale.y - minScale.y
let widthOfNodeScaled = maxScale.x - minScale.x
print("""
Scaled Height = \(heightOfNodeScaled)
Scaled Width = \(widthOfNodeScaled)
""")
}
Which would be called like so:
getOriginalAndScaledSizeOfNode(pokemonModel, scalar: 0.01)
Having done this you say you want to position a 'bubble' above your model, which could then be done like so:
func getSizeOfNodeAndPositionBubble(_ node: SCNNode, scalar: Float) {
    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x

    print("""
    UnScaled Height = \(unScaledHeight)
    UnScaled Width = \(unScaledWidth)
    """)

    //2. Get The Size Of The Node With Scale
    let max = node.boundingBox.max
    let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)
    let min = node.boundingBox.min
    let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)
    let heightOfNodeScaled = maxScale.y - minScale.y
    let widthOfNodeScaled = maxScale.x - minScale.x

    print("""
    Scaled Height = \(heightOfNodeScaled)
    Scaled Width = \(widthOfNodeScaled)
    """)

    //3. Create A Bubble
    let pointNodeHolder = SCNNode()
    let pointGeometry = SCNSphere(radius: 0.04)
    pointGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    pointNodeHolder.geometry = pointGeometry

    //4. Place The Bubble At The Model's X & Z Position, At The Model's Origin + Its Height
    pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled, node.position.z)
    self.augmentedRealityView.scene.rootNode.addChildNode(pointNodeHolder)
}
This yields the following result (which I also tested on a few other unfortunate Pokemon):
You will probably want to add a bit of 'padding' to the calculation as well, so that the node sits a bit higher than the top of the model, e.g.:
pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled + 0.1, node.position.z)
I am not great at Maths, and this uses an SCNNode for the bubble rather than an SKScene, but hopefully it will point you in the right direction...
You can get borisNode.boundingBox: (float3, float3) to calculate the size of the node; it gives you a tuple of two points, and you can then calculate the height by subtracting the y of one point from the y of the other. Finally, move your overlay's Y position by the number you get.
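Putting that comment together with the bounding-box approach above, here is a sketch of the questioner's delegate method that keeps the SKScene overlay. The names sceneView, BubbleMessageScene, and "boris" come from the question; the rest is illustrative, not a drop-in implementation.

func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let overlay = sceneView.overlaySKScene as? BubbleMessageScene,
          let borisNode = sceneView.scene.rootNode.childNode(withName: "boris", recursively: true),
          let bubbleNode = overlay.bubbleNode else { return }

    // Height of the model in local space, converted to world units via the node's scale.
    let (minVec, maxVec) = borisNode.boundingBox
    let scaledHeight = (maxVec.y - minVec.y) * borisNode.scale.y

    // Project the top of the model's head instead of its feet.
    var headPosition = borisNode.worldPosition
    headPosition.y += scaledHeight
    let screenCoordinates = sceneView.projectPoint(headPosition)

    bubbleNode.position.x = CGFloat(screenCoordinates.x) - bubbleNode.size.width / 2
    bubbleNode.position.y = overlay.size.height - CGFloat(screenCoordinates.y)
}

This assumes, as in the question, that the node's origin sits at the model's feet.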

How to set dots using UIImage or UIButton

Here I have some JSON data, like bone = 24. Based on this JSON value, I have to change some small dots into big dots, as shown in the image below.
As in the image above (please consider image 1), it has 10 big dots. Based on that bone.label value, I have to change some big dots to small dots.
My doubt is:
How do I set the dot images (using UIImage or UIButton)? My idea is to set up 10 UIImages and then, with if-statement conditions, change big dots to small dots. But if I do it this way, I am not able to set constraints. Is there any other way to place the dots on my circle?
Do it in code, not in Interface Builder, and without constraints. I doubt that circular arrangements like this work well with Auto Layout constraints.
If you position the dot images by their center instead of their frame, their location also won't change with their size.
Use the same size for all of your UIImageViews and just change the image to a small or large dot. Set the content mode of the UIImageViews to center, so that the images won't be scaled to fit the size of the UIImageView.
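As a quick sketch of that idea (the asset names and the isBig flag are placeholders, not from the question):

let dotView = UIImageView(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
dotView.contentMode = .center // never scale the image to fit the view
dotView.image = UIImage(named: isBig ? "bigDot" : "smallDot")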
UPDATE
You could solve the positioning of the dots by drawing them in a UIView instead of using constraints. Here's a function that returns an array with CGPoints evenly distributed around a circle, which you could use when drawing the dots:
func generatePoints(totalPoints: Int, center: CGPoint, radius: Double) -> [CGPoint] {
    let arc: Double = 360                 // full circle, in degrees
    let startAngle: Double = 180          // start at the bottom of the circle
    let mpi: Double = Double.pi / 180     // degrees-to-radians factor
    var startRadians: Double = startAngle * mpi
    let incrementAngle: Double = arc / Double(totalPoints)
    let incrementRadians = incrementAngle * mpi
    var points = [CGPoint]()
    for _ in 0..<totalPoints {
        let xp = CGFloat(ceil(Double(center.x) + sin(startRadians) * radius))
        let yp = CGFloat(ceil(Double(center.y) + cos(startRadians) * radius))
        points.append(CGPoint(x: xp, y: yp))
        startRadians -= incrementRadians
    }
    return points
}
Objective-C version (somewhat ugly, because it's from an ooooooold project of mine):
+ (NSArray*)generatePoints:(NSInteger)totalPoints center:(CGPoint)center radius:(NSInteger)radius {
float circleradius = (float)radius;
const float arc = 360.f;
float startAngle = 180.f;
float mpi = M_PI/180.f;
float startRadians = startAngle * mpi;
float incrementAngle = arc/(float)totalPoints;
float incrementRadians = incrementAngle * mpi;
NSMutableArray *pts = [[NSMutableArray alloc] initWithCapacity:totalPoints];
while (totalPoints--) {
float xp = ceilf(center.x + sinf(startRadians) * circleradius);
float yp = ceilf(center.y + cosf(startRadians) * circleradius);
[pts addObject:[NSValue valueWithCGPoint:CGPointMake(xp, yp)]];
startRadians -= incrementRadians;
}
return pts;
}
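Tying the two parts together, a hedged usage sketch: distribute ten fixed-size dot views around a circle inside some container view and pick each dot's image from your JSON value. containerView, boneThreshold, and the asset names are placeholders.

let center = CGPoint(x: containerView.bounds.midX, y: containerView.bounds.midY)
let points = generatePoints(totalPoints: 10, center: center, radius: 100)

for (index, point) in points.enumerated() {
    let dotView = UIImageView(image: UIImage(named: index < boneThreshold ? "bigDot" : "smallDot"))
    dotView.contentMode = .center
    dotView.frame.size = CGSize(width: 30, height: 30)
    dotView.center = point // position by center, so a size change doesn't move the dot
    containerView.addSubview(dotView)
}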

Getting Pixel value in the image

I am calculating the RGB values of pixels in my captured photo. I have this code:
func getPixelColorAtLocation(context: CGContext, point: CGPoint) -> Color {
    self.context = createARGBBitmapContext(imgView.image!)
    let data = CGBitmapContextGetData(context)
    let dataType = UnsafePointer<UInt8>(data)
    let offset = 4 * ((Int(imageHeight) * Int(point.x)) + Int(point.y))
    var color = Color()
    color.blue = dataType[offset]
    color.green = dataType[offset + 1]
    color.red = dataType[offset + 2]
    color.alpha = dataType[offset + 3]
    color.point.x = point.x
    color.point.y = point.y
    return color
}
But I am not sure what this line in the code means:
let offset = 4 * ((Int(imageHeight) * Int(point.x)) + Int(point.y))
Any help??
Thanks in advance
An image is a set of pixels laid out in memory as one long array. In order to get the pixel at the point (x, y), you need to calculate its offset into that array.
If you use dataType[0], there is no offset, because it points exactly where the pointer starts. If you used dataType[10], it would mean taking the 10th element from the beginning.
Because this is an RGBA colour model, each pixel occupies 4 bytes, so the pixel index must be multiplied by 4. The pixel index itself combines the x offset (simply x) and the y offset (the width of the image multiplied by y, which skips all the rows above):
pixelIndex = x + width * y
offset = 4 * pixelIndex
// offset, offset + 1, offset + 2, offset + 3 <- the four channel values you need
(Note that this is the standard row-major formula; the line in your code multiplies point.x by the image height instead, which only matches this convention if x and y are swapped.)
It becomes clear if you imagine a two-dimensional array implemented as a one-dimensional array. I hope that helps.
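To make that concrete, here is a minimal sketch of reading one pixel with that offset. It assumes 4 bytes per pixel and no row padding; with padded rows you would use the context's bytesPerRow in place of 4 * width.

func pixelBytes(atX x: Int, y: Int, width: Int, data: UnsafePointer<UInt8>) -> (UInt8, UInt8, UInt8, UInt8) {
    // Full rows of `width` pixels precede row y; each pixel is 4 bytes.
    let offset = 4 * (y * width + x)
    return (data[offset], data[offset + 1], data[offset + 2], data[offset + 3])
}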

Graphing Polar Functions with UIBezierPath

Is it possible to graph a polar function with UIBezierPath? More than just circles, I'm talking about cardioids, limacons, lemniscates, etc. Basically I have a single UIView, and want to draw the shape in the view.
There are no built-in methods for shapes like that, but you can always approximate them with a series of very short straight lines. I've had reason to approximate a circle this way, and a circle made of ~100 straight lines looks identical to one drawn with UIBezierPath(ovalIn:). When doing this, it was easiest to create the points in polar coordinates first, then convert them to rectangular coordinates in a loop before passing the points array to a method that adds the lines to a bezier path.
Here's my Swift helper function (fully commented) that generates the (x, y) coordinates in a given CGRect from a polar coordinate function.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient: Double, thetaCoefficientDenominator: Double, cosScalar: Double, iPrecision: Double) -> Array<CGPoint> {
    // Frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by in the cos.
    // thetaCoefficientDenominator: The denominator of the thetaCoefficient
    // cosScalar: The number to multiply the cos by.
    // iPrecision: The step for continuity. 0 < iPrecision <= 2.pi. Defaults to 0.1

    // Clean inputs
    var precision: Double = 0.1 // Default precision
    if iPrecision != 0 { // Can't be 0.
        precision = iPrecision
    }

    // This is the polar function
    // var theta: Double = 0 // 0 <= theta <= 2pi
    // let r = cosScalar * cos(thetaCoefficient * theta)
    var points: Array<CGPoint> = [] // We store the points here
    for theta in stride(from: 0, to: 2 * Double.pi * thetaCoefficientDenominator, by: precision) { // Try to recreate continuity
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian
        let scaled_x = (Double(frame.width) - 0)/(cosScalar*2)*(x-cosScalar)+Double(frame.width) // Scale to the frame
        let scaled_y = (Double(frame.height) - 0)/(cosScalar*2)*(y-cosScalar)+Double(frame.height) // Scale to the frame
        points.append(CGPoint(x: scaled_x, y: scaled_y)) // Add the result
    }
    return points
}
Given those points, here's an example of how you would draw a UIBezierPath. In my example, this lives in a custom UIView I call UIPolarCurveView.
let flowerPath = UIBezierPath() // Declare my path

// Custom polar scalars
let k: Double = 9/4
let length: Double = 50

// Draw path
let points = cartesianCoordsForPolarFunc(frame: frame, thetaCoefficient: k, thetaCoefficientDenominator: 4, cosScalar: length, iPrecision: 0.01)
flowerPath.move(to: points[0])
for point in points.dropFirst() {
    flowerPath.addLine(to: point)
}
flowerPath.close()
Here's the result:
PS: If you plan on having multiple graphs in the same frame, make sure to modify the scaling addition by making the second cosScalar the largest of the cosScalars used. You can do this by adding an argument to the function in the example.
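To round the example out, here is one possible shape for the UIPolarCurveView mentioned above; a sketch that simply rebuilds and strokes the path in draw(_:) using the helper function and snippet from this answer.

import UIKit

class UIPolarCurveView: UIView {
    override func draw(_ rect: CGRect) {
        let points = cartesianCoordsForPolarFunc(frame: rect, thetaCoefficient: 9/4,
                                                 thetaCoefficientDenominator: 4,
                                                 cosScalar: 50, iPrecision: 0.01)
        guard let first = points.first else { return }
        let flowerPath = UIBezierPath()
        flowerPath.move(to: first)
        for point in points.dropFirst() {
            flowerPath.addLine(to: point)
        }
        flowerPath.close()
        UIColor.red.setStroke()
        flowerPath.stroke()
    }
}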
