Getting pixel value in the image - iOS

I am calculating the RGB values of pixels in my captured photo. I have this code:
func getPixelColorAtLocation(context: CGContext, point: CGPoint) -> Color {
    self.context = createARGBBitmapContext(imgView.image!)
    let data = CGBitmapContextGetData(context)
    let dataType = UnsafePointer<UInt8>(data)
    let offset = 4 * ((Int(imageHeight) * Int(point.x)) + Int(point.y))
    var color = Color()
    color.blue = dataType[offset]
    color.green = dataType[offset + 1]
    color.red = dataType[offset + 2]
    color.alpha = dataType[offset + 3]
    color.point.x = point.x
    color.point.y = point.y
    return color
}
But I am not sure what this line in the code means:
let offset = 4 * ((Int(imageHeight) * Int(point.x)) + Int(point.y))
Any help? Thanks in advance.

An image is a set of pixels. In order to get the pixel at point (x, y), you need to calculate its offset into that set.
If you use dataType[0], there is no offset, because it points to the place where the pointer begins. If you used dataType[10], it would mean you take the 10th element from the beginning.
Because we have an RGBA/ARGB colour model, each pixel occupies 4 bytes, so you multiply by 4. The offset within a row is x, and to reach the right row you add the width of the image multiplied by y:
offset = 4 * (x + width * y)
// offset, offset + 1, offset + 2, offset + 3 <- the four channel values you need
Imagine you have one long array with all the values in it. It becomes clear if you picture the implementation of a two-dimensional array in the form of a one-dimensional array. I hope that helps.
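That arithmetic can be sketched as follows (a minimal Python illustration rather than the question's Swift; the tiny buffer and the helper name are made up for demonstration):

```python
# Sketch: computing the byte offset of pixel (x, y) in a flat buffer
# that stores 4 bytes (e.g. B, G, R, A) per pixel, row by row.
def pixel_offset(x, y, width, bytes_per_pixel=4):
    """Index of the first channel byte of pixel (x, y)."""
    return bytes_per_pixel * (x + width * y)

# A tiny 3x2 "image": 6 pixels, 4 bytes each.
width, height = 3, 2
buffer = bytearray(width * height * 4)

# Mark the first channel of pixel (2, 1).
buffer[pixel_offset(2, 1, width)] = 255

off = pixel_offset(2, 1, width)  # 4 * (2 + 3 * 1) = 20
channel_0, channel_1, channel_2, channel_3 = buffer[off:off + 4]
```

Note that the question's code computes `height * x + y` instead of `x + width * y`; which indexing is correct depends on how the bitmap rows were laid out when the context was created.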

Related

How do we do rectilinear image conversion with swift and iOS 11+

How do we use the function Apple provides (below) to perform rectilinear conversion?
Apple provides a reference implementation in 'AVCameraCalibrationData.h' on how to correct images for lens distortion, i.e. going from images taken with a wide-angle or telephoto lens to the rectilinear 'real world' image. A pictorial representation is here:
To create a rectilinear image, we begin with an empty destination buffer and iterate through it row by row, calling the sample implementation below for each point in the output image, passing the lensDistortionLookupTable to find the corresponding point in the distorted image, and writing that value to the output buffer.
func lensDistortionPoint(for point: CGPoint, lookupTable: Data, distortionOpticalCenter opticalCenter: CGPoint, imageSize: CGSize) -> CGPoint {
    // The lookup table holds the relative radial magnification for n linearly spaced radii.
    // The first position corresponds to radius = 0
    // The last position corresponds to the largest radius found in the image.
    // Determine the maximum radius.
    let delta_ocx_max = Float(max(opticalCenter.x, imageSize.width - opticalCenter.x))
    let delta_ocy_max = Float(max(opticalCenter.y, imageSize.height - opticalCenter.y))
    let r_max = sqrt(delta_ocx_max * delta_ocx_max + delta_ocy_max * delta_ocy_max)
    // Determine the vector from the optical center to the given point.
    let v_point_x = Float(point.x - opticalCenter.x)
    let v_point_y = Float(point.y - opticalCenter.y)
    // Determine the radius of the given point.
    let r_point = sqrt(v_point_x * v_point_x + v_point_y * v_point_y)
    // Look up the relative radial magnification to apply in the provided lookup table
    let magnification: Float = lookupTable.withUnsafeBytes { (lookupTableValues: UnsafePointer<Float>) in
        let lookupTableCount = lookupTable.count / MemoryLayout<Float>.size
        if r_point < r_max {
            // Linear interpolation
            let val = r_point * Float(lookupTableCount - 1) / r_max
            let idx = Int(val)
            let frac = val - Float(idx)
            let mag_1 = lookupTableValues[idx]
            let mag_2 = lookupTableValues[idx + 1]
            return (1.0 - frac) * mag_1 + frac * mag_2
        } else {
            return lookupTableValues[lookupTableCount - 1]
        }
    }
    // Apply radial magnification
    let new_v_point_x = v_point_x + magnification * v_point_x
    let new_v_point_y = v_point_y + magnification * v_point_y
    // Construct output
    return CGPoint(x: opticalCenter.x + CGFloat(new_v_point_x), y: opticalCenter.y + CGFloat(new_v_point_y))
}
Additionally, Apple states: the "point", "opticalCenter", and "imageSize" parameters must be in the same coordinate system.
With that in mind, what values do we pass for opticalCenter and imageSize, and why? What exactly is "applying radial magnification" doing?
The opticalCenter parameter is actually named distortionOpticalCenter, so you can pass lensDistortionCenter from AVCameraCalibrationData.
imageSize is the width and height of the image you want to rectify.
"Applying radial magnification" changes the coordinates of the given point to where it would be with an ideal lens, without distortion.
"How do we use the function...": create an empty buffer with the same size as the distorted image. For each pixel of the empty buffer, apply the lensDistortionPoint function, and copy the pixel at the corrected coordinates from the distorted image into the empty buffer. After the whole buffer is filled, you should get an undistorted image.
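The interpolation logic inside the Swift sample can be sketched language-agnostically like this (a Python illustration, not Apple's code; the table values in a real app come from lensDistortionLookupTable and are not made up like these):

```python
import math

# Sketch of the lookup-table interpolation the Apple sample performs.
def magnification_for_radius(r_point, r_max, table):
    """Linearly interpolate the relative radial magnification for r_point."""
    n = len(table)
    if r_point < r_max:
        val = r_point * (n - 1) / r_max
        idx = int(val)
        frac = val - idx
        # Clamp idx + 1 so the last interval does not read past the table.
        return (1.0 - frac) * table[idx] + frac * table[min(idx + 1, n - 1)]
    return table[-1]

def distorted_point(point, optical_center, image_size, table):
    """Map a point in the output image to its location in the distorted image."""
    ox, oy = optical_center
    w, h = image_size
    # Largest radius from the optical center to any image corner direction.
    r_max = math.hypot(max(ox, w - ox), max(oy, h - oy))
    vx, vy = point[0] - ox, point[1] - oy
    mag = magnification_for_radius(math.hypot(vx, vy), r_max, table)
    # Scale the vector from the optical center by (1 + magnification).
    return (ox + vx * (1 + mag), oy + vy * (1 + mag))
```

With an all-zero table the mapping is the identity, which matches the intuition that zero magnification means no distortion correction.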

Polar coordinate point generation function upper bound is not 2Pi for theta?

So I wrote the following function that takes a frame and a polar coordinate function and graphs it out by generating the cartesian coordinates within that frame. Here's the code.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient:Double, cosScalar:Double, iPrecision:Double, largestScalar:Double) -> Array<CGPoint> {
    // Frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by in the cos.
    // cosScalar: The number to multiply the cos by.
    // largestScalar: Largest cosScalar used in this frame so that scaling is relative.
    // iPrecision: The step for continuity. 0 < iPrecision <= 2.pi. Defaults to 0.1
    // Clean inputs
    var precision:Double = 0.1 // Default precision
    if iPrecision != 0 { // Can't be 0.
        precision = iPrecision
    }
    // This is the polar function
    // var theta: Double = 0 // 0 <= theta <= 2pi
    // let r = cosScalar * cos(thetaCoefficient * theta)
    var points:Array<CGPoint> = [] // We store the points here
    for theta in stride(from: 0, to: Double.pi * 2, by: precision) { // TODO: Try to recreate continuity. WHY IS IT NOT 2PI
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian
        // newvalue = (max'-min')/(max-min)*(value-max)+max'
        let scaled_x = (Double(frame.width) - 0)/(largestScalar*2)*(x-largestScalar)+Double(frame.width) // Scale to the frame
        let scaled_y = (Double(frame.height) - 0)/(largestScalar*2)*(y-largestScalar)+Double(frame.height) // Scale to the frame
        points.append(CGPoint(x: scaled_x, y: scaled_y)) // Add the result
    }
    print("Done points")
    return points
}
The polar function I'm passing is r = 100*cos(9/4*theta) which looks like this.
I'm wondering why my function returns the following when theta goes from 0 to 2π. (Please note that in this image I'm drawing different-sized flowers, hence the repetition of the pattern.)
As you can see it's wrong. The weird thing is that when theta goes from 0 to 2π*100 it works (it also works for other values such as 2π*4 or 2π*20, but not 2π*2 or 2π*10) and I get this.
Why is this? Is the domain not 0 to 2Pi? I noticed that when going to 2Pi*100 it redraws some petals so there is a limit, but what is it?
PS: Precision here is 0.01 (enough to act like it's continuous). In my images I'm drawing the function in different sizes and overlapping (last image has 2 inner flowers).
No, the domain is not going to be 2π. Set up your code to draw slowly, taking 2 seconds for each 2π, and watch: it makes a whole series of full circles, and each time the local maxima and minima land at different points. Those are your petals. It looks like your formula repeats after 8π.
It looks like the period is the denominator of the theta coefficient times 2π. Your theta coefficient is 9/4, the denominator is 4, so the period is 4 * 2π, or 8π.
(That is based on playing in Wolfram Alpha and observing the results. I may be wrong.)
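A quick numeric check of that claim (a Python sketch, not from either post): r = cos(9/4 · θ) should repeat after 8π but not after 2π.

```python
import math

# For r = cos(k * theta) with k = p/q in lowest terms, the curve's period
# is q * 2*pi. Here k = 9/4, so the claimed period is 4 * 2*pi = 8*pi.
def r(theta, coefficient=9 / 4):
    return math.cos(coefficient * theta)

period = 4 * 2 * math.pi  # denominator of 9/4, times 2*pi
samples = [i * 0.1 for i in range(100)]

# Shifting by the full period changes nothing (up to float error)...
max_diff_period = max(abs(r(t) - r(t + period)) for t in samples)
# ...while shifting by only 2*pi gives a visibly different curve.
max_diff_2pi = max(abs(r(t) - r(t + 2 * math.pi)) for t in samples)
```

This is why stopping the loop at 2π draws only a fraction of the petals, while any multiple of 8π (such as 2π·4, 2π·20, or 2π·100) closes the curve.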

Graphing Polar Functions with UIBezierPath

Is it possible to graph a polar function with UIBezierPath? More than just circles, I'm talking about cardioids, limacons, lemniscates, etc. Basically I have a single UIView, and want to draw the shape in the view.
There are no built-in methods for shapes like that, but you can always approximate them with a series of very short straight lines. I've had reason to approximate a circle this way, and a circle made of ~100 straight lines looks identical to one drawn with ovalInRect. It was easiest to create the points in polar coordinates first, then convert them in a loop to rectangular coordinates before passing the points array to a method that adds the lines to a bezier path.
Here's my swift helper function (fully commented) that generates the (x,y) coordinates in a given CGRect from a polar coordinate function.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient:Double, thetaCoefficientDenominator:Double, cosScalar:Double, iPrecision:Double) -> Array<CGPoint> {
    // Frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by in the cos.
    // thetaCoefficientDenominator: The denominator of the thetaCoefficient
    // cosScalar: The number to multiply the cos by.
    // iPrecision: The step for continuity. 0 < iPrecision <= 2.pi. Defaults to 0.1
    // Clean inputs
    var precision:Double = 0.1 // Default precision
    if iPrecision != 0 { // Can't be 0.
        precision = iPrecision
    }
    // This is the polar function
    // var theta: Double = 0 // 0 <= theta <= 2pi
    // let r = cosScalar * cos(thetaCoefficient * theta)
    var points:Array<CGPoint> = [] // We store the points here
    for theta in stride(from: 0, to: 2*Double.pi * thetaCoefficientDenominator, by: precision) { // Iterate over the full period to recreate continuity
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian
        let scaled_x = (Double(frame.width) - 0)/(cosScalar*2)*(x-cosScalar)+Double(frame.width) // Scale to the frame
        let scaled_y = (Double(frame.height) - 0)/(cosScalar*2)*(y-cosScalar)+Double(frame.height) // Scale to the frame
        points.append(CGPoint(x: scaled_x, y: scaled_y)) // Add the result
    }
    return points
}
Given those points, here's an example of how you would draw a UIBezierPath. In my example this lives in a custom UIView subclass I call UIPolarCurveView.
let flowerPath = UIBezierPath() // Declare my path
// Custom Polar scalars
let k: Double = 9/4
let length: Double = 50
// Draw path
let points = cartesianCoordsForPolarFunc(frame: frame, thetaCoefficient: k, thetaCoefficientDenominator: 4, cosScalar: length, iPrecision: 0.01)
flowerPath.move(to: points[0])
for i in 2...points.count {
    flowerPath.addLine(to: points[i - 1])
}
flowerPath.close()
Here's the result:
PS: If you plan on having multiple graphs in the same frame, make sure to modify the scaling addition by making the second cosScalar the largest of the cosScalars used. You can do this by adding an argument to the function in the example.
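The point-generation and min-max scaling logic can be sketched compactly outside of UIKit (a Python illustration, not the answer's Swift; the frame size is made up):

```python
import math

# Sketch: generate cartesian points for the polar rose r = a * cos(k * theta)
# and min-max scale them into a width x height frame, as the answer does.
def polar_rose_points(width, height, k, a, denominator, step=0.01):
    points = []
    theta = 0.0
    end = 2 * math.pi * denominator  # full period for k = p/denominator
    while theta < end:
        radius = a * math.cos(k * theta)
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        # newvalue = (max' - min') / (max - min) * (value - max) + max'
        # maps x, y from [-a, a] onto [0, width] and [0, height].
        sx = width / (2 * a) * (x - a) + width
        sy = height / (2 * a) * (y - a) + height
        points.append((sx, sy))
        theta += step
    return points

pts = polar_rose_points(200, 200, 9 / 4, 50, 4)
# Every scaled point lands inside the 200 x 200 frame.
```

Because the raw coordinates stay within [-a, a], the affine rescale guarantees the whole curve fits the frame, which is exactly what the `scaled_x`/`scaled_y` lines do in the Swift version.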

Calculate vertices for n sided regular polygon

I have tried to follow this answer
It works fine for creating the polygons; however, I can see that it doesn't reach the edges of the containing rectangle.
The following gif shows what I mean. Especially for the 5-sided polygon it is clear that it doesn't "span" the rectangle, which I would like it to do.
This is the code I use for creating the vertices:
func verticesForEdges(_ edges: Int) -> [CGPoint] {
    var vertices: [CGPoint] = []
    for i in 0...edges {
        let angle = M_PI + 2.0 * M_PI * Double(i) / Double(edges)
        let x = (frame.width / 2.0) * CGFloat(sin(angle)) + (bounds.width / 2.0)
        let y = (frame.height / 2.0) * CGFloat(cos(angle)) + (bounds.height / 2.0)
        vertices.append(CGPoint(x: x, y: y))
    }
    return vertices
}
And this is the code that uses the vertices:
override func layoutSublayers() {
    super.layoutSublayers()
    let polygonPath = UIBezierPath()
    let vertices = verticesForEdges(edges)
    polygonPath.moveToPoint(vertices[0])
    for v in vertices {
        polygonPath.addLineToPoint(v)
    }
    polygonPath.closePath()
    self.path = polygonPath.CGPath
}
So the question is: how do I make the polygons fill out the rectangle?
Update:
The rectangle is not necessarily a square; it can have a different height from its width. From the comments it seems that I am fitting the polygon in a circle, but what is intended is to fit it in a rectangle.
If the first (i = 0) vertex is fixed at the middle of the top rectangle edge, we can calculate the minimal width and height of the bounding rectangle:
The rightmost vertex index:
ir = (N + 2) / 4 // N/4, rounded to the closest integer; not applicable to a triangle
MinWidth = 2 * R * Sin(ir * 2 * Pi / N)
The bottom vertex index:
ib = (N + 1) / 2 // N/2, rounded to the closest integer
MinHeight = R * (1 + Abs(Cos(ib * 2 * Pi / N)))
So for given rectangle dimensions we can calculate the R parameter needed to inscribe the polygon properly.
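Those formulas can be checked numerically (a Python sketch of the answer's math, not the poster's Swift):

```python
import math

# For an N-gon with vertex 0 at the middle of the top edge, compute the
# minimal bounding width and height for circumradius r, per the answer.
def min_bounds(n, r=1.0):
    ir = (n + 2) // 4  # rightmost vertex index (not applicable to triangles)
    ib = (n + 1) // 2  # bottom vertex index
    min_width = 2 * r * math.sin(ir * 2 * math.pi / n)
    min_height = r * (1 + abs(math.cos(ib * 2 * math.pi / n)))
    return min_width, min_height

w5, h5 = min_bounds(5)  # pentagon, R = 1: roughly 1.902 x 1.809
w4, h4 = min_bounds(4)  # square rotated to a diamond: exactly 2 x 2
```

Since a unit-circumradius pentagon only spans about 1.902 x 1.809 rather than the circle's 2 x 2, inscribing the polygon in a circle that touches the rectangle necessarily leaves gaps; scaling R up by the rectangle-to-min-bounds ratio closes them.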

Bitmap image manipulation

I want to replace GetPixel and SetPixel with the LockBits method, so I came across this F# lazy pixel-reading code:
open System.Drawing
open System.Drawing.Imaging
let pixels (image: Bitmap) =
    let Width = image.Width
    let Height = image.Height
    let rect = new Rectangle(0, 0, Width, Height)
    // Lock the image for access
    let data = image.LockBits(rect, ImageLockMode.ReadOnly, image.PixelFormat)
    // Copy the data
    let ptr = data.Scan0
    let stride = data.Stride
    let bytes = stride * data.Height
    let values : byte[] = Array.zeroCreate bytes
    System.Runtime.InteropServices.Marshal.Copy(ptr, values, 0, bytes)
    // Unlock the image
    image.UnlockBits(data)
    let pixelSize = 4 // <-- calculate this from the PixelFormat
    // Create and return a 3D-array with the copied data
    Array3D.init 3 Width Height (fun i x y ->
        values.[stride * y + x * pixelSize + i])
At the end, the code returns a 3D array with the copied data.
So the 3D array is a copied image; how do I edit the pixels of the 3D array, such as changing a colour? What is pixelSize for? Why store an image in a 3D byte array and not a 2D one?
For example, if we wanted to use a 2D array instead, and I want to change the colours of specified pixels, how do we go about doing that?
Do we do operations on the copied image byte array OUTSIDE the pixels function, or INSIDE the pixels function before unlocking the image?
If we no longer use GetPixel or SetPixel, how do I retrieve the colour of the pixels from the copied image byte[]?
If you don't understand my questions, please explain how to use the above code to do an operation such as "add 50" to the R, G, B of every pixel of a given image, without GetPixel or SetPixel.
The first component of the 3D array is the colour component. So at index 0,78,218 is the value of the blue component of the pixel at 78,218 (GDI+ stores the bytes of a 32bpp pixel in B, G, R, A order, so component 0 is blue).
Like this (note that System.Drawing.Color has no public constructor, so you build one with Color.FromArgb, passing the R, G, B bytes in that order):
Array2D.init Width Height (fun x y ->
    let color i = values.[stride * y + x * pixelSize + i] |> int
    Color.FromArgb(color 2, color 1, color 0))
Since the images is copied, it doesn't make a difference if you mess with it before or after unlocking the image. The locking is there to make sure nobody changes the image while you do the actual copying.
The values array is a flattening of a 2D array into a flat array. The 2D-index .[x,y] is at stride * y + x * pixelSize. The RGB components then have a byte each. This explains why this finds the i'th color component at x,y:
values.[stride * y + x * pixelSize + i] |> int
To add 50 to every pixel, its easier to use the original 3D array. Suppose you have an image myImage:
pixels (myImage) |> Array3D.map ((+) 50)
The type of this is Array3D<Color>, not Image. If you need the an Image, you'll need to construct that, somehow, from the Array3D you now have.
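The "add 50" operation itself can be sketched independently of F# (a Python illustration on a flat B, G, R, A buffer; the 1x2 sample image is made up, and unlike plain byte addition this version clamps at 255 instead of wrapping):

```python
# Sketch: add `amount` to every channel byte of a copied pixel buffer,
# clamping at 255 so bright channels don't wrap around.
def brighten(values, amount=50):
    """Return a new buffer; a real implementation would skip alpha bytes."""
    return bytes(min(v + amount, 255) for v in values)

# A 1x2 "image": two B, G, R, A pixels.
original = bytes([10, 20, 30, 255, 240, 250, 0, 255])
result = brighten(original)
```

Working on the copy also shows why lock placement doesn't matter here: the source image is untouched, and you would write the modified buffer back (e.g. via Marshal.Copy into a fresh locked bitmap) as a separate step.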
