I have some JSON data like bone = 24. Based on this JSON value I have to change some small dots to big dots, as shown in my image below.
As in my image above (please consider image 1), there are 10 big dots, and based on that bone.label value I have to change some of the big dots to small dots.
My doubt is:
How do I set the dot images (using UIImage or UIButton)? My idea is to add 10 UIImageViews and then change big dots to small dots with an if condition, but that way I am not able to set constraints. Is there any other way to place the dots on my circle?
Do it in code, not in Interface Builder, and without constraints. I doubt that circular arrangements work well with Auto Layout constraints.
If you position the dot images by their center instead of their frame, their location won't change with their size.
Use the same size for all of your UIImageViews and then just change the image to a small or large dot. Set the content mode of the UIImageViews to center, so that the images won't be scaled to fit the size of the UIImageView.
UPDATE
You could solve the positioning of the dots by drawing them in a UIView instead of using constraints. Here's a function that returns an array of CGPoints evenly distributed around a circle, which you can use when placing the dots:
func generatePoints(totalPoints: Int, center: CGPoint, radius: Double) -> [CGPoint] {
    let arc = 360.0                     // full circle
    let startAngle = 180.0              // starting angle in degrees
    let degToRad = Double.pi / 180.0
    var startRadians = startAngle * degToRad
    let incrementAngle = arc / Double(totalPoints)
    let incrementRadians = incrementAngle * degToRad
    var points = [CGPoint]()
    for _ in 0..<totalPoints {
        let xp = CGFloat(ceil(Double(center.x) + sin(startRadians) * radius))
        let yp = CGFloat(ceil(Double(center.y) + cos(startRadians) * radius))
        points.append(CGPoint(x: xp, y: yp))
        startRadians -= incrementRadians
    }
    return points
}
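For example, a minimal sketch of the wiring (the 40pt dot size, the 120pt radius, and the names bigDotIndices, bigDotImage, and smallDotImage are placeholders for whatever your JSON logic provides):

// Hypothetical usage: place ten equally sized image views on a circle
// and swap each one's image based on the JSON value.
let positions = generatePoints(totalPoints: 10,
                               center: CGPoint(x: view.bounds.midX, y: view.bounds.midY),
                               radius: 120)
for (index, point) in positions.enumerated() {
    let dotView = UIImageView(frame: CGRect(x: 0, y: 0, width: 40, height: 40))
    dotView.center = point          // position by center, so a size change doesn't move it
    dotView.contentMode = .center   // don't scale the dot image to fit the view
    dotView.image = bigDotIndices.contains(index) ? bigDotImage : smallDotImage
    view.addSubview(dotView)
}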
Objective-C version (somewhat ugly, because it's from an ooooooold project of mine):
+ (NSArray *)generatePoints:(NSInteger)totalPoints center:(CGPoint)center radius:(NSInteger)radius {
    float circleradius = (float)radius;
    const float arc = 360.f;
    float startAngle = 180.f;
    float mpi = M_PI / 180.f;
    float startRadians = startAngle * mpi;
    float incrementAngle = arc / (float)totalPoints;
    float incrementRadians = incrementAngle * mpi;
    NSMutableArray *pts = [[NSMutableArray alloc] initWithCapacity:totalPoints];
    while (totalPoints--) {
        float xp = ceilf(center.x + sinf(startRadians) * circleradius);
        float yp = ceilf(center.y + cosf(startRadians) * circleradius);
        [pts addObject:[NSValue valueWithCGPoint:CGPointMake(xp, yp)]];
        startRadians -= incrementRadians;
    }
    return pts;
}
Related
How do we use the function Apple provides (below) to perform rectilinear conversion?
Apple provides a reference implementation in 'AVCameraCalibrationData.h' showing how to correct images for lens distortion, i.e. going from images taken with a wide-angle or telephoto lens to the rectilinear 'real world' image. A pictorial representation is here:
To create a rectilinear image, we must begin with an empty destination buffer and iterate through it row by row, calling the sample implementation below for each point in the output image, passing the lensDistortionLookupTable to find the corresponding value in the distorted image, and writing it to the output buffer.
func lensDistortionPoint(for point: CGPoint, lookupTable: Data, distortionOpticalCenter opticalCenter: CGPoint, imageSize: CGSize) -> CGPoint {
    // The lookup table holds the relative radial magnification for n linearly spaced radii.
    // The first position corresponds to radius = 0.
    // The last position corresponds to the largest radius found in the image.

    // Determine the maximum radius.
    let delta_ocx_max = Float(max(opticalCenter.x, imageSize.width - opticalCenter.x))
    let delta_ocy_max = Float(max(opticalCenter.y, imageSize.height - opticalCenter.y))
    let r_max = sqrt(delta_ocx_max * delta_ocx_max + delta_ocy_max * delta_ocy_max)

    // Determine the vector from the optical center to the given point.
    let v_point_x = Float(point.x - opticalCenter.x)
    let v_point_y = Float(point.y - opticalCenter.y)

    // Determine the radius of the given point.
    let r_point = sqrt(v_point_x * v_point_x + v_point_y * v_point_y)

    // Look up the relative radial magnification to apply in the provided lookup table.
    let magnification: Float = lookupTable.withUnsafeBytes { (lookupTableValues: UnsafePointer<Float>) in
        let lookupTableCount = lookupTable.count / MemoryLayout<Float>.size
        if r_point < r_max {
            // Linear interpolation
            let val = r_point * Float(lookupTableCount - 1) / r_max
            let idx = Int(val)
            let frac = val - Float(idx)
            let mag_1 = lookupTableValues[idx]
            let mag_2 = lookupTableValues[idx + 1]
            return (1.0 - frac) * mag_1 + frac * mag_2
        } else {
            return lookupTableValues[lookupTableCount - 1]
        }
    }

    // Apply radial magnification.
    let new_v_point_x = v_point_x + magnification * v_point_x
    let new_v_point_y = v_point_y + magnification * v_point_y

    // Construct output.
    return CGPoint(x: opticalCenter.x + CGFloat(new_v_point_x), y: opticalCenter.y + CGFloat(new_v_point_y))
}
Additionally, Apple states: the "point", "opticalCenter", and "imageSize" parameters below must be in the same coordinate system.
With that in mind, what values do we pass for opticalCenter and imageSize and why? What exactly is the "applying radial magnification" doing?
The opticalCenter parameter is actually named distortionOpticalCenter, so you can pass lensDistortionCenter from AVCameraCalibrationData.
imageSize is the width and height of the image you want to rectify.
"Applying radial magnification": it changes the coordinates of the given point to where that point would be with an ideal, distortion-free lens; the point's offset from the optical center is scaled by (1 + magnification).
"How do we use the function...": create an empty buffer with the same size as the distorted image. For each pixel of the empty buffer, apply the lensDistortionPointForPoint function, and copy the pixel at the corrected coordinates from the distorted image into the empty buffer. After the whole buffer is filled, you should get an undistorted image.
I am using on-device image recognition from Catchoom CraftAR, working with the example available on GitHub: https://github.com/Catchoom/craftar-example-ios-on-device-image-recognition.
The image recognition works, and I would like to use the matchBoundingBox to draw small squares on all 4 corners of the match. Somehow my calculations are not working; I have based them on this article:
http://support.catchoom.com/customer/portal/articles/1886553-obtain-the-bounding-boxes-of-the-results-of-image-recognition
The square views are added to the scanning overlay, and this is how I am calculating the points at which to add the 4 views:
CraftARSearchResult *bestResult = [results objectAtIndex:0];
BoundingBox *box = bestResult.matchBoundingBox;
float w = self._preview.frame.size.width;
float h = self._preview.frame.size.height;
CGPoint tr = CGPointMake(w * box.topRightX , h * box.topRightY);
CGPoint tl = CGPointMake(w * box.topLeftX, h * box.topLeftY);
CGPoint br = CGPointMake(w * box.bottomRightX, h * box.bottomRightY);
CGPoint bl = CGPointMake(w * box.bottomLeftX, h * box.bottomLeftY);
The x position looks pretty close, but the y position is completely off and looks mirrored.
I am testing on iOS 10 on an iPhone 6s.
Am I missing something?
The issue was that I was using the preview frame to translate the points to screen coordinates. But the points that come with the bounding box are not relative to the preview view; they are relative to the video frame (as the support people at catchoom.com pointed out). The video frame size is set by the capturePreset, which only accepts two values, AVCaptureSessionPreset1280x720 and AVCaptureSessionPreset640x480; the default is AVCaptureSessionPreset1280x720.
So in my case I had to do the calculations with a size of 1280x720 and then convert from those coordinates to the coordinates of my preview view.
So it ended up looking like this:
let box = bestResult.matchBoundingBox
let wVideoFrame: CGFloat = 1280.0   // AVCaptureSessionPreset1280x720
let hVideoFrame: CGFloat = 720.0
// The video frame is landscape while the preview is portrait,
// so the video width maps to the preview height and vice versa.
let wRelativePreview = wVideoFrame / preview.frame.size.height
let hRelativePreview = hVideoFrame / preview.frame.size.width
var tl = CGPoint(x: wVideoFrame * CGFloat(box.topLeftX), y: hVideoFrame * CGFloat(box.topLeftY))
var tr = CGPoint(x: wVideoFrame * CGFloat(box.topRightX), y: hVideoFrame * CGFloat(box.topRightY))
var br = CGPoint(x: wVideoFrame * CGFloat(box.bottomRightX), y: hVideoFrame * CGFloat(box.bottomRightY))
var bl = CGPoint(x: wVideoFrame * CGFloat(box.bottomLeftX), y: hVideoFrame * CGFloat(box.bottomLeftY))
tl = CGPoint(x: tl.x / wRelativePreview, y: tl.y / hRelativePreview)
tr = CGPoint(x: tr.x / wRelativePreview, y: tr.y / hRelativePreview)
br = CGPoint(x: br.x / wRelativePreview, y: br.y / hRelativePreview)
bl = CGPoint(x: bl.x / wRelativePreview, y: bl.y / hRelativePreview)
// Four squares visualize the top-left, top-right, bottom-left, and bottom-right points.
var fr = vTL.frame
fr.origin = tl
vTL.frame = fr
fr.origin = tr
vTR.frame = fr
fr.origin = br
vBR.frame = fr
fr.origin = bl
vBL.frame = fr
Now the points looked quite OK on screen, but they looked somehow rotated. So I rotated the view 90 degrees:
// overlay is the container of the 4 squares that visualize the points on screen
overlay.transform = CGAffineTransform(rotationAngle: .pi / 2)
Note: this is not an official response from Catchoom support and it might not be 100% correct, but it worked quite well for me.
I have an app with a color wheel and I'm trying to pick a random color within the color wheel. However, I'm having problems verifying that the random point falls within the color wheel.
Here's the code as it currently is:
CGPoint randomPoint = CGPointMake(arc4random() % (int)colorWheel.bounds.size.width, arc4random() % (int)colorWheel.bounds.size.height);
UIColor *randomColor = [self colorOfPoint:randomPoint];
CGPoint pointInView = [colorWheel convertPoint:randomPoint fromView:colorWheel.window];
if (CGRectContainsPoint(colorWheel.bounds, pointInView)) {
    NSLog(@"%@", randomColor);
}
else {
    NSLog(@"out of bounds");
}
A couple of other methods of verifying the point that I've tried with no luck:
if (CGRectContainsPoint(colorWheel.frame, randomPoint)) {
    NSLog(@"%@", randomColor);
}
if ([colorWheel pointInside:[self.view convertPoint:randomPoint toView:colorWheel] withEvent:nil]) {
    NSLog(@"%@", randomColor);
}
Sometimes it outputs "out of bounds", and sometimes it just outputs that the color is white (the background around the color wheel is currently white, but there's no white in the color wheel image).
The color wheel image is a circle, so I'm not sure if that's throwing off the test, although white seems to pop up way too frequently for it to be just the transparent square border around the image reading as white.
If you want to generate a random point in a circle, you would do better to pick your point in polar coordinates and then convert it to Cartesian.
The polar coordinate space uses two dimensions, radius and angle. The radius is just the distance from the center, and the angle usually starts at "due east" for 0 and goes around counter-clockwise up to 2π (that's in radians; 360˚ of course in degrees).
Presumably your wheel is divided into simple wedges, so the radius actually doesn't matter; you just need to pick a random angle.
uint32_t angle = arc4random_uniform(360);
// Radius will just be halfway from the center to the edge.
// This assumes the circle is exactly enclosed, i.e., diameter == width
CGFloat radius = colorWheel.bounds.size.width / 4;
This function will give you a Cartesian point from your polar coordinates. Wikipedia explains the simple math if you're interested.
/** Convert the polar point (radius, theta) to a Cartesian (x,y). */
CGPoint poltocar(CGFloat radius, CGFloat theta)
{
    return (CGPoint){radius * cos(theta), radius * sin(theta)};
}
The function uses radians for theta, because sin() and cos() do, so change the angle to radians, and then you can convert:
CGFloat theta = (angle * M_PI) / 180.0;
CGPoint randomPoint = poltocar(radius, theta);
One last step: this circle has its origin at the same place as the view, that is, in the corner, so you need to translate the point to use the center as the origin.
CGPoint addPoints(CGPoint lhs, CGPoint rhs)
{
    return (CGPoint){lhs.x + rhs.x, lhs.y + rhs.y};
}
CGPoint offset = (CGPoint){colorWheel.bounds.size.width / 2,
                           colorWheel.bounds.size.height / 2};
randomPoint = addPoints(randomPoint, offset);
And your new randomPoint will always be within the circle.
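The same steps condensed into Swift, as a sketch (my consolidation, not part of the original answer; it assumes the wheel exactly fills its square bounds):

import UIKit

// Random angle, fixed halfway radius, polar-to-Cartesian conversion,
// then translate so the wheel's center is the origin.
func randomPointOnWheel(in bounds: CGRect) -> CGPoint {
    let theta = Double.random(in: 0..<(2 * Double.pi))
    let radius = Double(bounds.width) / 4   // halfway from the center to the edge
    return CGPoint(x: bounds.midX + CGFloat(radius * cos(theta)),
                   y: bounds.midY + CGFloat(radius * sin(theta)))
}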
I agree with @JoshCaswell's approach, but FYI, the reason the OP's code is not working is that the test for being inside a circle is incorrect.
The coordinate conversion is unnecessary, and the test against a rectangle is sure to be wrong. Instead, work out how far the random point is from the center and compare that distance with the radius.
CGFloat centerX = colorWheel.bounds.size.width / 2.0;
CGFloat centerY = colorWheel.bounds.size.height / 2.0;
CGFloat distanceX = centerX - randomPoint.x;
CGFloat distanceY = centerY - randomPoint.y;
CGFloat distance = distanceX*distanceX + distanceY*distanceY;
CGFloat radius = colorWheel.bounds.size.width / 2.0; // just a guess
CGFloat r2 = radius*radius;
// this compares the square of the distance with r^2, to save a sqrt operation
BOOL isInCircle = distance < r2;
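Putting the test to work, a short Swift sketch of rejection sampling (my addition; it keeps drawing points until one lands inside the circle, which also makes the result uniform over the disc's area, unlike picking a uniform angle at a fixed radius):

import UIKit

// Generate points uniformly in the square and return the first one whose
// squared distance from the center is inside the circle.
func randomPointInWheel(bounds: CGRect) -> CGPoint {
    let radius = bounds.width / 2
    let center = CGPoint(x: bounds.midX, y: bounds.midY)
    while true {
        let p = CGPoint(x: CGFloat.random(in: 0..<bounds.width),
                        y: CGFloat.random(in: 0..<bounds.height))
        let dx = p.x - center.x
        let dy = p.y - center.y
        if dx * dx + dy * dy < radius * radius { return p }
    }
}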
I have a UIView overlaid on a map, and I'm drawing some graphics in screen space between two of the coordinates using
- (CGPoint)convertCoordinate:(CLLocationCoordinate2D)coordinate toPointToView:(UIView *)view
The problem is that when the map is very zoomed in and tilted (3D-like), the pixel position of a coordinate that is far off-screen stops being consistent. Sometimes the function returns NaN, sometimes it returns the right number, and other times it jumps to the other side of the screen.
I'm not sure how to explain it better. Has anyone run into this?
During my research I found several solutions; one of them might work for you.
Solution 1:
int x = (int) ((MAP_WIDTH/360.0) * (180 + lon));
int y = (int) ((MAP_HEIGHT/180.0) * (90 - lat));
Solution 2:
func addLocation(coordinate: CLLocationCoordinate2D)
{
    // max MKMapPoint values
    let maxY = Double(267995781)
    let maxX = Double(268435456)
    let mapPoint = MKMapPointForCoordinate(coordinate)
    let normalizedPointX = CGFloat(mapPoint.x / maxX)
    let normalizedPointY = CGFloat(mapPoint.y / maxY)
    print(normalizedPointX)
    print(normalizedPointY)
}
Solution 3:
x = (total width of image in px) * (180 + longitude) / 360
y = (total height of image in px) * (90 - latitude) / 180
Note: when using a negative longitude or latitude, make sure to add or subtract the negative number, i.e. +(-92) or -(-35), which would actually be -92 and +35.
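Solutions 1 and 3 are the same equirectangular mapping; here it is as a Swift sketch (my consolidation; it assumes a flat, unrotated world-map image, so it won't help with the tilted 3D camera case from the question):

import UIKit

// Equirectangular projection: longitude maps to x, latitude maps to y.
func equirectangularPoint(latitude: Double, longitude: Double, mapSize: CGSize) -> CGPoint {
    let x = Double(mapSize.width) / 360.0 * (180.0 + longitude)
    let y = Double(mapSize.height) / 180.0 * (90.0 - latitude)
    return CGPoint(x: x, y: y)
}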
This is for an iPad application, but it is essentially a math question.
I need to draw a circular arc of varying (monotonically increasing) line width. At the beginning of the curve, it would have a starting thickness (let's say 2pts) and then the thickness would smoothly increase until the end of the arc where it would be at its greatest thickness (let's say 12pts).
I figure the best way to make this is by creating a UIBezierPath and filling the shape. My first attempt was to use two circular arcs (with offset centers), and that worked fine up to 90°, but the arc will often be between 90° and 180°, so that approach won't cut it.
My current approach is to make a slight spiral (one slightly growing from the circular arc and one slightly shrinking) using bezier quad or cubic curves. The question is where do I put the control points so that the deviation from the circular arc (aka the shape "thickness") is the value I want.
Constraints:
The shape must be able to start and end at an arbitrary angle (within 180° of each other)
The "thickness" of the shape (deviation from the circle) must start and end with the given values
The "thickness" must increase monotonically (it can't get bigger and then smaller again)
It has to look smooth to the eye, there can't be any sharp bends
I am open to other solutions as well.
My approach just constructs 2 circular arcs and fills the region in between. The tricky bit is figuring out the centers and radii of these arcs. It looks quite good provided the thicknesses are not too large. (Cut and paste and decide for yourself if it meets your needs.) It could possibly be improved by use of a clipping path.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGMutablePathRef path = CGPathCreateMutable();

    // As appropriate for iOS, the code below assumes a coordinate system with
    // the x-axis pointing to the right and the y-axis pointing down (flipped from the standard Cartesian convention).
    // Therefore, 0 degrees = East, 90 degrees = South, 180 degrees = West,
    // -90 degrees = 270 degrees = North (once again, flipped from the standard Cartesian convention).
    CGFloat startingAngle = 90.0;  // South
    CGFloat endingAngle = -45.0;   // North-East
    BOOL weGoFromTheStartingAngleToTheEndingAngleInACounterClockwiseDirection = YES; // change this to NO if necessary
    CGFloat startingThickness = 2.0;
    CGFloat endingThickness = 12.0;
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat meanRadius = 0.9 * fmin(self.bounds.size.width / 2.0, self.bounds.size.height / 2.0);
    // the parameters above should be supplied by the user

    // the parameters below are derived from the parameters supplied above
    CGFloat deltaAngle = fabs(endingAngle - startingAngle);
    // projectedEndingThickness is the ending thickness we would have if the two arcs
    // subtended an angle of 180 degrees at their respective centers instead of deltaAngle
    CGFloat projectedEndingThickness = startingThickness + (endingThickness - startingThickness) * (180.0 / deltaAngle);
    CGFloat centerOffset = (projectedEndingThickness - startingThickness) / 4.0;
    CGPoint centerForInnerArc = CGPointMake(center.x + centerOffset * cos(startingAngle * M_PI / 180.0),
                                            center.y + centerOffset * sin(startingAngle * M_PI / 180.0));
    CGPoint centerForOuterArc = CGPointMake(center.x - centerOffset * cos(startingAngle * M_PI / 180.0),
                                            center.y - centerOffset * sin(startingAngle * M_PI / 180.0));
    CGFloat radiusForInnerArc = meanRadius - (startingThickness + projectedEndingThickness) / 4.0;
    CGFloat radiusForOuterArc = meanRadius + (startingThickness + projectedEndingThickness) / 4.0;

    CGPathAddArc(path,
                 NULL,
                 centerForInnerArc.x,
                 centerForInnerArc.y,
                 radiusForInnerArc,
                 endingAngle * (M_PI / 180.0),
                 startingAngle * (M_PI / 180.0),
                 !weGoFromTheStartingAngleToTheEndingAngleInACounterClockwiseDirection);
    CGPathAddArc(path,
                 NULL,
                 centerForOuterArc.x,
                 centerForOuterArc.y,
                 radiusForOuterArc,
                 startingAngle * (M_PI / 180.0),
                 endingAngle * (M_PI / 180.0),
                 weGoFromTheStartingAngleToTheEndingAngleInACounterClockwiseDirection);
    CGContextAddPath(context, path);
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextFillPath(context);
    CGPathRelease(path);
}
One solution could be to generate a polyline manually. This is simple, but it has the disadvantage that you'd have to scale up the number of points you generate if the control is displayed at high resolution. I don't know enough about iOS to give you iOS/ObjC sample code, but here's a Python sketch:
import math

def arc_points(lower, upper, radius, max_width):
    """Outline of an arc from `lower` to `upper` degrees whose width
    grows linearly from 0 to `max_width`."""
    inner_side_points = []  # (angle in degrees, radius) polar pairs
    outer_side_points = []
    width_step = max_width / (upper - lower)
    width = 0.0
    # could use a finer step if needed
    for angle in range(lower, upper):
        inner_side_points.append((angle, radius - width / 2))
        outer_side_points.append((angle, radius + width / 2))
        width += width_step
    # Flip one of the arrays and join them to make a continuous path.
    # (We could have built one of the arrays backwards to avoid this.)
    outer_side_points.reverse()
    all_points = inner_side_points + outer_side_points
    # convert polar to rectangular
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a, r in all_points]
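To turn those points into something drawable on iOS, you could feed them to a UIBezierPath; a sketch (my addition, matching the (x, y) tuples produced above):

import UIKit

// Build a closed path by connecting the outline points with straight lines.
func outlinePath(from points: [(Double, Double)]) -> UIBezierPath {
    let path = UIBezierPath()
    guard let first = points.first else { return path }
    path.move(to: CGPoint(x: first.0, y: first.1))
    for (x, y) in points.dropFirst() {
        path.addLine(to: CGPoint(x: x, y: y))
    }
    path.close()
    return path
}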
A view with a spiral (2023)
It's very easy to draw a spiral mathematically and there are plenty of examples around.
https://github.com/mabdulsubhan/UIBezierPath-Spiral/blob/master/UIBezierPath%2BSpiral.swift
Put it in a view in the obvious way:
class Example: UIView {
    private lazy var spiral: CAShapeLayer = {
        let s = CAShapeLayer()
        s.strokeColor = UIColor.systemPurple.cgColor
        s.fillColor = UIColor.clear.cgColor
        s.lineWidth = 12.0
        s.lineCap = .round
        layer.addSublayer(s)
        return s
    }()
    private lazy var sp: CGPath = {
        let s = UIBezierPath.getSpiralPath(
            center: bounds.centerOfCGRect(),
            startRadius: 0,
            spacePerLoop: 4,
            startTheta: 0,
            endTheta: CGFloat.pi * 2 * 5,
            thetaStep: 10.radians)
        return s.cgPath
    }()
    override func layoutSubviews() {
        super.layoutSubviews()
        clipsToBounds = true
        spiral.path = sp
    }
}
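The snippet assumes two small helpers that aren't part of UIKit; the names come from the call sites above, and the implementations below are my best guess at what they should be:

import UIKit

extension CGRect {
    // Center of the rect, used as the spiral's center.
    func centerOfCGRect() -> CGPoint { CGPoint(x: midX, y: midY) }
}

extension BinaryInteger {
    // Degrees to radians, so `10.radians` reads naturally at the call site.
    var radians: CGFloat { CGFloat(self) * .pi / 180 }
}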
}