I want to detect the teeth in an image and whiten them with a slider.
I found the following code for mouth detection, but how should I detect the exact teeth location so that I can whiten them? Is there any third-party library for this?
// draw a CI image with the previously loaded face detection picture
CIImage* image = [CIImage imageWithCGImage:imageView.image.CGImage];

// create a face detector - since speed is not an issue now we'll use a high accuracy detector
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

// create an array containing all the detected faces from the detector
NSArray* features = [detector featuresInImage:image];

CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);

for (CIFaceFeature* faceFeature in features)
{
    // Get the face rect: translate Core Image coordinates to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

    // create a UIView using the bounds of the face
    UIView* faceView = [[UIView alloc] initWithFrame:faceRect];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];

    // get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;

    // add the new view to create a box around the face
    [imageView addSubview:faceView];

    if (faceFeature.hasMouthPosition)
    {
        // Get the mouth position translated to imageView UIKit coordinates
        const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);

        // Create a UIView to represent the mouth; its size depends on the width of the face.
        UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                 mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                 faceWidth*MOUTH_SIZE_RATE,
                                                                 faceWidth*MOUTH_SIZE_RATE)];

        // make the mouth look nice and add it to the view
        mouth.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.3];
        //mouth.layer.cornerRadius = faceWidth*MOUTH_SIZE_RATE*0.5;
        [imageView addSubview:mouth];

        NSLog(@"Mouth %g %g", faceFeature.mouthPosition.x, faceFeature.mouthPosition.y);
    }
}
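There is no built-in teeth detector in Core Image, so a common approach is to approximate the teeth region from the mouth rect found above and drive a lightening filter with the slider. Below is a minimal sketch, assuming a mouthRect already expressed in Core Image coordinates and a slider value from 0 to 1; the filter choice and scale factors are my assumptions, not a true tooth segmentation:

- (UIImage *)whitenTeethInImage:(UIImage *)source mouthRect:(CGRect)mouthRect amount:(CGFloat)amount
{
    // Note: mouthRect must be in Core Image coordinates (origin at bottom-left),
    // e.g. built around faceFeature.mouthPosition before the UIKit flip above.
    CIImage *input = [CIImage imageWithCGImage:source.CGImage];

    // Lighten and slightly desaturate; `amount` comes straight from the slider (0..1).
    CIFilter *whiten = [CIFilter filterWithName:@"CIColorControls"];
    [whiten setValue:input forKey:kCIInputImageKey];
    [whiten setValue:@(amount * 0.3) forKey:kCIInputBrightnessKey];
    [whiten setValue:@(1.0 - amount * 0.4) forKey:kCIInputSaturationKey];

    // Restrict the adjustment to the mouth area and composite it back over the original.
    CIImage *whitened = [[whiten outputImage] imageByCroppingToRect:mouthRect];
    CIImage *composited = [whitened imageByCompositingOverImage:input];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgResult = [context createCGImage:composited fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgResult];
    CGImageRelease(cgResult);
    return result;
}

For actual tooth boundaries you would need a third-party library; color thresholding with OpenCV inside the mouth rect is a common choice, since Core Image alone only gives you the mouth position.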
The task is to create an image made of a gradient between many points that each represent a different color. For example, drawing a gradient between two points is not a problem, but near the line between those two points there can be another point whose color changes the gradient between them. I'd appreciate any help.
You'll probably want to look into Core Graphics... I have posted two examples below of Core Graphics gradient rendering.
1D Linear gradient with multiple colors
This method will create an image with a simple 1D linear gradient (from top to bottom) with a given array of colors, all evenly spaced. You can customise your own spacings by setting your own values in gradLocs[].
-(UIImage*) gradientImageWithSize:(CGSize)size withColors:(NSArray*)colors {
    // Start context
    UIGraphicsBeginImageContext(size);
    CGContextRef c = UIGraphicsGetCurrentContext();

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger colorCount = [colors count];
    CGFloat gradLocs[colorCount];
    // Evenly space the color stops from 0 to 1 (note: use floating-point division,
    // or every stop collapses to 0).
    for (int i = 0; i < colorCount; i++) {
        gradLocs[i] = (colorCount > 1) ? (CGFloat)i / (colorCount - 1) : 0;
    }

    // Create a simple linear gradient with the colors provided.
    CGGradientRef grad = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, gradLocs);
    CGColorSpaceRelease(colorSpace);

    // Draw the gradient from the top edge to the bottom edge of the context.
    CGContextDrawLinearGradient(c, grad, CGPointZero, (CGPoint){0, size.height}, 0);
    CGGradientRelease(grad);

    // Grab resulting image from context
    UIImage* resultImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return resultImg;
}
Usage:
NSArray* colors = @[(id)[UIColor redColor].CGColor, (id)[UIColor greenColor].CGColor, (id)[UIColor purpleColor].CGColor, (id)[UIColor blueColor].CGColor];
UIImage* gradient = [self gradientImageWithSize:(CGSize){500, 500} withColors:colors];
2D gradient with multiple colors
This method, while not a very accurate way of generating a 2D gradient, is definitely the easiest way through Core Graphics. The radius parameter defines how far away from the point that the color has influence. To do this I created a custom object to store the gradient information at a given point:
/// Defines a simple point for use in a gradient
@interface gradPoint : NSObject
/// The color at the given point
@property (nonatomic) UIColor* color;
/// The position of the point
@property (nonatomic) CGPoint point;
/// The radius of the point (how far the color will have influence)
@property (nonatomic) CGFloat radius;
@end

@implementation gradPoint
+(instancetype) pointWithColor:(UIColor*)color point:(CGPoint)point radius:(CGFloat)radius {
    gradPoint* p = [[self alloc] init];
    p.color = color;
    p.point = point;
    p.radius = radius;
    return p;
}
@end
The gradient generation method then takes a size and an array of these gradPoint objects.
-(UIImage*) gradient2DImageWithSize:(CGSize)size gradPointArray:(NSArray*)gradPoints {
    UIGraphicsBeginImageContextWithOptions(size, YES, 0);
    CGContextRef c = UIGraphicsGetCurrentContext();

    // Fill with white so the multiply blend mode below has a neutral base.
    CGContextSetFillColorWithColor(c, [UIColor whiteColor].CGColor);
    CGContextFillRect(c, (CGRect){CGPointZero, size});
    CGContextSetBlendMode(c, kCGBlendModeMultiply);

    CGFloat gradLocs[] = {0, 1};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Draw a radial gradient (the point's color fading to white) around each point.
    for (gradPoint* point in gradPoints) {
        NSArray* colors = @[(id)point.color.CGColor, (id)[UIColor whiteColor].CGColor];
        CGGradientRef grad = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, gradLocs);
        CGContextDrawRadialGradient(c, grad, point.point, 0, point.point, point.radius, 0);
        CGGradientRelease(grad);
    }
    CGColorSpaceRelease(colorSpace);

    UIImage* i = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return i;
}
Usage:
CGFloat gradRadius = frame.size.height;
NSArray* gradPoints = @[
    [gradPoint pointWithColor:[UIColor redColor] point:CGPointZero radius:gradRadius],
    [gradPoint pointWithColor:[UIColor cyanColor] point:(CGPoint){0, frame.size.height} radius:gradRadius],
    [gradPoint pointWithColor:[UIColor yellowColor] point:(CGPoint){frame.size.height, 0} radius:gradRadius],
    [gradPoint pointWithColor:[UIColor greenColor] point:(CGPoint){frame.size.height, frame.size.height} radius:gradRadius]
];
UIImage* gradImage = [self gradient2DImageWithSize:(CGSize){gradRadius, gradRadius} gradPointArray:gradPoints];
This method works best with a square image, with the radius set to the height/width.
Our app contains several MKPolyline boundaries that each form a closed polygon. These are primarily displayed as an MKOverlay on an MKMapView, but I'm looking for a way to display these polygons as small thumbnails, not on the MKMapView but as a standard UIImage or UIImageView.
To be clear, I want these small thumbnails displayed as small shapes with a stroke color and a fill color, but without any map background.
Could anyone help me with this?
Here you go.
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color {
    // load the image
    UIImage *img = [UIImage imageNamed:name];

    // begin a new image context, to draw our colored image onto
    UIGraphicsBeginImageContext(img.size);

    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set the fill color
    [color setFill];

    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(context, 0, img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // set the blend mode to color burn, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
    CGContextDrawImage(context, rect, img.CGImage);

    // set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
    CGContextClipToMask(context, rect, img.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);

    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // return the color-burned image
    return coloredImg;
}
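Assuming the method is declared in a UIImage category (the asset name below is a placeholder), usage would look something like:

UIImage *thumbnail = [UIImage imageNamed:@"polygonShape" withColor:[UIColor redColor]];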
Please check the original post for a detailed description.
I had to do exactly the same thing in my own app. Here is my solution: I generate a UIView which represents the path's shape. In your case the path is an MKPolyline.
Here is my code:
+ (UIView *)createShapeForGPX:(GPX *)gpx
                withFrameSize:(CGSize)frameSize
                    lineColor:(UIColor *)lineColor {
    // Array of coordinates (adapt this code to your coordinates).
    // Note: in my case I have a double loop because points are grouped in paths
    // and one route can have many paths, so I concatenate all points into one
    // array to simplify the code for your case. If you also have many paths,
    // you will have to change this code a little.
    NSMutableArray<NSValue *> *dataPoints = [NSMutableArray new];
    for (NSArray *path in gpx.paths) {
        for (NSDictionary *point in path) {
            double latitude = [point[@"latitude"] doubleValue];
            double longitude = [point[@"longitude"] doubleValue];
            [dataPoints addObject:[NSValue valueWithCGPoint:CGPointMake(longitude, latitude)]];
        }
    }

    // Graph bounds (you need to calculate topRightCoordinate and bottomLeftCoordinate;
    // you can do it in the previous loop)
    double lngBorder = gpx.topRightCoordinate.longitude - gpx.bottomLeftCoordinate.longitude;
    double latBorder = gpx.topRightCoordinate.latitude - gpx.bottomLeftCoordinate.latitude;
    double middleLng = gpx.bottomLeftCoordinate.longitude + (lngBorder / 2.f);
    double middleLat = gpx.bottomLeftCoordinate.latitude + (latBorder / 2.f);
    double boundLength = MAX(lngBorder, latBorder);

    // *** Drawing ***
    CGFloat margin = 4.f;
    UIView *graph = [UIView new];
    graph.frame = CGRectMake(0, 0, frameSize.width - margin, frameSize.height - margin);

    CAShapeLayer *line = [CAShapeLayer layer];
    UIBezierPath *linePath = [UIBezierPath bezierPath];

    float xAxisMin = middleLng - (boundLength / 2.f);
    float xAxisMax = middleLng + (boundLength / 2.f);
    float yAxisMin = middleLat - (boundLength / 2.f);
    float yAxisMax = middleLat + (boundLength / 2.f);

    int i = 0;
    while (i < dataPoints.count) {
        CGPoint point = [dataPoints[i] CGPointValue];
        float xRatio = 1.0 - ((xAxisMax - point.x) / (xAxisMax - xAxisMin));
        float yRatio = 1.0 - ((yAxisMax - point.y) / (yAxisMax - yAxisMin));
        float x = xRatio * (frameSize.width - margin / 2);
        float y = (1.0 - yRatio) * (frameSize.height - margin);
        if (i == 0) {
            [linePath moveToPoint:CGPointMake(x, y)];
        } else {
            [linePath addLineToPoint:CGPointMake(x, y)];
        }
        i++;
    }

    // Line
    line.lineWidth = 0.8;
    line.path = linePath.CGPath;
    line.fillColor = [[UIColor clearColor] CGColor];
    line.strokeColor = [lineColor CGColor];
    [graph.layer addSublayer:line];
    graph.backgroundColor = [UIColor clearColor];

    // Final view (adds margins)
    UIView *finalView = [UIView new];
    finalView.backgroundColor = [UIColor clearColor];
    finalView.frame = CGRectMake(0, 0, frameSize.width, frameSize.height);
    graph.center = CGPointMake(CGRectGetMidX(finalView.bounds), CGRectGetMidY(finalView.bounds));
    [finalView addSubview:graph];

    return finalView;
}
In my case the GPX class contains a few values:
- NSArray<NSArray<NSDictionary *> *> *paths; : contains all points of all paths. In your case I think this corresponds to your MKPolyline.
- topRightCoordinate and bottomLeftCoordinate : two CLLocationCoordinate2D values that represent the top-right and bottom-left virtual coordinates of my path (you have to calculate them as well; a sketch follows).
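A minimal sketch of that bounding-box calculation, assuming the same latitude/longitude dictionary layout as above (the variable names are mine, not from the original answer):

double minLat = DBL_MAX, minLng = DBL_MAX;
double maxLat = -DBL_MAX, maxLng = -DBL_MAX;
for (NSArray *path in gpx.paths) {
    for (NSDictionary *point in path) {
        double lat = [point[@"latitude"] doubleValue];
        double lng = [point[@"longitude"] doubleValue];
        // Track the extremes to get the virtual corners of the route.
        minLat = MIN(minLat, lat); maxLat = MAX(maxLat, lat);
        minLng = MIN(minLng, lng); maxLng = MAX(maxLng, lng);
    }
}
gpx.bottomLeftCoordinate = CLLocationCoordinate2DMake(minLat, minLng);
gpx.topRightCoordinate = CLLocationCoordinate2DMake(maxLat, maxLng);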
You call this method like this:
UIView *shape = [YOURCLASS createShapeForGPX:gpx withFrameSize:CGSizeMake(32, 32) lineColor:[UIColor blackColor]];
This solution is based on the question "how to draw a line graph in ios? Any control which will help me show graph data in ios", which gives a solution for drawing a graph from points.
Not all of this code may be useful to you (the margins, for example), but it should help you find your own solution.
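Since the original question asked for a UIImage rather than a UIView, a standard layer snapshot (not part of my original answer) can convert the returned view into an image:

UIView *shape = [YOURCLASS createShapeForGPX:gpx withFrameSize:CGSizeMake(32, 32) lineColor:[UIColor blackColor]];
// Render the shape view's layer into a bitmap context to obtain a UIImage.
UIGraphicsBeginImageContextWithOptions(shape.bounds.size, NO, 0);
[shape.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();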
Here is how it displays in my app (in a UITableView):
I am trying to crop a UIImage to a face that has been detected using the built-in CoreImage face detection functionality. I seem to be able to detect the face properly, but when I attempt to crop my UIImage to the bounds of the face, it is nowhere near correct. My face detection code looks like this:
-(NSArray *)facesForImage:(UIImage *)image {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSDictionary *opts = @{CIDetectorAccuracy : CIDetectorAccuracyHigh};
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts];
    NSArray *features = [detector featuresInImage:ciImage];
    return features;
}
...and the code to crop the image looks like this:
-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if ((index < 0) || (index >= faces.count)) {
        DDLogError(@"Invalid face index provided");
        return nil;
    }
    CIFaceFeature *face = [faces objectAtIndex:index];
    CGRect faceBounds = face.bounds;
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // the UIImage retains the CGImage, so release our reference
    return croppedImage;
}
I'm testing with an image containing only one face, and the face appears to be detected with no problem. But the crop is way off. Any idea what the problem with this code could be?
For anyone else having a similar issue -- transforming CGImage coordinates to UIImage coordinates -- I found this great article explaining how to use CGAffineTransform to accomplish exactly what I was looking for.
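In essence, the fix is to flip the Core Image rect into UIKit's top-left coordinate space before cropping. A minimal sketch reusing face and image from the code above (and assuming the UIImage's point size matches its pixel size):

// Core Image uses a bottom-left origin; CGImage cropping expects top-left.
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -image.size.height);
CGRect flippedBounds = CGRectApplyAffineTransform(face.bounds, transform);

CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, flippedBounds);
UIImage *croppedFace = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);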
The code to convert the face geometry from Core Image to UIImage coordinates is fussy. I haven't messed with it in quite a while, but I remember it giving me fits, especially when dealing with images that are rotated.
I suggest looking at the demo app "SquareCam", which you can find with a search of the Xcode docs. It draws red squares around faces, which is a good start.
Note that the rectangle you get from Core Image is always a square, and sometimes crops a little too closely. You may have to make your cropping rectangles taller and wider.
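For example, you could pad the detected rect before cropping; the inset factors below are arbitrary assumptions, reusing faceBounds and image from the code above:

// Negative insets expand the rect; intersect with the image rect to stay in bounds.
CGRect padded = CGRectInset(faceBounds, -faceBounds.size.width * 0.15,
                            -faceBounds.size.height * 0.25);
padded = CGRectIntersection(padded, (CGRect){CGPointZero, image.size});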
This class does the trick! It's a quite flexible and handy extension of UIImage. https://github.com/kylestew/KSMagicalCrop
Try this code. It worked for me.
CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

// Container for the face attributes
UIView* faceContainer = [[UIView alloc] initWithFrame:facePicture.frame];

// flip faceContainer on the y-axis to match the coordinate system used by Core Image
[faceContainer setTransform:CGAffineTransformMakeScale(1, -1)];

// create a face detector - since speed is not an issue we'll use a high accuracy detector
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

// create an array containing all the detected faces from the detector
NSArray* features = [detector featuresInImage:image];

for (CIFaceFeature* faceFeature in features)
{
    // get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;

    // create a UIView using the bounds of the face
    UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

    if (faceFeature.hasLeftEyePosition)
    {
        // create a UIView with a size based on the width of the face
        leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth*0.15,
                                                               faceFeature.leftEyePosition.y - faceWidth*0.15,
                                                               faceWidth*0.3,
                                                               faceWidth*0.3)];
    }
    if (faceFeature.hasRightEyePosition)
    {
        // create a UIView with a size based on the width of the face
        RightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth*0.15,
                                                                faceFeature.rightEyePosition.y - faceWidth*0.15,
                                                                faceWidth*0.3,
                                                                faceWidth*0.3)];
    }
    if (faceFeature.hasMouthPosition)
    {
        // create a UIView with a size based on the width of the face
        mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth*0.2,
                                                         faceFeature.mouthPosition.y - faceWidth*0.2,
                                                         faceWidth*0.4,
                                                         faceWidth*0.4)];
    }

    [view addSubview:faceContainer];

    // Flip the face rect from Core Image coordinates into UIKit coordinates before cropping.
    CGFloat y = view.frame.size.height - (faceView.frame.origin.y + faceView.frame.size.height);
    CGRect rect = CGRectMake(faceView.frame.origin.x, y, faceView.frame.size.width, faceView.frame.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([<Original Image> CGImage], rect);
    croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    //----cropped image-------//
    UIImageView *img = [[UIImageView alloc] initWithFrame:CGRectMake(faceView.frame.origin.x, y,
                                                                     faceView.frame.size.width,
                                                                     faceView.frame.size.height)];
    img.image = croppedImage;
}
// (leftEyeView, RightEyeView, mouth, croppedImage, and view appear to be ivars in the original post.)
A track path is defined using data points with the origin (0,0) at top left.
From these points a UIBezierPath is created.
The car's movement along the path is given as a distance, from which a percentage is calculated.
A category on UIBezierPath provides the coordinates for the car along the path.
#import "UIBezierPath-Points.h"
CGPoint currCoordinates = [self.trackPath pointAtPercent:percent withSlope:nil];
The problem is that SpriteKit renders the track path upside-down.
In an SKScene class …
SKShapeNode *shapeNode = [[SKShapeNode alloc] init];
[shapeNode setPath:self.trackPath.CGPath]; // <-- path is rendered upside-down?
[shapeNode setStrokeColor:[UIColor blueColor]];
[self addChild:shapeNode];
I have attempted various transforms on the UIBezierPath, but cannot get the coordinates to convert to SpriteKit coordinates, which I believe are centre-based.
Does anyone know how to convert from UIBezierPath coordinates to SpriteKit (SKScene) coordinates?
Excuse me, I was thinking it was self-explanatory.
From the source:
void ApplyCenteredPathTransform(UIBezierPath *path, CGAffineTransform transform)
{
    CGRect rect = CGPathGetPathBoundingBox(path.CGPath);
    CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));
    CGAffineTransform t = CGAffineTransformIdentity;
    t = CGAffineTransformTranslate(t, center.x, center.y);
    t = CGAffineTransformConcat(transform, t);
    t = CGAffineTransformTranslate(t, -center.x, -center.y);
    [path applyTransform:t];
}

void ScalePath(UIBezierPath *path, CGFloat sx, CGFloat sy)
{
    CGAffineTransform t = CGAffineTransformMakeScale(sx, sy);
    ApplyCenteredPathTransform(path, t);
}
then
ScalePath(path, 1.0, -1.0);
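Applied to the question's code, flipping the path in place before handing it to the shape node would look something like this (a sketch, not from the original answer):

// Flip the UIKit path vertically so SpriteKit (bottom-left origin) draws it right side up.
ScalePath(self.trackPath, 1.0, -1.0);

SKShapeNode *shapeNode = [[SKShapeNode alloc] init];
[shapeNode setPath:self.trackPath.CGPath];
[shapeNode setStrokeColor:[UIColor blueColor]];
[self addChild:shapeNode];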
I have a CABasicAnimation that creates an iris wipe effect on an image. In short, the animation works fine in the simulator, but there is no joy on the device. The timers still fire correctly and the animationCompleted block gets called, but there is no visible animation.
Here is the code to get the iris wipe working:
- (void)irisWipe
{
    animationCompletionBlock theBlock;

    _resultsImage.hidden = FALSE; // Show the image view
    [_resultsImage setImage:[UIImage imageNamed:@"logoNoBoarder"]];
    [_resultsImage setBackgroundColor:[UIColor clearColor]];
    [_resultsImage setFrame:_imageView.bounds];

    // Create a shape layer that we will use as a mask for the waretoLogoLarge image view
    CAShapeLayer *maskLayer = [CAShapeLayer layer];
    CGFloat maskHeight = _resultsImage.layer.bounds.size.height;
    CGFloat maskWidth = _resultsImage.layer.bounds.size.width;

    CGPoint centerPoint;
    centerPoint = CGPointMake(maskWidth/2, maskHeight/2);

    // Make the radius of our arc large enough to reach into the corners of the image view.
    CGFloat radius = sqrtf(maskWidth * maskWidth + maskHeight * maskHeight)/2;
    // CGFloat radius = MIN(maskWidth, maskHeight)/2;

    // Don't fill the path, but stroke it in black.
    maskLayer.fillColor = [[UIColor clearColor] CGColor];
    maskLayer.strokeColor = [[UIColor blackColor] CGColor];
    maskLayer.lineWidth = radius; // Make the line thick enough to completely fill the circle we're drawing
    // maskLayer.lineWidth = 10;

    CGMutablePathRef arcPath = CGPathCreateMutable();

    // Move to the starting point of the arc so there is no initial line connecting to the arc
    CGPathMoveToPoint(arcPath, nil, centerPoint.x, centerPoint.y - radius/2);

    // Create an arc at 1/2 our circle radius, with a line thickness of the full circle radius
    CGPathAddArc(arcPath,
                 nil,
                 centerPoint.x,
                 centerPoint.y,
                 radius/2,
                 3*M_PI/2,
                 -M_PI/2,
                 NO);

    maskLayer.path = arcPath;
    maskLayer.strokeEnd = 1.0; // Start with the arc fully stroked
    CFRelease(arcPath);

    // Install the mask layer into our image view's layer.
    _resultsImage.layer.mask = maskLayer;

    // Set our mask layer's frame to the parent layer's bounds.
    _resultsImage.layer.mask.frame = _resultsImage.layer.bounds;

    // Create an animation that shrinks the stroke length back to zero.
    CABasicAnimation *swipe = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
    swipe.duration = 1;
    swipe.delegate = self;
    [swipe setValue:theBlock forKey:kAnimationCompletionBlock];
    swipe.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
    swipe.fillMode = kCAFillModeForwards;
    swipe.removedOnCompletion = NO;
    swipe.autoreverses = NO;
    swipe.toValue = [NSNumber numberWithFloat:0];

    // Set up a completion block that will be called once the animation is completed.
    theBlock = ^void(void)
    {
        NSLog(@"completed");
    };
    [swipe setValue:theBlock forKey:kAnimationCompletionBlock];

    // doingMaskAnimation = TRUE;
    [maskLayer addAnimation:swipe forKey:@"strokeEnd"];
}
Is there something in iOS7 I should be aware of when working with CAAnimations etc? OR is there a error in the code?
Note this code was sourced from: How do you achieve a "clock wipe"/ radial wipe effect in iOS?
I think the problem (or at least part of it) may be this line:
[_resultsImage setFrame:_imageView.bounds];
It should instead read:
[_resultsImage setBounds:_imageView.bounds];
If you set the frame to the bounds of the image view, you're going to move the image view to (0, 0) in its superview.
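A quick illustration of the difference (the frame values are hypothetical):

// frame is expressed in the superview's coordinate system; bounds in the view's own.
// Given _imageView.frame = {{20, 40}, {100, 100}}:
_resultsImage.frame  = _imageView.bounds; // moves _resultsImage to (0, 0) in its superview
_resultsImage.bounds = _imageView.bounds; // resizes to 100x100, leaves its position alone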
I'm also not clear what _imageView is, as opposed to _resultsImage.
I would step through your code in the debugger, looking at the frame rectangles being set for the image view and mask layer, and at all the other values being calculated.