Output wrong eye position using CIFaceFeature and SquareCam - iOS

I am trying to write code that puts a sticker on the eyes, based on SquareCam.
It detects faces well, but when I try to draw my image on the left eye, it always comes out at the wrong position, even though I compute it the same way as the face rect.
Here are the results on my phone, and here is the code:
for ff in features as! [CIFaceFeature] {
    // find the correct position for the square layer within the previewLayer
    // the feature box originates in the bottom left of the video frame.
    // (Bottom right if mirroring is turned on)
    var faceRect = ff.bounds

    // swap x and y to match the preview orientation
    let temp = faceRect.origin.x
    faceRect.origin.x = faceRect.origin.y
    faceRect.origin.y = temp

    // scale coordinates so they fit in the preview box, which may be scaled
    let widthScaleBy = previewBox.size.width / clap.size.height
    let heightScaleBy = previewBox.size.height / clap.size.width
    faceRect.size.width *= widthScaleBy
    faceRect.size.height *= heightScaleBy
    faceRect.origin.x *= widthScaleBy
    faceRect.origin.y *= heightScaleBy

    var eyeRect = CGRect()
    eyeRect.origin.x = ff.leftEyePosition.y
    eyeRect.origin.y = ff.leftEyePosition.x
    eyeRect.origin.x *= widthScaleBy
    eyeRect.origin.y *= heightScaleBy
    eyeRect.size.width = faceRect.size.width * 0.15
    eyeRect.size.height = eyeRect.size.width

    if isMirrored {
        faceRect = faceRect.offsetBy(dx: previewBox.origin.x + previewBox.size.width - faceRect.size.width - (faceRect.origin.x * 2), dy: previewBox.origin.y)
        eyeRect = eyeRect.offsetBy(dx: previewBox.origin.x + previewBox.size.width - eyeRect.size.width - (eyeRect.origin.x * 2), dy: previewBox.origin.y)
    } else {
        faceRect = faceRect.offsetBy(dx: previewBox.origin.x, dy: previewBox.origin.y)
        eyeRect = eyeRect.offsetBy(dx: previewBox.origin.x, dy: previewBox.origin.y)
    }
    print(eyeRect)
    print(faceRect)

    var featureLayer: CALayer? = nil
    var eyeLayer: CALayer? = nil

    // re-use an existing layer if possible
    while featureLayer == nil && (currentSublayer < sublayersCount) {
        let currentLayer = sublayers[currentSublayer]
        currentSublayer += 1
        if currentLayer.name == "FaceLayer" {
            featureLayer = currentLayer
            currentLayer.isHidden = false
            eyeLayer = featureLayer?.sublayers?[0]
            //eyeLayer?.isHidden = false
        }
    }

    // create a new one if necessary
    if featureLayer == nil {
        featureLayer = CALayer()
        featureLayer!.contents = square.cgImage
        featureLayer!.name = "FaceLayer"
        previewLayer?.addSublayer(featureLayer!)

        eyeLayer = CALayer()
        eyeLayer!.contents = eyes.cgImage
        eyeLayer!.name = "EyeLayer"
        featureLayer?.addSublayer(eyeLayer!)
    }
    featureLayer!.frame = faceRect
    eyeLayer!.frame = eyeRect
}

(0, 0) is at the bottom left for the eye positions, so you have to flip the y coordinate (eyePosition.y = image.size.height - eyePosition.y) to get it into the same coordinate system as the layer frames.
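A minimal sketch of that fix applied to the question's code, reusing its variables (clap, widthScaleBy, heightScaleBy, faceRect); the exact axis that needs flipping depends on your video orientation:

var eyePosition = ff.leftEyePosition
// flip y first: CIFaceFeature positions originate in the bottom left
eyePosition.y = clap.size.height - eyePosition.y

var eyeRect = CGRect()
// then apply the same axis swap and scaling used for faceRect
eyeRect.origin.x = eyePosition.y * widthScaleBy
eyeRect.origin.y = eyePosition.x * heightScaleBy
eyeRect.size.width = faceRect.size.width * 0.15
eyeRect.size.height = eyeRect.size.width

// center the sticker on the eye instead of anchoring its corner there
eyeRect.origin.x -= eyeRect.size.width / 2
eyeRect.origin.y -= eyeRect.size.height / 2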

Related

Metal Core Image kernel with sampler

I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully:
extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
    float2 coord1 = t1.coord();
    float2 coord2 = t2.coord();
    float4 innerRect = t2.extent(); // (x, y, width, height)
    float minX = innerRect.x + time * innerRect.z;
    float minY = innerRect.y + time * innerRect.w;
    float cropWidth = (1 - time) * innerRect.z;   // width is .z
    float cropHeight = (1 - time) * innerRect.w;  // height is .w
    float4 s1 = t1.sample(coord1);
    float4 s2 = t2.sample(coord2);
    if (coord1.x > minX && coord1.x < minX + cropWidth && coord1.y > minY && coord1.y <= minY + cropHeight) {
        return s1;
    } else {
        return s2;
    }
}
And it crashes on initialization.
class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}
It crashes in the try line with the following error:
Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError
If I replace the kernel code with the following, it works like a charm:
extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
    return mix(s1, s2, time);
}
So there are no obvious errors in the code, such as passing an incorrect function name.
For your use case, you actually can use a CIColorKernel. You just have to pass the extent of your render destination to the kernel as well; then you don't need the sampler to access it.
The kernel would look like this:
extern "C" float4 wipeLinear(coreimage::sample_t t1, coreimage::sample_t t2, float4 destinationExtent, float time, coreimage::destination destination) {
    float minX = destinationExtent.x + time * destinationExtent.z;
    float minY = destinationExtent.y + time * destinationExtent.w;
    float cropWidth = (1.0 - time) * destinationExtent.z;   // width is .z
    float cropHeight = (1.0 - time) * destinationExtent.w;  // height is .w
    float2 destCoord = destination.coord();
    if (destCoord.x > minX && destCoord.x < minX + cropWidth && destCoord.y > minY && destCoord.y <= minY + cropHeight) {
        return t1;
    } else {
        return t2;
    }
}
And you call it like this:
let destinationExtent = CIVector(cgRect: backgroundImage.extent)
return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, destinationExtent, inputTime])
Note that the last destination parameter in the kernel is passed automatically by Core Image. You don't need to pass it with the arguments.
Yes, you can't use samplers in CIColorKernel or CIBlendKernel. Those kernels are optimized for the use case where you have a 1:1 mapping from input pixel to output pixel. This allows Core Image to execute several of these kernels in one command buffer, since they don't require any intermediate buffer writes.
A sampler would allow you to sample the input at arbitrary coordinates, which is not allowed in this case.
You can simply use a CIKernel instead. It's meant to be used when you need to sample the input more freely.
To initialize the kernel, you need to adapt the code like this:
static var kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}()
When calling the kernel, you now need to also provide a ROI callback, like this:
let roiCallback: CIKernelROICallback = { (index, rect) -> CGRect in
    return rect // you need the same region from the input as for the output
}

// or even shorter
let roiCallback: CIKernelROICallback = { $1 }

return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, roiCallback: roiCallback, arguments: [backgroundImage, foregroundImage, inputTime])
Bonus answer:
For this blending effect, you actually don't need any kernel at all. You can achieve all that with simple cropping and compositing:
class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: CGFloat = 0.0

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }

        // crop the foreground based on time
        var foregroundCrop = foregroundImage.extent
        foregroundCrop.size.width *= inputTime
        foregroundCrop.size.height *= inputTime

        return foregroundImage.cropped(to: foregroundCrop).composited(over: backgroundImage)
    }
}
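For completeness, a quick usage sketch of that filter (bg and fg stand in for whatever CIImage inputs you have):

let wipe = CIWipeRenderer()
wipe.backgroundImage = bg
wipe.foregroundImage = fg
wipe.inputTime = 0.5 // halfway through the transition
let result = wipe.outputImage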

Metal Depth Clamping

I want to disable clamping between far and close points. I already tried modifying the sampler to disable clamp-to-edge (constexpr sampler s(address::clamp_to_zero)), and it worked as expected for the edges, but coordinates between the most far and most close points are still clamped.
Current unwanted result:
https://gph.is/g/ZyWjkzW
Expected result:
https://i.imgur.com/GjvwgyU.png
I also tried encoder.setDepthClipMode(.clip), but it didn't work.
Some portions of the code:
// pipeline descriptor
let descriptor = MTLRenderPipelineDescriptor()
descriptor.colorAttachments[0].pixelFormat = .rgba16Float
descriptor.colorAttachments[1].pixelFormat = .rgba16Float
descriptor.depthAttachmentPixelFormat = .invalid

// render pass descriptor
let descriptor = MTLRenderPassDescriptor()
descriptor.colorAttachments[0].texture = outputColorTexture
descriptor.colorAttachments[0].clearColor = clearColor
descriptor.colorAttachments[0].loadAction = .load
descriptor.colorAttachments[0].storeAction = .store
descriptor.colorAttachments[1].texture = outputDepthTexture
descriptor.colorAttachments[1].clearColor = clearColor
descriptor.colorAttachments[1].loadAction = .load
descriptor.colorAttachments[1].storeAction = .store
descriptor.renderTargetWidth = Int(drawableSize.width)
descriptor.renderTargetHeight = Int(drawableSize.height)

// encoding
guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { throw RenderingError.makeDescriptorFailed }
encoder.setDepthClipMode(.clip)
encoder.setRenderPipelineState(pipelineState)
encoder.setFragmentTexture(inputColorTexture, index: 0)
encoder.setFragmentTexture(inputDepthTexture, index: 1)
encoder.setFragmentBuffer(uniformsBuffer, offset: 0, index: 0)
encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
encoder.endEncoding()

Could not cast value of type '__NSArray0' to 'CIRectangleFeature'

I have been using some code in Objective-C that runs a CIDetector on a CIImage captured from AVCaptureStillImageOutput. My goal is to translate it to Swift 3. I have it all translated, but I am getting this could-not-cast error in my CIRectangleFeature methods. I've been working on this for days and can't get it right, so I am here for help; I'm sure it's something simple that I am overlooking.
This is the code in Objective-C:
- (CIRectangleFeature *)_biggestRectangleInRectangles:(NSArray *)rectangles
{
    if (!rectangles.count) return nil;

    float halfPerimiterValue = 0;
    CIRectangleFeature *biggestRectangle = rectangles.firstObject;

    for (CIRectangleFeature *rect in rectangles)
    {
        CGPoint p1 = rect.topLeft;
        CGPoint p2 = rect.topRight;
        CGFloat width = hypotf(p1.x - p2.x, p1.y - p2.y);

        CGPoint p3 = rect.topLeft;
        CGPoint p4 = rect.bottomLeft;
        CGFloat height = hypotf(p3.x - p4.x, p3.y - p4.y);

        CGFloat currentHalfPerimiterValue = height + width;
        if (halfPerimiterValue < currentHalfPerimiterValue)
        {
            halfPerimiterValue = currentHalfPerimiterValue;
            biggestRectangle = rect;
        }
    }
    return biggestRectangle;
}
This function is called from another function; here it is in Objective-C:
- (CIRectangleFeature *)biggestRectangleInRectangles:(NSArray *)rectangles
{
    CIRectangleFeature *rectangleFeature = [self _biggestRectangleInRectangles:rectangles];
Now this function is called from within -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection when an if-statement returns true, like so:
if (self.isBorderDetectionEnabled)
{
    if (_borderDetectFrame)
    {
        _borderDetectLastRectangleFeature = [self biggestRectangleInRectangles:[[self highAccuracyRectangleDetector] featuresInImage:image]];
        _borderDetectFrame = NO;
    }
It is also called from another method that captures the images and saves them, in basically the same manner.
Now I have it translated to Swift 3 like this:
func bigRectangle(rectangles: [Any]) -> CIRectangleFeature {
    var halfPerimiterValue: Float = 0
    var biggestRectangles: CIRectangleFeature? = rectangles.first as! CIRectangleFeature? // This is the line causing the casting error

    for rect: CIRectangleFeature in rectangles as! [CIRectangleFeature] {
        let p1: CGPoint = rect.topLeft
        let p2: CGPoint = rect.topRight
        let width: CGFloat = CGFloat(hypotf(Float(p1.x) - Float(p2.x), Float(p1.y) - Float(p2.y)))

        let p3: CGPoint = rect.topLeft
        let p4: CGPoint = rect.bottomLeft
        let height: CGFloat = CGFloat(hypotf(Float(p3.x) - Float(p4.x), Float(p3.y) - Float(p4.y)))

        let currentHalfPerimiterValue: CGFloat = height + width
        if halfPerimiterValue < Float(currentHalfPerimiterValue) {
            halfPerimiterValue = Float(currentHalfPerimiterValue)
            biggestRectangles = rect
        }
    }
    return biggestRectangles!
}
I am calling it in Swift 3 basically the same way as in Objective-C, like this:
func biggestRectangle(rectangles: [Any]) -> CIRectangleFeature {
    let rectangleFeature: CIRectangleFeature? = self.bigRectangle(rectangles: rectangles)
That function is called the same way as in Objective-C, like this:
if self.isEnableBorderDetection == true {
    if self.borderDetectFrames == true {
        self.borderDetectLastRectangleFeature = self.biggestRectangle(rectangles: [self.highAccuracyRectangleDetector().features(in: image)])
        self.borderDetectFrames = false
    }
This runs from within func captureOutput(_ captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer?, from connection: AVCaptureConnection) {}.
I have tried changing the array type from [Any] to [CIRectangleFeature] in both functions, but then I have problems with CIFeature being an unrelated type.
Hopefully someone can have a look at this and point me in the right direction. Thanks in advance for any help.
UPD
You are double-wrapping the features in an array:
let rectangleFeature: CIRectangleFeature? = self.biggestRectangle(rectangles: [self.highAccuracyRectangleDetector().features(in: enhancedImage!)])
So after trying some more stuff, I finally figured it out with the help of Bimawa. These are the changes I needed to make:
if self.isEnableBorderDetection == true {
    if self.borderDetectFrames == true {
        let features = self.highAccuracyRectangleDetector().features(in: image)
        self.borderDetectLastRectangleFeature = self.biggestRectangle(rectangles: features)
Basically, I needed to change the way I was putting the CIFeatures into an array to be used by the CIRectangleFeature functions.
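One more thing worth guarding against: the Objective-C version returns nil for an empty array, while the Swift translation force-unwraps. A minimal sketch of a safer variant that returns an optional and avoids the forced casts entirely (features(in:) returns [CIFeature], so the parameter can be typed that way directly; same half-perimeter logic as above):

func bigRectangle(rectangles: [CIFeature]) -> CIRectangleFeature? {
    var biggest: CIRectangleFeature?
    var biggestHalfPerimeter: CGFloat = 0
    // `for case` keeps only the rectangle features, no forced cast needed
    for case let rect as CIRectangleFeature in rectangles {
        let width = hypot(rect.topLeft.x - rect.topRight.x, rect.topLeft.y - rect.topRight.y)
        let height = hypot(rect.topLeft.x - rect.bottomLeft.x, rect.topLeft.y - rect.bottomLeft.y)
        if width + height > biggestHalfPerimeter {
            biggestHalfPerimeter = width + height
            biggest = rect
        }
    }
    return biggest
}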

Swift - Increase speed of objects over time

I'm looking for a way to increase the pace of my game the longer you play it. I would like to achieve this by increasing the frequency of generated obstacles, either after a certain time (e.g. every 30 seconds) or, preferably, after every 10 objects (trees) have been generated, so the longer you play, the harder it gets.
This is my current setup. I use repeatActionForever; how could I change this to something like repeatAction10Times, with a different delay variable for each loop?
//in didMoveToView
treeTexture1 = SKTexture(imageNamed: "tree")
treeTexture1.filteringMode = SKTextureFilteringMode.Nearest

var distanceToMove = CGFloat(self.frame.size.width + 0.1)
var moveTrees = SKAction.moveByX(-distanceToMove, y: 0, duration: NSTimeInterval(0.006 * distanceToMove))
var removeTrees = SKAction.removeFromParent()
moveAndRemoveTrees = SKAction.sequence([moveTrees, removeTrees])

var spawn = SKAction.runBlock({ () in self.spawnTrees() })
//this delay is what I would like to alter for each loop
var delay = SKAction.waitForDuration(NSTimeInterval(1.2))
var spawnThenDelay = SKAction.sequence([spawn, delay])
var spawnThenDelayForever = SKAction.repeatActionForever(spawnThenDelay)
self.runAction(spawnThenDelayForever)
func spawnTrees() {
    var tree = SKNode()
    tree.position = CGPointMake(self.frame.size.width + treeTexture1.size().width * 2, 0)
    tree.zPosition = -10

    var height = UInt32(self.frame.size.height / 1)
    var height_max = UInt32(220)
    var height_min = UInt32(100)
    var y = arc4random_uniform(height_max - height_min + 1) + height_min

    var tree1 = SKSpriteNode(texture: treeTexture1)
    tree1.position = CGPointMake(0.0, CGFloat(y))
    tree1.physicsBody = SKPhysicsBody(rectangleOfSize: tree1.size)
    tree1.physicsBody?.dynamic = false
    tree1.physicsBody?.categoryBitMask = treeCategory
    tree1.physicsBody?.collisionBitMask = 0
    tree1.physicsBody?.contactTestBitMask = 0
    tree.addChild(tree1)

    tree.runAction(moveAndRemoveTrees)
    trees.addChild(tree)
}
You should try using the action's speed property.
For example: instead of running the action spawnThenDelay ten times, then running some code and repeating, try making a counter. Create a global variable at the very top of the code called counter, or whatever you want to call it. In spawnTrees(), change the code to this:
func spawnTrees() {
    var tree = SKNode()
    tree.position = CGPointMake(self.frame.size.width + treeTexture1.size().width * 2, 0)
    tree.zPosition = -10
    counter++
    ...
}
And then in update(), check whether counter has reached 10:
if counter == 10 {
    self.actionForKey("spawnThenDelayForever")?.speed += 10.0 // Or some integer/float like that
    counter = 0
}
Now, what this will do is run the code inside that if-statement once for every 10 spawns. But to do this, you'll have to update your call that runs spawnThenDelayForever to add a key to reference it with:
self.runAction(spawnThenDelayForever, withKey: "spawnThenDelayForever")
Let me know if there are any syntactical errors in what I gave you, or if it doesn't work quite right.
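An alternative sketch, if you would rather shorten the delay than raise the speed: rebuild the repeating action with a smaller wait every 10 trees. This assumes a stored property such as var spawnDelay = 1.2; the 0.9 decay factor and 0.4 floor are arbitrary choices:

if counter == 10 {
    counter = 0
    // shrink the delay by 10%, but never below a floor
    spawnDelay = max(0.4, spawnDelay * 0.9)
    self.removeActionForKey("spawnThenDelayForever")
    let spawn = SKAction.runBlock({ self.spawnTrees() })
    let delay = SKAction.waitForDuration(NSTimeInterval(spawnDelay))
    let spawnThenDelay = SKAction.sequence([spawn, delay])
    self.runAction(SKAction.repeatActionForever(spawnThenDelay), withKey: "spawnThenDelayForever")
}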

MonoTouch: Changing the Hue of an image, not just Saturation

I have the following MonoTouch code, which can change the saturation, but I am trying to also change the hue:
float hue = 0;
float saturation = 1;

if (colorCtrls == null)
    colorCtrls = new CIColorControls() {
        Image = CIImage.FromCGImage(originalImage.CGImage)
    };
else
    colorCtrls.Image = CIImage.FromCGImage(originalImage.CGImage);

colorCtrls.Saturation = saturation;
var output = colorCtrls.OutputImage;
var context = CIContext.FromOptions(null);
var result = context.CreateCGImage(output, output.Extent);
return UIImage.FromImage(result);
It's part of a different filter, so you'll need to use CIHueAdjust instead of CIColorControls to control the hue.
Here's what I ended up doing to add Hue:
var hueAdjust = new CIHueAdjust() {
    Image = CIImage.FromCGImage(originalImage.CGImage),
    Angle = hue // Default is 0
};
var output = hueAdjust.OutputImage;
var context = CIContext.FromOptions(null);
var cgimage = context.CreateCGImage(output, output.Extent);
return UIImage.FromImage(cgimage);
However, this does not work on Retina devices; the image returned is scaled incorrectly.
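The likely cause is that the image's scale is lost when the UIImage is created from a raw CGImage. A sketch of the fix, shown in Swift terms since the MonoTouch binding mirrors this UIKit initializer; pass the original scale and orientation through:

// preserve scale and orientation when wrapping the CGImage,
// otherwise a @2x image comes back at twice its point size
let result = UIImage(cgImage: cgimage,
                     scale: originalImage.scale,
                     orientation: originalImage.imageOrientation)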
