OpenCV warpAffine with Objective-C iOS

I am currently working with warpAffine to rotate a UIImage by converting it to cv::Mat. However, when I rotate the Mat via warpAffine and convert it back to a UIImage, the result shows only a single color.
Below is my code:
- (void)testRotation {
    Mat clone;
    Mat sampleClone;
    UIImage* sampleImage = [UIImage imageNamed:@"affine_test"];
    UIImageToMat(sampleImage, sampleClone);
    UIImage* preRotateSampleClone = MatToUIImage(sampleClone);
    cv::warpAffine(sampleClone, clone,
                   cv::getRotationMatrix2D(cv::Point((sampleClone.cols) / 2.0, (sampleClone.rows) / 2.0), 90, 1),
                   sampleClone.size());
    UIImage* postRotateClone = MatToUIImage(clone);
}
Original image: (image omitted)
Result image: (image omitted)
Is there any issue with my code above? Thank you.
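One sanity check worth adding here (not from the thread): for an exact multiple of 90° you can bypass warpAffine entirely with cv::rotate (available in recent OpenCV releases). A minimal sketch, which also helps isolate whether the UIImage/Mat round trip itself is at fault:
// Sketch: rotate 90 degrees without warpAffine.
// If this also yields a single-color UIImage, the problem lies in the
// UIImage <-> Mat conversion rather than in the warp itself.
cv::Mat rotated;
cv::rotate(sampleClone, rotated, cv::ROTATE_90_CLOCKWISE);
UIImage* check = MatToUIImage(rotated);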

Related

Cropping UIImage Using VNRectangleObservation

I'm using the Vision framework to detect rectangular documents in a captured photo. Detecting and drawing a path around the document works perfectly. I then want to crop the image to just the detected document. The crop succeeds, but the coordinates don't line up: the cropped image contains only part of the detected document, and the rest is just the desk behind it. I'm using the following cropping code:
private UIImage CropImage(UIImage image, CGRect rect, float scale)
{
    var drawRect = new CGRect(rect.X, rect.Y, rect.Size.Width, rect.Size.Height);
    using (var cgImage = image.CGImage.WithImageInRect(drawRect))
    {
        var croppedImage = UIImage.FromImage(cgImage);
        return croppedImage;
    }
}
Using the following parameters:
image is the same UIImage that I successfully drew the rectangle path on.
rect is the VNRectangleObservation.BoundingBox. This is normalized, so I'm scaling it using image.Size; it's the same scaling I do when drawing the rectangle path.
scale is 1f, but I'm currently ignoring it.
The cropped image generally seems to be the right size, but it is shifted up and to the left, which cuts off the lower and right side of the document. Any help would be appreciated.
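Worth noting before the asker's eventual fix below: a common cause of offsets like this is the coordinate-system mismatch between Vision and Core Graphics. VNRectangleObservation.BoundingBox is normalized with a lower-left origin, while CGImage cropping expects pixel coordinates with a top-left origin, so the rect must be both scaled and flipped vertically. A hedged sketch in native (Objective-C) terms, since the Xamarin bindings map one-to-one; cgImage and observation are assumed to be in scope:
// Convert a normalized Vision bounding box (origin bottom-left)
// into a CGImage crop rect (origin top-left, in pixels).
CGFloat width = CGImageGetWidth(cgImage);
CGFloat height = CGImageGetHeight(cgImage);
CGRect box = observation.boundingBox; // normalized, lower-left origin
CGRect cropRect = CGRectMake(box.origin.x * width,
                             (1.0 - CGRectGetMaxY(box)) * height, // flip the y-axis
                             box.size.width * width,
                             box.size.height * height);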
For anyone else who finds this: the issue seemed to be the CGImage rotating when cropping, which caused the VNRectangleObservation coordinates to no longer line up. I used the article Tracking and Altering Images to get a working solution using CIFilter. The cropping code follows:
var ciFilter = CIFilter.FromName("CIPerspectiveCorrection");
if (ciFilter == null) continue;
var width = inputImage.Extent.Width;
var height = inputImage.Extent.Height;
var topLeft = new CGPoint(observation.TopLeft.X * width, observation.TopLeft.Y * height);
var topRight = new CGPoint(observation.TopRight.X * width, observation.TopRight.Y * height);
var bottomLeft = new CGPoint(observation.BottomLeft.X * width, observation.BottomLeft.Y * height);
var bottomRight = new CGPoint(observation.BottomRight.X * width, observation.BottomRight.Y * height);
ciFilter.SetValueForKey(new CIVector(topLeft), new NSString("inputTopLeft"));
ciFilter.SetValueForKey(new CIVector(topRight), new NSString("inputTopRight"));
ciFilter.SetValueForKey(new CIVector(bottomLeft), new NSString("inputBottomLeft"));
ciFilter.SetValueForKey(new CIVector(bottomRight), new NSString("inputBottomRight"));
var ciImage = inputImage.CreateByApplyingOrientation(CGImagePropertyOrientation.Up);
ciFilter.SetValueForKey(ciImage, CIFilterInputKey.Image);
var outputImage = ciFilter.OutputImage;
var uiImage = new UIImage(outputImage);
imageList.Add(uiImage);
imageList is a List<UIImage>, since I'm handling multiple detected rectangles.
observation is a single observation of type VNRectangleObservation.
The cropped image generally seems to be the right size, but it is shifted up and to the left which cuts off the lower and right side of the document.
From the Apple documentation for CGImageCreateWithImageInRect, there is a discussion of the cropped size:
CGImageCreateWithImageInRect performs the following tasks to create the subimage:
It calls the CGRectIntegral function to adjust the rect parameter to integral bounds.
It intersects the rect with a rectangle whose origin is (0,0) and size is equal to the size of the image specified by the image parameter.
It reads the pixels within the resulting rectangle, treating the first pixel within as the origin of the subimage.
If W and H are the width and height of image, respectively, then the point (0,0) corresponds to the first pixel of the image data. The point (W–1, 0) is the last pixel of the first row of the image data, while (0, H–1) is the first pixel of the last row of the image data and (W–1, H–1) is the last pixel of the last row of the image data.
You can then check this in a local project with an image (size: 1920 × 1080) as follows:
UIImageView imageView = new UIImageView(new CGRect(0, 400, UIScreen.MainScreen.Bounds.Size.Width, 300));
UIImage image = new UIImage("th.jpg");
imageView.Image = CropImage(image, new CGRect(0, 0, 1920, 1080), 1);
View.AddSubview(imageView);
The CropImage method:
private UIImage CropImage(UIImage image, CGRect rect, float scale)
{
    var drawRect = new CGRect(rect.X, rect.Y, rect.Size.Width, rect.Size.Height);
    using (var cgImage = image.CGImage.WithImageInRect(drawRect))
    {
        if (null != cgImage)
        {
            var croppedImage = UIImage.FromImage(cgImage);
            return croppedImage;
        }
        else
        {
            return image;
        }
    }
}
This will show the original size of the image:
Now you can modify the cropped size as follows:
UIImageView imageView = new UIImageView(new CGRect(0, 400, UIScreen.MainScreen.Bounds.Size.Width, 300));
UIImage image = new UIImage("th.jpg");
imageView.Image = CropImage(image, new CGRect(0, 0, 1920, 100), 1);
View.AddSubview(imageView);
Here I set x = 0, y = 0, meaning the crop starts from (0,0), with a width of 1920 and a height of 100; I'm just cropping a slice of the original image's height. The effect is as follows:
If you then modify x/y, the cropped area moves to another part of the image, as follows:
UIImageView imageView = new UIImageView(new CGRect(0, 400, UIScreen.MainScreen.Bounds.Size.Width, 300));
UIImage image = new UIImage("th.jpg");
imageView.Image = CropImage(image, new CGRect(0, 100, 1920, 100), 1);
View.AddSubview(imageView);
You will see that it differs from the second result:
So when cropping an image, you should clearly understand the drawRect passed to image.CGImage.WithImageInRect(drawRect).
Note from the docs:
Be sure to specify the subrectangle's coordinates relative to the original image's full size, even if the UIImageView shows only a scaled version.

Antialiased pixels from Swift code is converted to unwanted black pixels by OpenCV

I am trying to make some pixels transparent in Swift by enabling anti-aliasing and clearing the path's pixels to transparent. For further processing I send the UIImage to OpenCV via C++, which turns the edge of the path into black pixels.
I want to remove those unwanted black pixels. How can I remove the black pixels generated by OpenCV?
Even when OpenCV just converts the image to a Mat and back from Mat to UIImage, the same problem occurs.
Swift code:
guard let imageSize = image?.size else {
    return
}
UIGraphicsBeginImageContextWithOptions(imageSize, false, 1.0)
guard let context = UIGraphicsGetCurrentContext() else {
    return
}
context.setShouldAntialias(true)
context.setAllowsAntialiasing(true)
context.setShouldSubpixelQuantizeFonts(true)
context.interpolationQuality = .high
imgCanvas.image?.draw(in: CGRect(x: 0, y: 0, width: (imgCanvas.image?.size.width)!, height: (imgCanvas.image?.size.height)!))
bezeirPath = UIBezierPath()
bezeirPath.move(to: fromPoint)
bezeirPath.addLine(to: toPoint)
bezeirPath.lineWidth = (CGFloat(widthOfLine) * scaleX) / scrollView.zoomScale
bezeirPath.lineCapStyle = CGLineCap.round
bezeirPath.lineJoinStyle = CGLineJoin.round
bezeirPath.flatness = 0.0
bezeirPath.miterLimit = 0.0
bezeirPath.usesEvenOddFillRule = true
UIColor.white.setStroke()
bezeirPath.stroke(with: .clear, alpha: 0)
bezeirPath.close()
bezeirPath.fill()
UIColor.clear.set()
context.addPath(bezeirPath.cgPath)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
OpenCV code:
Mat source;
UIImageToMat(originalImage, source, true);
return MatToUIImage(source);
I have tried various ways to solve this issue, looking at different sources, but none worked. I have been trying to solve this for the past three days, so if anybody has even a clue related to this issue, it would be helpful!
I would supply the RGB image and the A mask as separate images to OpenCV. Draw your mask into a single-channel image:
guard let imageSize = image?.size else {
    return
}
UIGraphicsBeginImageContextWithOptions(imageSize, false, 1.0)
guard let context = UIGraphicsGetCurrentContext() else {
    return
}
context.setShouldAntialias(true)
context.setAllowsAntialiasing(true)
context.setShouldSubpixelQuantizeFonts(true)
context.interpolationQuality = .high
context.setFillColor(UIColor.black.cgColor)
context.addRect(CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
context.drawPath(using: .fill)
bezeirPath = UIBezierPath()
bezeirPath.move(to: fromPoint)
bezeirPath.addLine(to: toPoint)
bezeirPath.lineWidth = (CGFloat(widthOfLine) * scaleX) / scrollView.zoomScale
bezeirPath.lineCapStyle = CGLineCap.round
bezeirPath.lineJoinStyle = CGLineJoin.round
bezeirPath.flatness = 0.0
bezeirPath.miterLimit = 0.0
bezeirPath.usesEvenOddFillRule = true
UIColor.white.setStroke()
bezeirPath.stroke()
bezeirPath.close()
bezeirPath.fill()
let maskImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then in OpenCV, you can apply the A image to your RGB image.
Mat source, sourceMask;
UIImageToMat(image, source, true);
UIImageToMat(maskImage, sourceMask, true);
If your image isn't RGB, you can convert it:
cvtColor(source, source, CV_RGBA2RGB)
If your mask isn't single channel, you can convert it:
cvtColor(sourceMask, sourceMask, CV_RGBA2GRAY)
Then split the RGB image into channels:
Mat rgb[3];
split(source, rgb);
Then create RGBA image with the RGB channels and alpha channel:
Mat imgRGBA;
vector<Mat> channels = {rgb[0], rgb[1], rgb[2], sourceMask};
merge(channels, imgRGBA);
Since your mask was created with anti-aliasing, the image created above will also have anti-aliased alpha.
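From there the composited result can go straight back to UIKit; a one-line sketch using the MatToUIImage helper already used elsewhere in this thread:
// Convert the merged 4-channel Mat back to a UIImage, preserving alpha.
UIImage* result = MatToUIImage(imgRGBA);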
OpenCV can also help you find edges using the Canny edge detector. You can refer to this link for an implementation:
OpenCV's Canny Edge Detection in C++
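For reference, a minimal sketch of that approach (the 50/150 thresholds are illustrative and should be tuned; source is assumed to be an RGBA Mat as produced by UIImageToMat):
// Canny expects a single-channel image, so convert and denoise first.
cv::Mat gray, edges;
cv::cvtColor(source, gray, cv::COLOR_RGBA2GRAY);
cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5); // suppress noise before edge detection
cv::Canny(gray, edges, 50, 150); // low/high hysteresis thresholds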
Furthermore, I would encourage you to do a bit of research on your own before asking questions here.

Improving Tesseract OCR Quality Fails

I am currently using Tesseract to scan receipts. The quality wasn't good, so I read this article on how to improve it: https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality#noise-removal. I implemented resizing, deskewing (aligning), and Gaussian blur, but none of them seem to have a positive effect on the OCR accuracy except the deskewing. Here is my code for resizing and Gaussian blur. Am I doing anything wrong? If not, what else can I do to help?
Code:
+ (UIImage *)prepareImage:(UIImage *)image {
    // converts UIImage to Mat format
    Mat im = cvMatWithImage(image);
    // grayscale image
    Mat gray;
    cvtColor(im, gray, CV_BGR2GRAY);
    // deskews text
    // did not provide code because I know it works
    Mat preprocessed = preprocess2(gray);
    double skew = hough_transform(preprocessed, im);
    Mat rotated = rot(im, skew * CV_PI / 180);
    // resize image
    Mat scaledImage = scaleImage(rotated, 2);
    // Gaussian blur
    GaussianBlur(scaledImage, scaledImage, cv::Size(1, 1), 0, 0);
    return UIImageFromCVMat(scaledImage);
}
// Organization -> Resizing
Mat scaleImage(Mat mat, double factor) {
    Mat resizedMat;
    double width = mat.cols;
    double height = mat.rows;
    double aspectRatio = width / height;
    resize(mat, resizedMat, cv::Size(width * factor * aspectRatio, height * factor * aspectRatio));
    return resizedMat;
}
Receipt: (image omitted)
If you read the Tesseract documentation you will see that the Tesseract engine works best with text in a single line in a square. Passing it the whole receipt image reduces the engine's accuracy. What you need to do is use Core Image's text detector (a CIDetector of type CIDetectorTypeText, whose results are CITextFeature objects) to split the receipt into multiple blocks of text. Only then can you pass those images to Tesseract for processing.
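A hedged sketch of that detection step (Objective-C; receipt is an assumed UIImage holding the scanned receipt):
// Detect text regions with Core Image, then crop each region and
// pass the sub-images to Tesseract individually.
CIImage *ciImage = [[CIImage alloc] initWithImage:receipt];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                          context:nil
                                          options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
for (CITextFeature *feature in [detector featuresInImage:ciImage]) {
    // feature.bounds is in Core Image coordinates (origin bottom-left).
    CIImage *block = [ciImage imageByCroppingToRect:feature.bounds];
    // ... convert block to a UIImage/Mat and run OCR on it
}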

SceneKit, flip direction of SCNMaterial

Extremely new to SceneKit, so just looking for help here:
I have an SCNSphere with a camera at the center of it
I create an SCNMaterial, doubleSided, and assign it to the sphere
Since the camera is at the center, the image looks flipped vertically, which totally messes things up when there is text inside.
So how can I flip the material, or the image (although later it will be frames from a video)? Any other suggestion is welcome.
This solution, by the way, is failing on me: normalImage is applied as a material (but the image is flipped when looking from inside the sphere), while assigning flippedImage results in no material whatsoever (white screen).
let normalImage = UIImage(named: "text2.png")
let ciimage = CIImage(CGImage: normalImage!.CGImage!)
let flippeCIImage = ciimage.imageByApplyingTransform(CGAffineTransformMakeScale(-1, 1))
let flippedImage = UIImage(CIImage: flippeCIImage, scale: 1.0, orientation: .Left)
sceneMaterial.diffuse.contents = flippedImage
sceneMaterial.specular.contents = UIColor.whiteColor()
sceneMaterial.doubleSided = true
sceneMaterial.shininess = 0.5
Instead of scaling the node (which may break your lighting), you can flip the mapping using SCNMaterialProperty's contentsTransform property:
material.diffuse.contentsTransform = SCNMatrix4MakeScale(1,-1,1)
material.diffuse.wrapT = SCNWrapModeRepeat // or translate contentsTransform by (0,1,0)
To flip the image horizontally:
material.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(-1, 1, 1), 1, 0, 0)
to flip it vertically:
material.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)
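Putting that together for the sphere from the question, a sketch (Objective-C; sphere is the assumed SCNSphere geometry):
// Flip the diffuse map vertically so it reads correctly from a camera
// placed at the center of the sphere.
SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = [UIImage imageNamed:@"text2.png"];
material.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0);
material.diffuse.wrapT = SCNWrapModeRepeat;
material.doubleSided = YES;
sphere.firstMaterial = material;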
This worked for me: flipping the normal of the geometry by scaling the node it's attached to:
sphereNode.scale = SCNVector3Make(-1, 1, 1)
The accepted answer will not work, guaranteed. The following is how to flip an image that is assigned as the value of the [material].diffuse.contents property; it assumes two cubes in the scene, side by side:
// Define the matrices that perform the two orientation variants
SCNMatrix4 flip_horizontal;
flip_horizontal = SCNMatrix4Translate(SCNMatrix4MakeScale(-1, 1, 1), 1, 0, 0);
SCNMatrix4 flip_vertical;
flip_vertical = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0);
// Create the material objects for each cube, and assign an image as the contents
self.source_material = [SCNMaterial material];
self.source_material.diffuse.contents = [UIImage imageNamed:@"towelface.png"];
self.mirror_material = [SCNMaterial material];
self.mirror_material.diffuse.contents = self.source_material.diffuse.contents;
Pick only one of the following sections (as defined by the comments):
// PortraitOpposingDown
[self.mirror_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_vertical)];
[self.source_material.diffuse setContentsTransform:SCNMatrix4Mult(self.mirror_material.diffuse.contentsTransform, flip_horizontal)];
// PortraitFacingDown
[self.source_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_vertical)];
[self.mirror_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_horizontal)];
// PortraitOpposingUp
[self.source_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_horizontal)];
// PortraitFacingUp
[self.mirror_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_horizontal)];
Insert the material at the desired index:
[cube[0] insertMaterial:self.source_material atIndex:0];
[cube[1] insertMaterial:self.mirror_material atIndex:0];
By the way, to insert a new image (such as for live video), simply replace the material at the index specified by the insertMaterial:atIndex: method; do not reorient the contentsTransform. The following code shows how; it assumes that your video camera is configured to output a sample buffer for each frame it captures to an AVCaptureVideoDataOutputSampleBufferDelegate, and that you have the requisite code (CreateCGImageFromCVPixelBuffer) to create a CGImage from a CVPixelBuffer:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CGImageRef cgImage;
    CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.source_material.diffuse.contents = image;
        [cube[0] replaceMaterialAtIndex:0 withMaterial:self.source_material];
        self.mirror_material.diffuse.contents = self.source_material.diffuse.contents;
        [cube[1] replaceMaterialAtIndex:0 withMaterial:self.mirror_material];
    });
    CGImageRelease(cgImage);
}
If you'd like actual code instead of my assumptions about the code on your end, please ask. Here's a short video showing this code in action, but with live video instead of a static image.

How do I rotate an image from a file with Xamarin on iOS

I have been trying to rotate an image for a couple of days now, but the best I get is still a black image.
I suspect it may have something to do with the point I'm rotating around, but I'm not sure; I say that because I tried the whole solution proposed here, translated into Xamarin terms, and it didn't work.
Here's my code:
public void Rotate(string sourceFile, bool isCCW)
{
    using (UIImage sourceImage = UIImage.FromFile(sourceFile))
    {
        var sourceSize = sourceImage.Size;
        UIGraphics.BeginImageContextWithOptions(new CGSize(sourceSize.Height, sourceSize.Width), true, 1.0f);
        CGContext bitmap = UIGraphics.GetCurrentContext();
        // rotating before DrawImage didn't work, just got the image cropped inside a rotated frame
        // bitmap.RotateCTM((float)(isCCW ? Math.PI / 2 : -Math.PI / 2));
        // swapped Width and Height because the image is rotated
        bitmap.DrawImage(new CGRect(0, 0, sourceSize.Height, sourceSize.Width), sourceImage.CGImage);
        // rotating after causes the resulting image to be just black
        bitmap.RotateCTM((float)(isCCW ? Math.PI / 2 : -Math.PI / 2));
        var resultImage = UIGraphics.GetImageFromCurrentImageContext();
        UIGraphics.EndImageContext();
        if (sourceFile.ToLower().EndsWith("png"))
            resultImage.AsPNG().Save(sourceFile, true);
        else
            resultImage.AsJPEG().Save(sourceFile, true);
    }
}
It looks like you want to take a UIImage and rotate it either 90 degrees clockwise or 90 degrees counter-clockwise. You can actually do this with just a few lines of code:
public void RotateImage(ref UIImage imageToRotate, bool isCCW)
{
    var imageRotation = isCCW ? UIImageOrientation.Right : UIImageOrientation.Left;
    imageToRotate = UIImage.FromImage(imageToRotate.CGImage, imageToRotate.CurrentScale, imageRotation);
}
We use UIImage.FromImage(), which accepts 3 parameters. The first is a CGImage, which we can get from the UIImage you're trying to rotate. The 2nd parameter is the scale of the image. The 3rd parameter is the important one: we can rotate using UIImageOrientation.Right (90 degrees CCW) or UIImageOrientation.Left (90 degrees CW). You can check out the Apple documentation for the meaning of the other UIImageOrientation constants:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImage_Class/index.html#//apple_ref/c/tdef/UIImageOrientation
UPDATE:
Note that the code above only changes the EXIF orientation flag, so calling it twice doesn't rotate the image 180 degrees.
Add this code to make the result cumulative:
// w and h are the pixel dimensions of the underlying (un-rotated) bitmap
var w = imageToRotate.CGImage.Width;
var h = imageToRotate.CGImage.Height;
UIGraphics.BeginImageContextWithOptions(new CGSize((float)h, (float)w), true, 1.0f);
imageToRotate.Draw(new CGRect(0, 0, (float)h, (float)w));
var resultImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
imageToRotate = resultImage;
