How to make a saturation value slider with OpenCV in Xcode - iOS

I want to make a slider that can change the saturation of the image in an image view.
I'm currently using OpenCV. I found some code on the web and tried it. It works, but in a slightly strange way: there is a white cup in the image, but its color runs through the whole rainbow regardless of the slider value (unless the image is made totally grayscale).
- (IBAction)stSlider:(id)sender {
    float value = stSlider.value;
    UIImage *image = [UIImage imageNamed:@"sushi.jpg"];
    cv::Mat mat = [self cvMatFromUIImage:image];
    cv::cvtColor(mat, mat, CV_RGB2HSV);
    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            int idx = 1;
            mat.at<cv::Vec3b>(i,j)[idx] = value;
        }
    }
    cv::cvtColor(mat, mat, CV_HSV2RGB);
    imageView.image = [self UIImageFromCVMat:mat];
}
This is the code I used.
Please tell me which part I have to change to make it work right.
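For what it's worth, the rainbow effect is consistent with overwriting the saturation channel: assigning a fixed S to every pixel forces full color onto near-gray pixels such as the white cup, whose hue is essentially noise. Below is a hedged sketch of one possible fix, scaling S rather than overwriting it, assuming the slider runs from 0.0 to 2.0 and that cvMatFromUIImage returns a 3-channel RGB mat as the code above implies:
- (IBAction)stSlider:(id)sender {
    // Assumption: stSlider.value ranges 0.0 to 2.0 (1.0 = unchanged)
    float factor = stSlider.value;
    UIImage *image = [UIImage imageNamed:@"sushi.jpg"];
    cv::Mat mat = [self cvMatFromUIImage:image];
    cv::cvtColor(mat, mat, CV_RGB2HSV);
    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            // Scale the existing saturation instead of replacing it, so
            // near-gray pixels stay near-gray at any slider position.
            uchar s = mat.at<cv::Vec3b>(i, j)[1];
            mat.at<cv::Vec3b>(i, j)[1] = cv::saturate_cast<uchar>(s * factor);
        }
    }
    cv::cvtColor(mat, mat, CV_HSV2RGB);
    imageView.image = [self UIImageFromCVMat:mat];
}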

Related

How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to work out which pixel of the UIImageView was tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:testImageView];
    NSLog(@"%f is X pixel num, %f is Y pixel num ; %f is width of imageview",
          (touchPoint.x/testImageView.bounds.size.width)*1000,
          (touchPoint.y/testImageView.bounds.size.height)*1000,
          testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, the Stack Overflow posts I have found are either outdated or have answers that don't work. Skilled coders may be able to help me decipher the older posts to make something that works, or to produce a simple fix using my above code for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, the code works perfectly when I run his example GitHub project on my physical device. However, when I run the same code in my own app, the image is replaced with white space and I get the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as testing it on my phone demonstrated, yet the same code produces these issues in my own project. I suspect they are all caused by one or two simple central issues. How can I solve these errors?
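One hedged guess, based on the first error: the PNG added to the project may be saved at 16 bits per component, which the 8-bit bitmap context in the answer below cannot ingest. Redrawing the image once through a standard UIKit context normalizes it to 8 bits per component; a minimal sketch (the helper name is illustrative):
// Hypothetical helper: redraws an image into a standard 8-bits-per-component context.
UIImage *normalizedImage(UIImage *image) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:(CGRect){CGPointZero, image.size}];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}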
You'll want to break this problem up into multiple steps:
1. Get the coordinates of the touched point in the image coordinate system.
2. Get the x and y position of the pixel to change.
3. Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end
@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {

    UIViewContentMode contentMode = self.contentMode;

    // failure conditions. If any of these are met – return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }

    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;

    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;

    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {

        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth : rHeight;

        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;

        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;

        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {

    if (!imageView.image) {
        return;
    }

    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};

    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)
@end
@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {

    // components of replacement color – in a 0-255 UInt8 format (fairly standard bitmap format)
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);

    // raw image reference
    CGImageRef rawImage = self.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0]; // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a

    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);

    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);

    return returnImage;
}

@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 0-255 UInt8 format. Next, it creates a new bitmap context with the given attributes of your input image.
The important bit of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Finished result: (demonstration GIF not reproduced here; see the full project below.)
Full Project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];

CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Receiving Memory Warning in iPad after drawing lines on large size image with CGContextRef

I am drawing lines on a UIImage using a CGContextRef. My images have a very large resolution, around 3000x4500. I don't get a memory warning if I draw a single line, but drawing more than one line triggers memory warnings and then my app crashes. I tried to release the CGContextRef object but got an error. My code:
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.image drawAtPoint:CGPointMake(0, 0)];
context2 = UIGraphicsGetCurrentContext();

for (int i = 0; i < kmtaObject.iTotalSize; i++)
{
    kmtaGroup = &kmtaObject.KMTA_GROUP_OBJ[i];
    //NSLog(@"Group # = %d", i);

    for (int j = 0; j < kmtaGroup->TotalLines; j++)
    {
        lineObject = &kmtaGroup->Line_INFO_OBJ[j];
        // NSLog(@"Line # = %d", j);
        // NSLog(@"*****************");

        x0 = lineObject->x0;
        y0 = lineObject->y0;
        x1 = lineObject->x1;
        y1 = lineObject->y1;
        color = lineObject->Color;
        lineWidth = lineObject->LinkWidth;

        lineColor = [self add_colorWithRGBAHexValue:color];
        linearColor = lineColor;

        // Brush width
        CGContextSetLineWidth(context2, lineWidth);
        // Line Color
        CGContextSetStrokeColorWithColor(context2, [linearColor CGColor]);

        CGContextMoveToPoint(context2, x0, y0);
        CGContextAddLineToPoint(context2, x1, y1);
        CGContextStrokePath(context2);
    }
}

newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = newImage;
Just the image alone requires about 54 MB of memory (3000 × 4500 pixels × 4 bytes per pixel).
Since only a portion can be displayed at a time, consider dividing the image into several sections, like the map tiles used in Apple Maps.
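If you go the tiling route, CATiledLayer is the usual UIKit tool: it asks you to draw only the tiles that are currently visible, so the full bitmap never has to sit in memory at once. A minimal sketch, assuming the view's bounds match the image's pixel size (one point per pixel); the TiledImageView name and tile size are illustrative, not from the question:
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@property (nonatomic, strong) UIImage *image;
@end

@implementation TiledImageView

+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        ((CATiledLayer *)self.layer).tileSize = CGSizeMake(256, 256);
    }
    return self;
}

// Called once per visible tile; only the cropped tile is drawn,
// so off-screen parts of the 3000x4500 image are never rendered.
- (void)drawRect:(CGRect)rect {
    CGImageRef tile = CGImageCreateWithImageInRect(self.image.CGImage, rect);
    if (tile) {
        [[UIImage imageWithCGImage:tile] drawInRect:rect];
        CGImageRelease(tile);
    }
}

@end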

How to create an image with black and white pixels?

I'm trying to create a black-and-white image that can be displayed in a UIImageView and can also be shared via e-mail or saved to the camera roll.
I've got a two-dimensional array (an NSArray containing other NSArrays) which holds a matrix of the NSInteger values 0 and 1 (for a white and a black pixel, respectively).
So I just want to place a black pixel for a 1 and a white one for a 0.
All other questions I found deal with changing pixels from an existing image.
I hope you can help me!
[EDIT]:
Of course, I didn't want someone to do my work. I couldn't figure out how to create a new image and place the pixels as I wanted, so I tried instead to edit an existing image consisting only of white pixels and change a pixel's color where necessary. Below is my code for that attempt, but as you can see I had no idea how to change the pixel. I hope that shows I was trying on my own.
- (UIImage*) createQRCodeWithText:(NSString*)text andErrorCorrectionLevel:(NSInteger)level {
    QRGenerator *qr = [[QRGenerator alloc] init];
    NSArray *matrix = [qr createQRCodeMatrixWithText:text andCorrectionLevel:level];

    UIImage *image_base = [UIImage imageNamed:@"qr_base.png"];
    CGImageRef imageRef = image_base.CGImage;

    for (int row = 0; row < [matrix count]; row++) {
        for (int column = 0; column < [matrix count]; column++) {
            if ([[[matrix objectAtIndex:row] objectAtIndex:column] integerValue] == 1) {
                //set pixel (column, row) black
            }
        }
    }
    return [[UIImage alloc] initWithCGImage:imageRef];
}
I would create the image from scratch using a CGBitmapContext. You can call CGBitmapContextCreate() to allocate the memory for the image. You can then walk through the pixels just like you're doing now, setting them from your array. When you've finished, you can call CGBitmapContextCreateImage() to make a CGImageRef out of it. If you need a UIImage, you can call +[UIImage imageWithCGImage:] and pass it the CGImageRef you created above.
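A minimal sketch of that approach, assuming matrix is the NSArray of NSArrays of 0/1 NSIntegers described in the question (the method name is illustrative). An 8-bit grayscale context keeps it to one byte per pixel:
- (UIImage *)imageFromMatrix:(NSArray *)matrix {
    size_t height = [matrix count];
    size_t width = [[matrix objectAtIndex:0] count];

    // one byte per pixel: 0 = black, 255 = white
    UInt8 *pixels = calloc(width * height, sizeof(UInt8));
    for (size_t row = 0; row < height; row++) {
        NSArray *line = [matrix objectAtIndex:row];
        for (size_t col = 0; col < width; col++) {
            BOOL black = [[line objectAtIndex:col] integerValue] == 1;
            pixels[row * width + col] = black ? 0 : 255;
        }
    }

    // 8-bit grayscale bitmap context, no alpha
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                             width, gray, kCGImageAlphaNone);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    free(pixels);
    return image;
}
The resulting UIImage can go straight into a UIImageView, be converted with UIImagePNGRepresentation() for e-mailing, or be saved with UIImageWriteToSavedPhotosAlbum().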

How to get the coordinates of each pixel of a custom UIImage?

Hi everyone
I need to write a simple puzzle game, and the main requirement is that when a puzzle piece is released close to its destination, it ends up exactly where it should be.
So I tried to get an array of the coordinates of each pixel of the image. To do this, I want to compare each pixel's color with the background color; if they are not equal, that is the coordinate of an image pixel. But I don't know how to do this.
I tried:
- (BOOL)isImagePixel:(UIImage *)image withX:(int)x andY:(int)y {
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);

    int pixelInfo = ((image.size.width * y) + x) * 4; // The image is png

    UInt8 red = data[pixelInfo];
    UInt8 green = data[pixelInfo + 1];
    UInt8 blue = data[pixelInfo + 2];
    UInt8 alpha = data[pixelInfo + 3];
    CFRelease(pixelData);

    UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
    NSLog(@"color is %@", [UIColor whiteColor]);

    if ([color isEqual:self.view.backgroundColor]) {
        NSLog(@"x = %d, y = %d", x, y);
        return YES;
    }
    else return NO;
}
What is wrong here?
Or maybe someone can suggest another solution?
Thank you.
This appears to be a really cumbersome solution. My suggestion is that for every piece you maintain a table of, say, its top-left coordinate in the puzzle, and when the user lifts a finger you compute the absolute distance from the current location to the designated location.
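A hedged sketch of that table-based approach; the targetOrigins dictionary, the pieceWasReleased: hook, and the 20-point threshold are all illustrative names and values, not from the question:
// Assumes self.targetOrigins is an NSDictionary mapping a piece's tag
// (boxed NSInteger) to its destination origin (boxed CGPoint).
- (void)pieceWasReleased:(UIView *)piece {
    CGPoint target = [[self.targetOrigins objectForKey:@(piece.tag)] CGPointValue];
    CGFloat dx = piece.frame.origin.x - target.x;
    CGFloat dy = piece.frame.origin.y - target.y;

    // absolute distance from the current location to the designated location
    if (hypot(dx, dy) < 20.0) {
        CGRect frame = piece.frame;
        frame.origin = target;
        piece.frame = frame; // snap exactly into place
    }
}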

OpenCV, iOS, template matching, place of good matching

I am trying to create an app like this one: http://www.youtube.com/watch?v=V9LY8JqKLqE&feature=my_liked_videos&list=LLIeJ9s3lwD-lrqYMU409iAQ
Sadly, I don't know how to mark the place where the match is found.
I was working from this tutorial: http://aptogo.co.uk/2011/09/face-tracking/
My source code:
I load the template image in the DemoVideoCaptureViewController.mm file:
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIImage *testImage = [UIImage imageNamed:@"tt2.jpg"];
    tempMat = [testImage CVMat];

    std::vector<cv::KeyPoint> keypoints;
    cv::SurfFeatureDetector surf(250);
    surf.detect(tempMat, keypoints);

    cv::SurfDescriptorExtractor surfDesc;
    surfDesc.compute(tempMat, keypoints, description1);
}
and I try to find the object here:
- (void)processFrame:(cv::Mat &)mat videoRect:(CGRect)rect videoOrientation:(AVCaptureVideoOrientation)videOrientation
{
    cv::FlannBasedMatcher matcher;
    std::vector< std::vector<cv::DMatch> > matches;
    std::vector<cv::DMatch> good_matches;

    cv::SurfFeatureDetector surf2(250);
    std::vector<cv::KeyPoint> kp_image;
    surf2.detect(mat, kp_image);

    cv::SurfDescriptorExtractor surfDesc2;
    surfDesc2.compute(mat, kp_image, des_image);

    if ((des_image.rows > 0) && (description1.rows > 0)) {
        matcher.knnMatch(description1, des_image, matches, 2);

        for (int i = 0; i < MIN(des_image.rows - 1, (int)matches.size()); i++) {
            if ((matches[i][0].distance < 0.6 * (matches[i][1].distance)) && ((int)matches[i].size() <= 2 && (int)matches[i].size() > 0)) {
                good_matches.push_back(matches[i][0]);
            }
        }

        [CATransaction begin];
        [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

        // remove old layers
        for (CALayer *layer in self.view.layer.sublayers) {
            NSString *layerName = [layer name];
            if ([layerName isEqualToString:@"Layer"])
                [layer setHidden:YES];
        }
        [CATransaction commit];

        if (good_matches.size() >= 4) {
            NSLog(@"Finding");
        }
    }
}
But I don't know how to put a layer on the camera view.
Could someone help me?
The app in the video you posted can be created by following chapter 3 (Marker-less Augmented Reality) of the book "Mastering OpenCV with Practical Computer Vision Projects".
You still have to do some steps yourself, like calculating the homography, and you don't need CATransaction or any other iOS class: CvVideoCamera and cv::line are enough. A sketch of those remaining steps follows.
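For reference, a hedged sketch of those steps in the same OpenCV 2.x style as the question. It assumes the template keypoints from viewDidLoad are kept around (e.g. in an ivar rather than the question's local variable) and runs inside the good_matches.size() >= 4 branch:
// compute a homography mapping template points to frame points
std::vector<cv::Point2f> obj, scene;
for (size_t i = 0; i < good_matches.size(); i++) {
    obj.push_back(keypoints[good_matches[i].queryIdx].pt);   // template
    scene.push_back(kp_image[good_matches[i].trainIdx].pt);  // camera frame
}
cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC);

// project the template's corners into the frame
std::vector<cv::Point2f> corners(4), projected(4);
corners[0] = cv::Point2f(0, 0);
corners[1] = cv::Point2f(tempMat.cols, 0);
corners[2] = cv::Point2f(tempMat.cols, tempMat.rows);
corners[3] = cv::Point2f(0, tempMat.rows);
cv::perspectiveTransform(corners, projected, H);

// draw the box straight into the frame mat that CvVideoCamera displays
for (int i = 0; i < 4; i++) {
    cv::line(mat, projected[i], projected[(i + 1) % 4], cv::Scalar(0, 255, 0), 4);
}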
