How to get the coordinates of each pixel of a custom UIImage? - ios

Hi everyone
I need to write a simple puzzle game, and the main requirement is that when a puzzle piece is "released" close to its destination, it snaps exactly to where it should be.
So I tried to get an array of the coordinates of each pixel of the image. To do this I want to compare each pixel's color with the background color; if they are not equal, that coordinate belongs to the image. But I don't know how to do this.
I tried:
- (BOOL)isImagePixel:(UIImage *)image withX:(int)x andY:(int) y {
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* data = CFDataGetBytePtr(pixelData);
int pixelInfo = ((image.size.width * y) + x ) * 4; // The image is png
UInt8 red = data[pixelInfo];
UInt8 green = data[(pixelInfo + 1)];
UInt8 blue = data[pixelInfo + 2];
UInt8 alpha = data[pixelInfo + 3];
CFRelease(pixelData);
UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
NSLog(#"color is %#",[UIColor whiteColor]);
if ([color isEqual:self.view.backgroundColor]){
NSLog(#"x = %d, y = %d",x,y);
return YES;
}
else return NO;
}
What is wrong here?
Or maybe someone can suggest me another solution?
Thank you.

This appears to be a really cumbersome solution. My suggestion is that for every piece you maintain a table of, say, its top-left coordinate in the puzzle, and when the user lifts a finger you compute the absolute distance from the current location to the designated location.
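A minimal sketch of that idea, assuming each piece's target origin comes from a hypothetical targetOriginForPiece: lookup and using a tunable snap threshold:
// Rough sketch – targetOriginForPiece: and kSnapDistance are assumptions, not existing API.
static const CGFloat kSnapDistance = 20.0; // snap threshold in points, tune to taste

- (void)pieceWasReleased:(UIView *)piece {
    CGPoint target = [self targetOriginForPiece:piece]; // hypothetical table lookup
    CGFloat dx = piece.frame.origin.x - target.x;
    CGFloat dy = piece.frame.origin.y - target.y;
    // Euclidean distance between the current and designated positions
    if (sqrt(dx * dx + dy * dy) < kSnapDistance) {
        CGRect frame = piece.frame;
        frame.origin = target;
        piece.frame = frame; // snap exactly into place
    }
}
This avoids touching pixel data at all; each piece only needs to know its own target position.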

Related

How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to work out which pixel of the UIImageView was tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
CGPoint touchPoint = [gesture locationInView:testImageView];
NSLog(#"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.width)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, none of the Stack Overflow posts I have found have answers that work or are not outdated. Skilled coders may be able to help me decipher the older posts into something that works, or to produce a simple fix using my code above for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, the code works perfectly when I run his example GitHub project on my physical device. However, when I run the same code in my own app, the image is replaced with white space and I get the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as demonstrated by testing it on my phone, yet the same code produces these issues in my own project. I suspect they are all caused by one or two simple central issues. How can I solve these errors?
You'll want to break this problem up into multiple steps.
Get the coordinates of the touched point in the image coordinate system
Get the x and y position of the pixel to change
Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)
@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;
@end
@implementation UIImageView (PointConversionCatagory)
-(CGAffineTransform) viewToImageTransform {
UIViewContentMode contentMode = self.contentMode;
// failure conditions. If any of these are met – return the identity transform
if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
(contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
return CGAffineTransformIdentity;
}
// the width and height ratios
CGFloat rWidth = self.image.size.width/self.frame.size.width;
CGFloat rHeight = self.image.size.height/self.frame.size.height;
// whether the image will be scaled according to width
BOOL imageWiderThanView = rWidth > rHeight;
if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
// The ratio to scale both the x and y axis by
CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;
// The x-offset of the inner rect as it gets centered
CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;
// The y-offset of the inner rect as it gets centered
CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;
return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
} else {
return CGAffineTransformMakeScale(rWidth, rHeight);
}
}
-(CGAffineTransform) imageToViewTransform {
return CGAffineTransformInvert(self.viewToImageTransform);
}
@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
if (!imageView.image) {
return;
}
// get the pixel position
CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};
// replace image with new image, with the pixel replaced
imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
NSInteger x;
NSInteger y;
};
typedef struct PixelPosition PixelPosition;
@interface UIImage (UIImagePixelManipulationCatagory)
@end
@implementation UIImage (UIImagePixelManipulationCatagory)
-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
// components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
UInt8* color255Components = calloc(sizeof(UInt8), 4);
for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);
// raw image reference
CGImageRef rawImage = self.CGImage;
// image attributes
size_t width = CGImageGetWidth(rawImage);
size_t height = CGImageGetHeight(rawImage);
CGRect rect = {CGPointZero, {width, height}};
// image format
size_t bitsPerComponent = 8;
size_t bytesPerRow = width*4;
// the bitmap info
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
// data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
UInt8* data = calloc(bytesPerRow, height);
// get new RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create bitmap context
CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
// draw image into context (populating the data array while doing so)
CGContextDrawImage(ctx, rect, rawImage);
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
// get image from context
CGImageRef img = CGBitmapContextCreateImage(ctx);
// clean up
free(color255Components);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(data);
UIImage* returnImage = [UIImage imageWithCGImage:img];
CGImageRelease(img);
return returnImage;
}
@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context, with the given attributes of your input image.
The important bit of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Finished Result:
Full Project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
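If it helps, a rough way to wrap that snippet into a helper for the tap handler above (the method name and the pixelX/pixelY parameters are mine, not part of the original answer):
// Sketch only – returns a copy of the image with one pixel filled with myColor.
- (UIImage *)image:(UIImage *)originalImage byFillingPixelAtX:(NSInteger)pixelX y:(NSInteger)pixelY withColor:(UIColor *)myColor {
    CGSize size = originalImage.size;
    // pass the original scale so a large image is not resampled on retina devices
    UIGraphicsBeginImageContextWithOptions(size, NO, originalImage.scale);
    [originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [myColor setFill];
    UIRectFill(CGRectMake(pixelX, pixelY, 1, 1));
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Note that this redraws the whole image on every tap, which is fine for occasional edits but slower than writing into a reused bitmap context.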

Xcode - How to draw a View using thread?

I have a graphic view which is drawn in the -(void)drawRect:(CGRect)rect method. It is a slightly complex graphic, with dates on the X axis and times on the Y axis, and every time I draw it, it takes 2-3 seconds to display the final result. The points may be red, green or yellow; their colour depends on data read from the database.
My question is: how can I draw this view using multiple threads, or otherwise speed it up?
For example, draw the X axis and Y axis in one thread and each day's graphics in a separate thread (using 30 threads), or draw each hour across the 30 days in a separate thread (using 5 threads)?
I have tried using the following function:
for(int i = 0; i < 31; i++){
NSString *threadStr = [NSString stringWithFormat:@"%i", i];
dispatch_queue_t queue1 = dispatch_queue_create([threadStr UTF8String], NULL);
dispatch_async(queue1, ^{
//Draw my graphics for one day
.....
});
}
But sometimes these threads read the database at the same time and some of them cannot get the data properly, which causes the final graphic to be incomplete.
Could anyone give me some suggestion?
Thank you very much.
I am also trying to find out why it takes 2-3 seconds when I draw it on the main thread.
Is it because of reading the SQLite3 database?
The procedure of drawing this graphic is :
- (void)drawRect:(CGRect)rect
{
// 1. draw X axis with date labels
// 2. draw Y axis with time labels
NSArray *timeArray = @[@"08:00", @"11:00", @"14:00", @"17:00", @"20:00"];
for(NSString *timeStr in timeArray){
for(int i = 0; i < 31; i++){
NSString *DateTimeStr = dateStr + timeStr;
int a = [DataManager getResult: DateTimeStr];
UIColor *RoundColor;
if(a == 1){
RoundColor = [UIColor colorWithRed:229/255.0 green:0/255.0 blue:145/255.0 alpha:1.0];
}
else if(a == 2){
RoundColor = [UIColor colorWithRed:0/255.0 green:164/255.0 blue:229/255.0 alpha:1.0];
}
else if(a == 3){
RoundColor = [UIColor colorWithRed:229/255.0 green:221/255.0 blue:0/255.0 alpha:colorAlpha];
}
const CGFloat *components = CGColorGetComponents(RoundColor.CGColor);
CGContextSetRGBFillColor(context, components[0], components[1], components[2], components[3]);
CGContextFillEllipseInRect(context, CGRectMake(roundXnow,roundYnow,roundR,roundR));
}
}
}
I am wondering whether the delay occurs because of reading the SQLite3 database, since I know there are a lot of records in it.
Drawing here is a very light operation; most likely the timing issues you're facing come from the calculations (or the data fetching) or whatever else you do before actually drawing the dots. So use Instruments, the Time Profiler in particular. It will show you which operations take the longest (look at the peaks in the graph displayed while your app is running).
This will give you the clue.
Besides, as correctly mentioned here, all UI work should be performed on the main thread.
But the pre-calculations and fetches that you probably do on your data - that's where you should look for multithreaded solutions, if it's at all possible.
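A minimal sketch of that split, assuming the database reads can be batched into a hypothetical loadResultsForMonth method and that the graph view exposes a results property (both names are mine):
// Sketch: do the SQLite reads once, off the main thread, then redraw on the main thread.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSArray *results = [DataManager loadResultsForMonth]; // hypothetical batched fetch
    dispatch_async(dispatch_get_main_queue(), ^{
        self.graphView.results = results;  // hand the precomputed data to the view
        [self.graphView setNeedsDisplay];  // drawRect: now only draws, no database access
    });
});
With the data already in memory, drawRect: only has to loop over the array and fill the ellipses, which should take milliseconds.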

unsigned char allocation comes back nil at offset value - objective c

I am getting the pixel colour values at touch points. This works, but after some time the app crashes with EXC_BAD_ACCESS (code=1, address=0x41f6864). It seems to be a memory allocation problem; here is the source code for reference.
- (UIColor *) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
@try {
CGImageRef inImage = drawImage.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpa, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL)
{
return nil; /* error */
}
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage (cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char *data = {0};
data=(unsigned char*) calloc(CGImageGetHeight(inImage) * CGImageGetWidth(inImage) , CGBitmapContextGetHeight(cgctx)*CGBitmapContextGetWidth(cgctx));
data= CGBitmapContextGetData (cgctx);
if( data !=NULL ) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
// NSLog(#"%s111111",data);
int alpha = data[offset]; /////// EXC_BAD_ACCESS(CODE=1,address=0x41f6864)
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
//NSLog(#"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
//CGImageRelease(*data);
CGContextRelease(cgctx);
// Free image data memory for the context
if (data)
{
free(data);
}
}
@catch (NSException *exception) {
}
return color;
}
The memory management in your code appears to be wrong:
Declare data and pointlessly assign a value to it:
unsigned char *data = {0};
Allocate a memory block and store a reference to it in data - overwriting the pointless initialisation:
data = (unsigned char *)calloc(CGImageGetHeight(inImage) * CGImageGetWidth(inImage), CGBitmapContextGetHeight(cgctx) * CGBitmapContextGetWidth(cgctx));
Now get a reference to a different memory block and store it in data, throwing away the reference to the calloc'ed block:
data = CGBitmapContextGetData (cgctx);
Do some other stuff and then free the block you did not calloc:
free(data);
If you are allocating your own memory buffer, you should pass it to CGBitmapContextCreate; however, provided you are using iOS 4+, there is no need to allocate your own buffer.
As to the memory access error, you are doing no checks on the value of point and your calculation would appear to be producing a value of offset which is incorrect. Add checks on the values of point and offset and take appropriate action if they are out of bounds (you will have to decide what that should be).
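For example, a bounds check along these lines, inserted before reading from data (a sketch, not the only way to handle it):
// Sketch: reject points that fall outside the bitmap before indexing into it.
size_t width = CGBitmapContextGetWidth(cgctx);
size_t height = CGBitmapContextGetHeight(cgctx);
NSInteger x = (NSInteger)round(point.x);
NSInteger y = (NSInteger)round(point.y);
if (x < 0 || y < 0 || x >= (NSInteger)width || y >= (NSInteger)height) {
    CGContextRelease(cgctx);
    return nil; // the point is outside the image – nothing sensible to read
}
NSInteger offset = 4 * ((NSInteger)width * y + x);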
HTH
The problem may be caused by the point being outside the image rect, so you can use
@try {
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
} @catch (NSException *exception) {
}
to avoid the EXC_BAD_ACCESS

How to make a saturation value slider with OpenCV in Xcode

I want to make a slider that can change the saturation of the image in an image view.
I'm currently using OpenCV. I've found some code on the web and tried it. It works, but in a slightly strange way: there is a white cup in the image, but its colour cycles through the rainbow regardless of the slider value (unless the value makes it totally grayscale).
- (IBAction)stSlider:(id)sender {
float value = stSlider.value;
UIImage *image = [UIImage imageNamed:@"sushi.jpg"];
cv::Mat mat = [self cvMatFromUIImage:image];
cv::cvtColor(mat, mat, CV_RGB2HSV);
for (int i=0; i<mat.rows;i++)
{ for (int j=0; j<mat.cols;j++)
{
int idx = 1;
mat.at<cv::Vec3b>(i,j)[idx] = value;
}
}
cv::cvtColor(mat, mat, CV_HSV2RGB);
imageView.image = [self UIImageFromCVMat:mat];
}
This is the code I used.
Please tell me which part I have to change to make it work right.

iOS Performance Tuning: fastest way to get pixel color for large images

There are a number of questions/answers regarding how to get the pixel color of an image for a given point. However, all of these answers are really slow (100-500ms) for large images (even as small as 1000 x 1300, for example).
Most of the code samples out there draw to an image context. All of them take time when the actual draw takes place:
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage)
Examining this in Instruments reveals that the draw spends its time copying the data from the source image.
I have even tried a different means of getting at the data, hoping that getting to the bytes themselves would actually prove much more efficient.
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = CGImageCreateWithImageInRect(self.CGImage,
CGRectMake(pointX * self.scale,
pointY * self.scale,
1.0f,
1.0f));
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef data = CGDataProviderCopyData(provider);
CGImageRelease(cgImage);
UInt8* buffer = (UInt8*)CFDataGetBytePtr(data);
CGFloat red = (float)buffer[0] / 255.0f;
CGFloat green = (float)buffer[1] / 255.0f;
CGFloat blue = (float)buffer[2] / 255.0f;
CGFloat alpha = (float)buffer[3] / 255.0f;
CFRelease(data);
UIColor *pixelColor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
return pixelColor;
This method takes its time on the data copy:
CFDataRef data = CGDataProviderCopyData(provider);
It would appear that it too is reading the data from disk, instead of from the CGImage instance I am creating.
Now, this method does perform better in some informal testing, but it is still not as fast as I want it to be. Does anyone know of an even faster way of getting the underlying pixel data?
If it's possible for you to draw this image to the screen via OpenGL ES, you can get extremely fast random access to the underlying pixels in iOS 5.0 via the texture caches introduced in that version. They allow for direct memory access to the underlying BGRA pixel data stored in an OpenGL ES texture (where your image would be residing), and you could pick out any pixel from that texture almost instantaneously.
I use this to read back the raw pixel data of even large (2048x2048) images, and the read times are at worst in the range of 10-20 ms to pull down all of those pixels. Again, random access to a single pixel there takes almost no time, because you're just reading from a location in a byte array.
Of course, this means that you'll have to parse and upload your particular image to OpenGL ES, which will involve the same reading from disk and interactions with Core Graphics (if going through a UIImage) that you'd see if you tried to read pixel data from a random PNG on disk, but it sounds like you just need to render once and sample from it multiple times. If so, OpenGL ES and the texture caches on iOS 5.0 would be the absolute fastest way to read back this pixel data for something also displayed onscreen.
I encapsulate these processes in the GPUImagePicture (image upload) and GPUImageRawData (fast raw data access) classes within my open source GPUImage framework, if you want to see how something like that might work.
I have yet to find a way to get access to the drawn (in frame buffer) pixels. The fastest method I've measured is:
Indicate you want the image to be cached by specifying kCGImageSourceShouldCache when creating it.
(optional) Precache the image by forcing it to render.
Draw the image into a 1x1 bitmap context.
The cost of this method is the cached bitmap, which may have a lifetime as long as the CGImage it is associated with. The code ends up looking something like this:
Create image w/ ShouldCache flag
NSDictionary *options = @{ (id)kCGImageSourceShouldCache: @(YES) };
CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef cgimage = CGImageSourceCreateImageAtIndex(imageSource, 0, (__bridge CFDictionaryRef)options);
UIImage *image = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);
Precache image
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
Draw image to a 1x1 bitmap context
unsigned char pixelData[] = { 0, 0, 0, 0 };
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgimage = image.CGImage;
int imageWidth = CGImageGetWidth(cgimage);
int imageHeight = CGImageGetHeight(cgimage);
CGContextDrawImage(context, CGRectMake(-testPoint.x, testPoint.y - imageHeight, imageWidth, imageHeight), cgimage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
pixelData has the R, G, B, and A values of the pixel at testPoint.
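From there, turning those bytes into a UIColor is just a division by 255 (a small sketch; note that the context above uses premultiplied alpha, so partially transparent pixels come back premultiplied):
// pixelData[0..3] were filled by the 1x1 draw above (RGBA, 8 bits per component).
UIColor *pixelColor = [UIColor colorWithRed:pixelData[0] / 255.0f
                                      green:pixelData[1] / 255.0f
                                       blue:pixelData[2] / 255.0f
                                      alpha:pixelData[3] / 255.0f];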
A CGImage context is possibly nearly empty and contains no actual pixel data until you try to read the first pixel or draw it, so trying to speed up getting pixels from an image might not get you anywhere. There's nothing to get yet.
Are you trying to read pixels from a PNG file? You could try going directly after the file, mmap'ing it, and decoding the PNG format yourself. It will still take a while to pull the data from storage.
- (BOOL)isWallPixel:(UIImage *)image :(int)x :(int)y {
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* data = CFDataGetBytePtr(pixelData);
int pixelInfo = ((image.size.width * y) + x ) * 4; // The image is png
//UInt8 red = data[pixelInfo]; // If you need this info, enable it
//UInt8 green = data[(pixelInfo + 1)]; // If you need this info, enable it
//UInt8 blue = data[pixelInfo + 2]; // If you need this info, enable it
UInt8 alpha = data[pixelInfo + 3]; // I need only this info for my maze game
CFRelease(pixelData);
//UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info
if (alpha) return YES;
else return NO;
}
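A hedged usage sketch, called from a touch handler (wallImage, mazeView, and the assumption that the touch point is already in image pixel coordinates are mine, not part of the answer above):
// Hypothetical usage – mazeView/wallImage are placeholders for your own view and maze bitmap.
CGPoint p = [touch locationInView:self.mazeView];
if ([self isWallPixel:wallImage :(int)p.x :(int)p.y]) {
    // the touched pixel is opaque, i.e. part of the maze image – block the move here
}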
