Draw on UIImage to erase and see through, revealing the UIImage layer underneath [duplicate] - ios

This question already has an answer here:
How to erase piece of UIImageView with png-brush and UIBezierPath
(1 answer)
Closed 9 years ago.
I spent the last few days trying to find a way, but I'm stuck.
At a certain point in my application I would like two UIImages overlaid, so that when I start "drawing" on the top layer, the drawing erases that image and reveals the content underneath.
I don't know my way around Core Graphics, and after spending days on the net I'm wondering whether it is even possible.
Is there anyone who could help me, or point me in the direction I should follow?
Any help would be greatly appreciated.
Have a nice day.

Create a ScratchView class and set up your scratch image in its init method, like this:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        scratchable = [UIImage imageNamed:@"scratchable.jpg"].CGImage;
        width = CGImageGetWidth(scratchable);
        height = CGImageGetHeight(scratchable);
        self.opaque = NO;
        // A grayscale context that serves as the image mask:
        // black = scratch image visible, white = erased.
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
        CFMutableDataRef pixels = CFDataCreateMutable(NULL, width * height);
        alphaPixels = CGBitmapContextCreate(CFDataGetMutableBytePtr(pixels), width, height, 8, width, colorspace, kCGImageAlphaNone);
        provider = CGDataProviderCreateWithCFData(pixels);
        CGContextSetFillColorWithColor(alphaPixels, [UIColor blackColor].CGColor);
        // fill the whole mask (the mask is image-sized, not frame-sized)
        CGContextFillRect(alphaPixels, CGRectMake(0, 0, width, height));
        CGContextSetStrokeColorWithColor(alphaPixels, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(alphaPixels, 20.0);
        CGContextSetLineCap(alphaPixels, kCGLineCapRound);
        CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, width, provider, nil, NO);
        scratched = CGImageCreateWithMask(scratchable, mask);
        CGImageRelease(mask);
        CGColorSpaceRelease(colorspace);
    }
    return self;
}
Also create a second view class to display the background image that gets revealed by scratching.
The example below should be very useful to you:
https://github.com/oyiptong/CGScratch
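For completeness, here is a minimal sketch of the touch handling that completes the effect – adapted from the CGScratch approach rather than quoted from it, and assuming a CGPoint lastPoint ivar. White strokes drawn into alphaPixels punch holes in the mask, and scratched picks the change up because the mask shares the same pixel data:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    lastPoint = [[touches anyObject] locationInView:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    // flip y: the mask context uses Core Graphics (bottom-left) coordinates
    CGContextMoveToPoint(alphaPixels, lastPoint.x, self.frame.size.height - lastPoint.y);
    CGContextAddLineToPoint(alphaPixels, point.x, self.frame.size.height - point.y);
    CGContextStrokePath(alphaPixels);
    lastPoint = point;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, scratched);
}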

Related

How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to work out which pixel of the UIImageView was tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:testImageView];
    NSLog(@"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.width)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, none of the StackOverflow posts I have found have answers that work and aren't outdated. For skilled coders, however, you may be able to help me decipher the older posts to make something that works, or to produce a simple fix of your own using my above code for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, the code works perfectly when I run it through his example GitHub project on my physical device. However, when I run the same code in my own app, the image is replaced by white space and I am met with the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as demonstrated by me testing it on my phone. However, the same code has produced a few issues in my own project. Though I have the suspicion that they are all caused by one or two simple central issues. How can I solve these errors?
You'll want to break this problem up into multiple steps:
1. Get the coordinates of the touched point in the image coordinate system.
2. Get the x and y position of the pixel to change.
3. Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end

@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {
    UIViewContentMode contentMode = self.contentMode;
    // failure conditions. If any of these are met – return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }
    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;
    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;
    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth : rHeight;
        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;
        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;
        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
    if (!imageView.image) {
        return;
    }
    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};
    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)
@end

@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
    // components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);
    // raw image reference
    CGImageRef rawImage = self.CGImage;
    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};
    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;
    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    // data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);
    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);
    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0]; // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a
    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);
    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);
    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    return returnImage;
}

@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context, with the given attributes of your input image.
The important bit of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Finished result: (demo image omitted – see the full project below)
Full Project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
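Note that UIGraphicsBeginImageContext creates a context with a scale factor of 1, so the result won't be Retina-sharp; UIGraphicsBeginImageContextWithOptions(size, NO, originalImage.scale) preserves the source image's scale.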

png image is not displaying after "reDraw"

I am using custom png images for items of .tabBarItem of my UITabBarController.
But my PNG images are too big (64x64), so I use the method below to redraw the image in a smaller rect (for example, making the size parameter (25, 25)).
-(UIImage*) getSmallImage:(UIImage*)image inSize:(CGSize)size
{
    CGSize originalImageSize = image.size;
    CGRect newRect = CGRectMake(0, 0, size.width, size.height);
    float ratio = MAX(newRect.size.width/originalImageSize.width,
                      newRect.size.height/originalImageSize.height);
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    [path addClip];
    CGRect projectRect;
    projectRect.size.width = ratio * originalImageSize.width;
    projectRect.size.height = ratio * originalImageSize.height;
    //projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;
    //projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0;
    // Draw the image on it
    [image drawInRect:projectRect];
    // Get the image from the image context
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    // Cleanup image context resources
    UIGraphicsEndImageContext();
    return smallImage;
}
Every image I use was returned by this method. Everything was fine on the simulators, but the images were not displayed when I tested them on my iPhone.
But if I abandon the method above and set the image directly, like this: self.tabBarItem.image = [UIImage imageNamed:@"Input"]; then the images are shown correctly on my phone, only too big.
How can I fix this problem?
I'll answer this question by myself.
After hours of debugging, here is the problem:
In the method given above, the origin property of CGRect projectRect was not set.
After I set both origin.x and origin.y to 0, everything worked out.
Tip: every time you meet a WTF problem, be patient and test your code in different ways, because in 99.9% of these cases there is something wrong with your code instead of a bug in Xcode.
Though I still don't know why the code in my question works well in simulators, I'll let it go, because I guess someday when I become an expert this kind of question will seem easy, even silly.
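For clarity: a local CGRect is a plain C struct, so its fields start out with whatever garbage happens to be in memory unless assigned; the simulator just happened to hand out zeroed memory. The fix applied to the method above looks like this (using a zero origin; the commented-out centering lines would work too):
    CGRect projectRect;
    projectRect.size.width = ratio * originalImageSize.width;
    projectRect.size.height = ratio * originalImageSize.height;
    // An uninitialized local struct contains garbage – set the origin explicitly.
    projectRect.origin.x = 0;
    projectRect.origin.y = 0;
    [image drawInRect:projectRect];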

Custom UIProgressView with image

I’m trying to make a custom UIProgressView where the image that gets filled up is the Nike Swoosh. I’ve tried to follow some tutorials but am getting nowhere.
My current approach:
Make the inside of swoosh transparent and surroundings black.
Then put a big UIProgressView behind that.
Since the middle of the swoosh is transparent, it looks like the swoosh is filling up.
But, modifying the height of the progress bar has proven to be a pain since it messes with the width in a weird way…and it’s hard to align the swoosh with the progress bar for responsiveness.
Are there any other ideas or libraries out there?
Thanks
My suggestion:
Draw a second image programmatically to apply as a mask against your 'swoosh' image, and then repeat this cyclically. For example, to fill the image from the left, mask it from the right:
{
    //existing variables
    IBOutlet UIImageView *swooshView;
}

-(UIImage *)maskImageOfSize:(CGSize)size filledTo:(CGFloat)percentage {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGRect fillRect = CGRectZero;
    fillRect.size.height = size.height;
    fillRect.size.width = size.width * percentage / 100.0;
    fillRect.origin.x = (size.width - fillRect.size.width);
    CGContextFillRect(context, fillRect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
-(void)fillSwooshToPercentage:(CGFloat)percentage {
    // just policing a 'floor' and 'ceiling'...
    percentage = (CGFloat)fmaxf(0.0f, fminf(100.0f, (float)percentage));
    // layer.mask expects a CALayer, not a UIImage, so wrap the mask image in one
    UIImage *maskImage = [self maskImageOfSize:swooshView.bounds.size filledTo:percentage];
    CALayer *maskLayer = [CALayer layer];
    maskLayer.frame = swooshView.bounds;
    maskLayer.contents = (__bridge id)maskImage.CGImage;
    swooshView.layer.mask = maskLayer;
}
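A hypothetical usage sketch – the callback name here is made up, but any progress source (a timer, a download delegate) would drive it the same way:
    // Hypothetical caller: feed progress values (0–100) into the mask as they change.
    - (void)progressDidUpdate:(CGFloat)percentComplete {
        [self fillSwooshToPercentage:percentComplete];
    }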

Color of all the pixels in the screen

I want to know the color of every pixel and return an array of them. This is how I am doing it so far:
- (NSMutableArray *) colorOfPointinArray {
    NSMutableArray *array_of_colors = [[NSMutableArray alloc] init];
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace, kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
    for (int x_axis = 0; x_axis < screenWidth; x_axis++)
    {
        for (int y_axis = 0; y_axis < screenHeight; y_axis++)
        {
            CGContextTranslateCTM(context, -x_axis, -y_axis);
            [self.layer renderInContext:context];
            UIColor *color = [UIColor colorWithRed:pixel[0]/255.0 green:pixel[1]/255.0 blue:pixel[2]/255.0 alpha:pixel[3]/255.0];
            [array_of_colors addObject:color];
        }
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return array_of_colors;
}
Now, this is taking a lot of time and freezes the app. I think it's because of the two for-loops I have added. How can I improve this?
You're creating a 1x1 pixel context, and then rendering the image into that one pixel h*w times. No wonder it's taking forever! Instead, create a context that's the same size as the layer, and then render into that context just once. Then loop through the resulting pixels and keep the color values. This may still not be instantaneous; depending on the size of the layer, turning every pixel into a UIColor could still take awhile (and some nontrivial memory) but that'll be about as quick as you can get it in the general case if you really want output in that form.
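A minimal sketch of that approach (not from the original answer), reusing the question's screenWidth/screenHeight variables: render the layer once into a full-size context, then walk the pixel buffer.
- (NSMutableArray *)colorsOfAllPixels {
    size_t w = (size_t)screenWidth, h = (size_t)screenHeight;
    size_t bytesPerRow = w * 4;
    unsigned char *data = calloc(h, bytesPerRow);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, w, h, 8, bytesPerRow,
                                                 colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    // render once, not once per pixel
    [self.layer renderInContext:context];
    NSMutableArray *colors = [NSMutableArray arrayWithCapacity:w * h];
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            unsigned char *p = data + y * bytesPerRow + x * 4;
            [colors addObject:[UIColor colorWithRed:p[0]/255.0 green:p[1]/255.0
                                               blue:p[2]/255.0 alpha:p[3]/255.0]];
        }
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(data);
    return colors;
}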
This is similar to the problem of sampling a pixel color value from an image. There are tons of posts about that. This one has some nice examples: How to get the RGB values for a pixel on an image on the iphone

Tinting UIImage to a different color, OR, generating UIImage from vector

I have a circle with a black outline and a white fill, and I need to programmatically change the white to another color (via a UIColor). I've tried a handful of other StackOverflow solutions, but none of them seem to work correctly: they either fill just the outside or just the outline.
I have two ways I could do this but I am unsure of how I would get the right results:
Tint just the white color to whatever the UIColor should be,
or,
Make a UIImage from two circles, one being filled and one overlapping that with black.
If you decide to use two circles, one white and one black, then you may find this helpful. This method will tint a UIImage for you, and it addresses the problem of tinting only the opaque part – meaning it will only tint the circle itself if you provide a PNG with transparency around the circle. So instead of filling the entire 24x24 frame of the image, it fills only the opaque parts. This isn't exactly your question, but you'll probably come across this problem if you go with the second option you listed.
-(UIImage*)colorAnImage:(UIColor*)color :(UIImage*)image {
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
    CGContextRef c = UIGraphicsGetCurrentContext();
    [image drawInRect:rect];
    CGContextSetFillColorWithColor(c, [color CGColor]);
    CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
    CGContextFillRect(c, rect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Extend a UIView and just implement the drawRect method. For example, this will draw a green circle with a black outline.
- (void)drawRect:(CGRect)rect {
    [super drawRect:rect];
    CGContextRef gc = UIGraphicsGetCurrentContext();
    [[UIColor greenColor] setFill];
    CGContextFillEllipseInRect(gc, CGRectMake(0, 0, 24, 24));
    [[UIColor blackColor] set];
    CGContextStrokeEllipseInRect(gc, CGRectMake(0, 0, 24, 24));
}
For such simple shapes, just use CoreGraphics to draw a square and a circle -- adding the ability to set the fill color in your implementation.
If it's just black and white - then altering the white to another color is not so difficult when you know the color representations. Unfortunately, this is more complex to write and execute so… my recommendation is to just go straight to CoreGraphics for the simple task you outlined (bad pun, sorry).
here's a quick demo:
static void InsetRect(CGRect* const pRect, const CGFloat pAmount) {
    const CGFloat halfAmount = pAmount * 0.5f;
    *pRect = CGRectMake(pRect->origin.x + halfAmount, pRect->origin.y + halfAmount, pRect->size.width - pAmount, pRect->size.height - pAmount);
}

static void DrawBorderedCircleWithWidthInContext(const CGRect pRect, const CGFloat pWidth, CGContextRef pContext) {
    CGContextSetLineWidth(pContext, pWidth);
    CGContextSetShouldAntialias(pContext, true);
    CGRect r = pRect;
    /* draw circle's border */
    CGContextSetRGBStrokeColor(pContext, 0.8f, 0.7f, 0, 1);
    InsetRect(&r, pWidth);
    CGContextStrokeEllipseInRect(pContext, r);
    /* draw circle's fill */
    CGContextSetRGBFillColor(pContext, 0, 0, 0.3f, 1);
    InsetRect(&r, pWidth);
    CGContextFillEllipseInRect(pContext, r);
}
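A hypothetical harness for the demo – for example, calling it from a UIView subclass's drawRect:, where the current context is valid:
    // Hypothetical UIView subclass hosting the demo drawing functions above.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        DrawBorderedCircleWithWidthInContext(self.bounds, 4.0f, ctx);
    }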
