Finding colour of specific pixel on UIImageView - iOS

I am working on a paint application and want to build the undo manager. I store the pixel coordinates of each location where the user draws, but I also want to store the old pixel colour at each of those points so that an undo can restore the appropriate colour. I am unable to get that colour. Can anybody help?
Here is the code I am currently using to get the pixel colour:
UIGraphicsBeginImageContext(self.tempDrawImage.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
int bpr = CGBitmapContextGetBytesPerRow(context);
unsigned char *data = CGBitmapContextGetData(context);
if (data != NULL)
{
    int offset = bpr * (int)lastPoint.y + 4 * (int)lastPoint.x;
    NSLog(@"Red : %d",   data[offset + 0]);
    NSLog(@"Green : %d", data[offset + 1]);
    NSLog(@"Blue : %d",  data[offset + 2]);
}

The best solution for this is the UIImage+ColorAtPixel category on UIImage. I have used it; it works very well.
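If you don't want to pull in the category, here is a minimal Swift sketch of the same idea (our own helper, not the actual UIImage+ColorAtPixel code): redraw the image into a 1x1 RGBA bitmap context positioned so the requested pixel lands on that single pixel, then read the four bytes. It ignores the screen scale, so it assumes point and pixel coordinates coincide; the values read back are premultiplied by alpha.

import UIKit

func colorAtPixel(in image: UIImage, at point: CGPoint) -> UIColor? {
    guard let cgImage = image.cgImage else { return nil }
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    var pixel: [UInt8] = [0, 0, 0, 0]   // RGBA, premultiplied

    let drawn: Bool = pixel.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.setBlendMode(.copy)
        // Shift the drawing so the requested pixel falls on the context's one pixel.
        context.translateBy(x: -point.x, y: point.y - height)
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    return UIColor(red: CGFloat(pixel[0]) / 255,
                   green: CGFloat(pixel[1]) / 255,
                   blue: CGFloat(pixel[2]) / 255,
                   alpha: CGFloat(pixel[3]) / 255)
}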

You can create a copy of your image each time before drawing modifications begin (for example, just before the user taps the screen).
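A tiny sketch of that idea (the property and method names are ours, and tempDrawImage is assumed to be the question's UIImageView): snapshot the image view's current image when a touch begins, and restore it on undo. UIImage instances are immutable, so keeping the reference is enough.

var undoStack: [UIImage] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let current = tempDrawImage.image {
        undoStack.append(current)        // keep the pre-stroke state
    }
    super.touchesBegan(touches, with: event)
}

func undoLastStroke() {
    guard let previous = undoStack.popLast() else { return }
    tempDrawImage.image = previous       // restore the snapshot
}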

Related

How to get a color of a pixel in the image using p5.js and use it as a fill color?

I'm learning p5.js. I've tried the following code to draw a circle each time I move the mouse, with a fill color that changes according to the color of an image.
let img;

function setup() {
  createCanvas(400, 400);
  loadImage('https://upload.wikimedia.org/wikipedia/commons/e/ef/Hayao_Miyazaki.jpg', img => {
    image(img, 0, 0);
  });
  noStroke();
}

function draw() {
  let c = get(mouseX, mouseY);
  fill(c);
  circle(mouseX, mouseY, 30);
}
But it seems to take the color from the canvas, not from the image. Because of that, if you don't move the mouse fast enough the color doesn't change at all, and even if you do, the range of colors is much more limited; in other words, it's not what I intended.
I can get the colors right if I put the loadImage() part inside the draw function, but then only one circle at a time is visible.
Maybe I should store every pixel of the image in an array and get the values from that array, without using get()? Is that possible?
I think I'm missing something simple, please help.
Use img.get(mouseX, mouseY) to get values from the image, not the whole canvas.
I too thought that img.get(mouseX, mouseY); would work, and @mevfy-y also said so, so it might work?!

Curtain revealing view

I am trying to achieve the following challenging effect:
I want to move the white "curtain" down in order to reveal the red box.
(Note: in the screenshots below the curtain is white and the background is grey)
The problem is in the view hierarchy.
In order for the box to stay hidden in the initial position, it has to be placed behind the curtain, but in order to be shown in the final position, it has to be on top of the curtain.
How can I "cheat" and make it seem like the curtain really reveals the box with a smooth animation?
Thanks!
You need two images and a mask: the fully obscured gray area, and the box with its white background. The image for your curtain is only a mask of its bottom edge, so that it can draw the curtain's bottom fringe without obliterating the overlapping gray region.
Set a starting position at the top, each frame:
Draw/copy only the size of the curtain mask, copying the corresponding red box region through the curtain mask.
Move the starting position down one scan line and wait for the next frame. Repeat until done.
Essentially, there is no white curtain, only what is revealed of the "hidden" image which contains white background for the box. Depending on how you're drawing, your mask image could be another image with an alpha channel.
Edit: As requested, some example code. However, it is very possible that whatever you are using to get graphics on the screen already has draw routines with masking, and you would be better off using that.
This snippet is untested but should convey the logic and work pretty much anywhere. I'm not familiar with iOS and have no idea what format your image pixels are (24 bit, 32 bit, etc.), so I use "PixelType" as a substitute.
This also assumes the white curtain edge with a black background was made as an 8 bit image in a paint program, where black is zero and white is anything else. It should be the same width as the other two images and only as tall as needed for the curtain edge.
struct Mask
{
    char *mData;   // set this to the image data of your 8 bit mask
    int  mWidth;   // width in pixels, should be the same as your 2 images
    int  mHeight;  // height in pixels of the mask
};

int iRevealPos = 0; // increment each frame until the box is revealed

// Hopefully, your pixel type is a basic type like byte, short or int.
void Reveal(PixelType *foreground, PixelType *background, Mask *mask)
{
    int height = (iRevealPos < mask->mHeight) ? iRevealPos : mask->mHeight; // account for initial slide in
    PixelType *src = background + (iRevealPos * mask->mWidth); // background box at current reveal position
    PixelType *dst = foreground + (iRevealPos * mask->mWidth); // matching foreground screen position
    int count = mask->mWidth * height;
    char *filter = mask->mData;

    if (iRevealPos < mask->mHeight) // adjust for initial slide in
        filter += (mask->mHeight - iRevealPos) * mask->mWidth;

    while (count--)
    {
        if (*filter++)           // not black?
            *dst++ = *src++;     // copy the box image
        else                     // skip this pixel
        {
            src++;
            dst++;
        }
    }

    // if you create your mask with a solid white line at the top, you don't need this
    if (iRevealPos > mask->mHeight) // fixup, so the mask doesn't leave a trail
    {
        src = background + ((iRevealPos - 1) * mask->mWidth);
        dst = foreground + ((iRevealPos - 1) * mask->mWidth);
        count = mask->mWidth;
        while (count--)
            *dst++ = *src++;
    }

    iRevealPos++; // bump position for next time
}
If you create your mask with a solid white line or two at the top, you don't need the second loop, which fixes up any trail the mask leaves behind. I also allowed for the curtain to slide in rather than pop in fully at the start. This is untested, so I may have got the adjustments for that wrong.
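If this is a UIKit view hierarchy, you can also lean on the system's own masking instead of copying scanlines yourself. A minimal sketch (names such as curtainReveal and boxView are ours, not from the question): give the box a mask layer and slide the mask down, so the box becomes visible only where the mask covers it and appears to be uncovered from the top as the "curtain" edge passes. Animate the white curtain view downward over the same duration so its top edge tracks the mask's bottom edge.

import UIKit

func curtainReveal(boxView: UIView, duration: CFTimeInterval = 1.0) {
    let mask = CALayer()
    mask.backgroundColor = UIColor.black.cgColor   // opaque areas of a mask are visible
    // Start with the mask entirely above the box, so nothing of the box shows.
    mask.frame = boxView.bounds.offsetBy(dx: 0, dy: -boxView.bounds.height)
    boxView.layer.mask = mask

    // Slide the mask down by the box's height; its bottom edge acts as the curtain edge.
    let slide = CABasicAnimation(keyPath: "position.y")
    slide.fromValue = mask.position.y
    slide.toValue = mask.position.y + boxView.bounds.height
    slide.duration = duration
    slide.fillMode = .forwards
    slide.isRemovedOnCompletion = false
    mask.add(slide, forKey: "curtain")
}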

Swift Progress Indicator Image Mask

To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as the script runs. The script will call a JSON feed that is pulled from the remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images; one white and one red, and then simply do some masking based on the progress amount.
Any thoughts on this?
Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower half of the drawing, to whatever height you want it; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So, one PNG is enough because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent: CGFloat) {
    guard let tear = UIImage(named: "tear") else { return }
    let sz = tear.size
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let con = UIGraphicsGetCurrentContext() else { return }
    UIColor.red.setFill()
    con.fill(CGRect(x: 0, y: top, width: sz.width, height: sz.height))
    tear.draw(at: .zero)
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!
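For completeness, the slider hookup described above can be a one-liner (assuming the slider keeps its default 0 to 1 range and the action name is ours):

@IBAction func sliderChanged(_ sender: UISlider) {
    redraw(percent: CGFloat(sender.value))
}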

Colorizing image ignores alpha channel — why and how to fix?

Here's what I'm trying to do: On the left is a generic, uncolorized RGBA image that I've created off-screen and cached for speed (it's very slow to create initially, but very fast to colorize with any color later, as needed). It's a square image with a circular swirl. Inside the circle, the image has an alpha/opacity of 1. Outside the circle, it has an alpha/opacity of 0. I've displayed it here inside a UIView with a background color of [UIColor scrollViewTexturedBackgroundColor]. On the right is what happens when I attempt to colorize the image by filling a solid red rectangle over the top of it after setting CGContextSetBlendMode(context, kCGBlendModeColor).
That's not what I want, nor what I expected. Evidently, colorizing a completely transparent pixel (e.g., alpha value of 0) results in the full-on fill color for some strange reason, rather than remaining transparent as I would have expected.
What I want is actually this:
Now, in this particular case, I can set the clipping region to a circle, so that the area outside the circle remains untouched — and that's what I've done here as a workaround.
But in my app, I also need to be able to colorize arbitrary shapes where I don't know the clipping/outline path. One example is colorizing white text by overlaying a gradient. How is this done? I suspect there must be some way to do it efficiently — and generally, with no weird path/clipping tricks — using image masks... but I have yet to find a tutorial on this. Obviously it's possible because I've seen colored-gradient text in other games.
Incidentally, what I can't do is start with a gradient and clip/clear away parts I don't need — because (as shown in the example above) my uncolorized source images are, in general, grayscale rather than pure white. So I really need to start with the uncolorized image and then colorize it.
p.s. — kCGBlendModeMultiply also has the same flaws / shortcomings / idiosyncrasies when it comes to colorizing partially transparent images. Does anyone know why Apple decided to do it that way? It's as if the Quartz colorizing code treats RGBA(0,0,0,0) as RGBA(0,0,0,1), i.e., it completely ignores and destroys the alpha channel.
One approach that will work is to construct a mask from the original image and then call the CGContextClipToMask() function before rendering your image with the multiply blend mode set. Here is the Core Graphics code that would set the mask before drawing the image to be colored.
CGContextRef context = [frameBuffer createBitmapContext];
CGRect bounds = CGRectMake( 0.0f, 0.0f, width, height );
CGContextClipToMask(context, bounds, maskImage.CGImage);
CGContextDrawImage(context, bounds, greyImage.CGImage);
The slightly more tricky part will be to take the original image and generate a maskImage. What you can do for that is write a loop that will examine each pixel and write either a black or white pixel as the mask value. If the original pixel in the image to color is completely transparent, then write a black pixel, otherwise write a white pixel. Note that the mask value will be a 24BPP image. Here is some code to give you the right idea.
uint32_t *inPixels = (uint32_t *) MEMORY_ADDR_OF_ORIGINAL_IMAGE;
uint32_t *maskPixels = malloc(numPixels * sizeof(uint32_t));
uint32_t *maskPixelsPtr = maskPixels;

for (int rowi = 0; rowi < height; rowi++) {
    for (int coli = 0; coli < width; coli++) {
        uint32_t inPixel = *inPixels++;
        uint32_t inAlpha = (inPixel >> 24) & 0xFF;
        uint32_t cval = 0;
        if (inAlpha != 0) {
            cval = 0xFF;
        }
        uint32_t outPixel = (0xFF << 24) | (cval << 16) | (cval << 8) | cval;
        *maskPixelsPtr++ = outPixel;
    }
}
You will of course need to fill in all the details and create the graphics contexts and so on. But the general idea is to simply create your own mask to filter out drawing of the red parts around the outside of the circle.
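If you are working in Swift, here is a rough sketch of the same approach (the helper names are ours): build an 8-bit grayscale mask from the source's alpha channel, clip to it, draw the grey image, and multiply-fill the color. Fully transparent pixels stay untouched; semi-transparent edge pixels may still need extra care.

import UIKit

// White wherever the source has any alpha, black where it is fully transparent
// (the same rule as the loop above).
func makeAlphaMask(from source: CGImage) -> CGImage? {
    let width = source.width, height = source.height
    guard
        let rgbaCtx = CGContext(data: nil, width: width, height: height,
                                bitsPerComponent: 8, bytesPerRow: 0,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
        let grayCtx = CGContext(data: nil, width: width, height: height,
                                bitsPerComponent: 8, bytesPerRow: 0,
                                space: CGColorSpaceCreateDeviceGray(),
                                bitmapInfo: CGImageAlphaInfo.none.rawValue)
    else { return nil }

    rgbaCtx.draw(source, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let rgba = rgbaCtx.data?.assumingMemoryBound(to: UInt8.self),
          let gray = grayCtx.data?.assumingMemoryBound(to: UInt8.self)
    else { return nil }

    for y in 0..<height {
        for x in 0..<width {
            let alpha = rgba[y * rgbaCtx.bytesPerRow + x * 4 + 3]
            gray[y * grayCtx.bytesPerRow + x] = alpha > 0 ? 255 : 0
        }
    }
    return grayCtx.makeImage()
}

// Clip to the mask, draw the grey source, then multiply a solid color over it.
func colorize(_ grey: UIImage, with color: UIColor) -> UIImage? {
    guard let cg = grey.cgImage, let mask = makeAlphaMask(from: cg) else { return nil }
    let bounds = CGRect(x: 0, y: 0, width: cg.width, height: cg.height)
    guard let ctx = CGContext(data: nil, width: cg.width, height: cg.height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    ctx.clip(to: bounds, mask: mask)   // fully transparent pixels are never touched
    ctx.draw(cg, in: bounds)
    ctx.setBlendMode(.multiply)        // or .color, as in the question
    ctx.setFillColor(color.cgColor)
    ctx.fill(bounds)

    guard let out = ctx.makeImage() else { return nil }
    return UIImage(cgImage: out, scale: grey.scale, orientation: .up)
}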

How to optimize this image processing: replace all pixels in an image with the closest available RGB?

I'm trying to replace every pixel of an input image with the closest available RGB from a palette. I have an array containing the palette colors and the input image. Here is my code; it gives me the output image I expect, BUT it takes a very long time (about a minute) to process one image. Can anybody help me improve the code? Or if you have any other suggestions, please help.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage)), NO, 0.0f);
// I keep the context size the same as the original input image size,
// otherwise the output will be only a partial image.
CGContextRef context;
context = UIGraphicsGetCurrentContext();

// This is for flipping upside down
CGContextTranslateCTM(context, 0, self.imageViewArea.image.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

// init vars
float d = 0;          // squared error
int idx = 0;          // index of palette color
int min = 1000000;    // min difference
UIColor *oneRGB;      // color at a pixel
UIColor *paletteRGB;  // palette color

// visit each output color and determine closest color from palette
for (int y = 0; y < sizeY; y++) {
    for (int x = 0; x < sizeX; x++) {
        // desired (avg) color is one pixel of scaled image
        oneRGB = [inputImgAvg colorAtPixel:CGPointMake(x, y)];

        // find closest color match in palette: init idx with index
        // of closest match; keep track of min to find idx
        min = 1000000;
        idx = 0;

        CGContextDrawImage(context, CGRectMake(xx, yy, 1, 1), img);
    }
}

UIImage *output = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

self.imageViewArea.image = output;
This is a similar question (with no definitive answer), but the answer there has the code for directly accessing pixels from an image.
Quantize Image, Save List of Remaining Colors
You should do that rather than use CG functions for each get and set of a pixel. Drawing 1 pixel of an image onto another image is a lot slower than changing 3 bytes in an array.
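For illustration, a rough Swift sketch of that direct-buffer approach (the helper name, the tuple palette type, and the brute-force search are ours; an effectively opaque image is assumed, since the buffer holds premultiplied alpha): draw the image once into an RGBA bitmap, rewrite the bytes in place, and make one image at the end.

import UIKit

func remap(_ image: UIImage, toPalette palette: [(r: Int, g: Int, b: Int)]) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let width = cg.width, height = cg.height
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buf = ctx.data?.assumingMemoryBound(to: UInt8.self) else { return nil }

    let bytesPerRow = ctx.bytesPerRow
    for y in 0..<height {
        for x in 0..<width {
            let p = y * bytesPerRow + x * 4
            let r = Int(buf[p]), g = Int(buf[p + 1]), b = Int(buf[p + 2])

            // Brute-force nearest palette entry by squared RGB distance.
            var best = 0, bestDist = Int.max
            for (i, c) in palette.enumerated() {
                let d = (c.r - r) * (c.r - r) + (c.g - g) * (c.g - g) + (c.b - b) * (c.b - b)
                if d < bestDist { bestDist = d; best = i }
            }
            buf[p]     = UInt8(palette[best].r)
            buf[p + 1] = UInt8(palette[best].g)
            buf[p + 2] = UInt8(palette[best].b)
            // The alpha byte at p + 3 is left untouched.
        }
    }
    guard let out = ctx.makeImage() else { return nil }
    return UIImage(cgImage: out, scale: image.scale, orientation: .up)
}
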
Also, what's in ColorDiff? You don't need a perfect difference metric as long as the closest palette entry has the smallest diff. There may be room for pre-processing the palette so that for each entry you store the smallest diff to its nearest other palette entry. Then, while looping through pixels, you can quickly check whether the next pixel is within half that distance of the color just found (because photos tend to have similar colors near each other).
If that's not a match, then while looping through the palette, as soon as you are within half this distance of any entry, there is no need to check further.
Basically, this puts a zone around each palette entry where you know for sure that this one is the closest.
The usual answer is to use a k-d tree or some other spatial partitioning structure, such as an octree, to reduce the number of computations and comparisons that have to be done at each pixel.
I've also had success with partitioning the color space into a regular grid and keeping a list of possible closest matches for each part of the grid. For example you can divide the (0-255) values of R,G,B by 16 and end up with a grid of (16,16,16) or 4096 elements altogether. Best case is that there's only one member of the list for a particular grid element and no need to traverse the list at all.
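Here is a hedged Swift sketch of that grid idea, simplified to a single precomputed entry per cell rather than a full candidate list; it is therefore an approximation (it returns the palette color nearest to the cell centre, not necessarily to the exact pixel value), but per-pixel work becomes one table lookup.

// Quantize each channel to 4 bits and precompute, for each of the 16*16*16 cells,
// the palette index nearest to the cell centre.
func buildGridLUT(palette: [(r: Int, g: Int, b: Int)]) -> [Int] {
    var lut = [Int](repeating: 0, count: 16 * 16 * 16)
    for r in 0..<16 {
        for g in 0..<16 {
            for b in 0..<16 {
                let centre = (r * 16 + 8, g * 16 + 8, b * 16 + 8)
                var best = 0, bestDist = Int.max
                for (i, p) in palette.enumerated() {
                    let dr = p.r - centre.0, dg = p.g - centre.1, db = p.b - centre.2
                    let d = dr * dr + dg * dg + db * db
                    if d < bestDist { bestDist = d; best = i }
                }
                lut[(r << 8) | (g << 4) | b] = best
            }
        }
    }
    return lut
}

// Per-pixel lookup: shift each 8-bit channel down to 4 bits and index the table.
func nearestPaletteIndex(r: UInt8, g: UInt8, b: UInt8, lut: [Int]) -> Int {
    return lut[(Int(r >> 4) << 8) | (Int(g >> 4) << 4) | Int(b >> 4)]
}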
