Determine colour, alpha of a point in a UIImageView

Is there any way to query a UIImageView to see what is at any given point? For example, when the user touches a particular point, what's the colour of the pixel under the touch?

You could draw the image from the image view into a CGBitmapContext that your app has allocated, then read the RGBA bytes at the appropriate offset within the bitmap, computed from the XY touch location and the bitmap's bytes-per-row. Then compute the color as needed from the RGBA byte values.
For efficiency, you might be able to draw only a tiny sub-rectangle of the image into a tiny bitmap (1x1 pixel?).
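A minimal sketch of that 1x1 approach in C, assuming the image is available as a CGImageRef (e.g. imageView.image.CGImage) and the touch location has already been converted into image pixel coordinates:

```c
#include <CoreGraphics/CoreGraphics.h>
#include <string.h>

// Sketch: read one pixel's RGBA from a CGImage by drawing the image
// into a 1x1 bitmap context, offset so the wanted pixel lands on (0,0).
// px/py are pixel coordinates with the origin at the image's top-left.
static void PixelAt(CGImageRef image, size_t px, size_t py, uint8_t rgba[4])
{
    uint8_t data[4] = {0, 0, 0, 0};
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(
        data, 1, 1, 8, 4, space,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(space);

    size_t w = CGImageGetWidth(image);
    size_t h = CGImageGetHeight(image);

    // Copy rather than blend, and shift the image so that pixel (px, py)
    // covers the context's single pixel (CG's origin is bottom-left).
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx,
                       CGRectMake(-(CGFloat)px, (CGFloat)py + 1 - (CGFloat)h,
                                  (CGFloat)w, (CGFloat)h),
                       image);
    CGContextRelease(ctx);

    memcpy(rgba, data, 4);  // R, G, B, A (alpha-premultiplied)
}
```

Note that with this bitmap format the bytes come back premultiplied by alpha; un-premultiply (divide by the alpha value) if you need the straight color.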

Related

Considering the Stencil in depth

I'm using an orthographic projection to draw my objects.
Each object's items are added to different buffers and drawn over several passes.
Let's say that each object has an outline square and a fill for the square (in a different color).
So I draw all the fills first, and then the outlines.
I'm using the depth buffer to make sure that the outlines are not drawn over all the fills, as shown in the picture.
Now I'm facing a problem: each object also contains another drawing item (such as text or points) which can extend beyond its square. So I'm using the stencil buffer to clip this additional drawing to the square. However, when doing this the depth buffer is not taken into account, meaning that one object's text can be drawn over another object's square, as shown below.
Is there any way or trick to make this work?
You should be able to set the stencil buffer to a different value for each of the squares (provided there are at most 255 squares, since you won't get more than an 8-bit stencil buffer). Configure the stencil operation to KEEP for pixels that fail the depth test, so that stencil values written by squares that are nearer but were drawn earlier are retained.
This will allow clipping each text item individually.
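A rough sketch of that setup in C, using classic desktop OpenGL calls (draw_square_fill and draw_square_text are assumed helpers standing in for the question's own draw code):

```c
#include <GL/gl.h>

void draw_square_fill(int i);   /* assumed helpers, not part of GL */
void draw_square_text(int i);

/* Sketch: one stencil ID per square, with KEEP on depth-fail so that
 * nearer squares drawn earlier keep their IDs. Assumes the stencil
 * buffer was cleared to 0 along with the depth buffer. */
void draw_squares_and_text(int square_count)
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: fills. Write ID i+1 where the fill passes the depth test. */
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  /* sfail, dpfail, dppass */
    for (int i = 0; i < square_count && i < 255; ++i) {
        glStencilFunc(GL_ALWAYS, i + 1, 0xFF);
        draw_square_fill(i);
    }

    /* Pass 2: text/points. Draw only where the stencil still holds the
     * matching square's ID, clipping each text item to its own square. */
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    for (int i = 0; i < square_count && i < 255; ++i) {
        glStencilFunc(GL_EQUAL, i + 1, 0xFF);
        draw_square_text(i);
    }
}
```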
Another way is to use only the depth buffer and pass the pixel extents of the current quad into the text pixel shader, where you can discard any pixels that fall outside them. This requires fewer state changes.

Why is resizableImageWithCapInsets fastest with a 1x1 tiled area rather than block by block?

The official documentation for UIImage's resizableImageWithCapInsets: says:
During scaling or resizing of the image, areas covered by a cap are not scaled or resized. Instead, the pixel area not covered by the cap in each direction is tiled, left-to-right and top-to-bottom, to resize the image. This technique is often used to create variable-width buttons, which retain the same rounded corners but whose center region grows or shrinks as needed. For best performance, use a tiled area that is a 1x1 pixel area in size.
I don't understand why a 1x1 pixel tiled area gives the best performance. I would have thought that tiling block by block performs better than a 1x1 area; in theory, copying block by block is faster than point by point, isn't it? Can anyone explain how this is implemented at the machine level?
@jhabbott makes a good guess in his comment on the accepted answer to the question How does UIEdgeInsetsMake work?
So, I think if the tiled area is just a 1x1 pixel, then resizableImageWithCapInsets: can simply use that pixel's color as the fill color, and it doesn't have to do any tiling at all. Essentially, it's like setting view.backgroundColor = color. Have you ever written any drawing code? Basically, I think filling an area with a single color is cheaper than tiling that area with a rectangle of pixels, since the latter takes more calculations, like where to position the next tile. I'm just guessing here, but if you try to write drawing code to fill a rect with a color versus tiling a rect of pixels onto another rect, you'll see where I'm coming from.
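To make the difference concrete, here is an illustrative sketch in C (not UIKit's actual implementation) of the two inner loops:

```c
#include <stdint.h>

/* A 1x1 tile degenerates to a solid fill: one value, no source reads,
 * no positioning math. */
void fill_rect(uint32_t *dst, int stride, int w, int h, uint32_t color)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            dst[y * stride + x] = color;
}

/* General tiling: every destination pixel needs a wrap-around lookup
 * into the source tile (or per-tile copy loops with edge clipping). */
void tile_rect(uint32_t *dst, int stride, int w, int h,
               const uint32_t *tile, int tw, int th)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            dst[y * stride + x] = tile[(y % th) * tw + (x % tw)];
}
```

The fill loop touches no source memory and can be replaced with memset-style or GPU fast paths; the tiling loop needs a per-pixel source lookup, which is why the 1x1 case can be special-cased so cheaply.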

Preserve color of transparent pixels using CGContextDrawImage?

I'm loading image data from a TIFF stored on disk into a buffer which I subsequently use to create an OpenGL texture. I'm getting at the data by writing to a CGContext. The original image is 100% white in every single pixel; the only thing that changes from one pixel to the next is the alpha value.
When I write to the CGContext, the color of the transparent pixels isn't preserved. "Why do you care about the color of transparent pixels?" you ask. When the image is scaled, the color of the transparent pixels can become visible, creating ugly dark outline artifacts.
I've tried reading the data directly from the CGImage into a buffer and using that buffer to create my texture (using CGImageGetDataProvider(image)), but this only works in cases where the color space of the CGImage is RGBA. Presumably, CGContextDrawImage handles converting from one color space to another.
Is there any way I can tell CGContextDrawImage to preserve the color of transparent pixels? Or am I going to have to load my images some other way?
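For reference, the direct-read path mentioned above looks roughly like this. It sidesteps CGContextDrawImage entirely, so nothing is recomputed for transparent pixels, but it is only usable when the image's existing pixel layout already matches what the texture upload expects (a sketch, with assumed layout checks):

```c
#include <CoreGraphics/CoreGraphics.h>

/* Sketch: copy the CGImage's backing bytes as-is. The caller owns the
 * returned CFDataRef (CFRelease when done), and must verify the layout
 * before trusting the bytes. */
static CFDataRef CopyRawImageBytes(CGImageRef image)
{
    size_t bpp = CGImageGetBitsPerPixel(image);      /* expect 32 */
    CGImageAlphaInfo alpha = CGImageGetAlphaInfo(image);
    if (bpp != 32 || alpha == kCGImageAlphaNone)
        return NULL;  /* fall back to another loading path */

    return CGDataProviderCopyData(CGImageGetDataProvider(image));
}
```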

How to overlay a picture with a given mask

I want to overlay an image onto a given image. I have created a mask with an area where I can put this picture:
(mask image: http://img560.imageshack.us/img560/1381/roih.jpg)
The problem is that the white area contains a black area where I can't put objects.
How can I efficiently calculate where the sub-image can be placed? I know about functions like PointPolygonTest, but it takes very long.
EDIT:
The overlay image must be placed somewhere in the white area,
for example at the position of the blue rectangle.
(example image: http://img513.imageshack.us/img513/5756/roi2d.jpg)
If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In that case, in order to find valid regions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After erosion, every pixel that is still white is a valid centre position.
The image you show, however, has no 200x200 region that is entirely white, so I must have misunderstood...
But if you want to find the region with the least black in the mask, you could apply a blur instead of an erosion and look for the maximal-intensity pixel in the blurred mask.
In both cases you want to insert the sub-image so that its centre is at the maximal-intensity pixel of the eroded/blurred mask.
Edit:
If you are interested in finding the region that is most distant from any black pixel, you can define the sub-image's centre as the location of the maximal value of the distance transform of the mask.
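A rough sketch of the erosion approach using OpenCV's legacy C API (the question mentions PointPolygonTest, so that generation of OpenCV is assumed; find_insert_centre is a made-up helper name):

```c
#include <opencv/cv.h>

/* Sketch: erode the binary mask with a kernel the size of the overlay;
 * pixels that survive can host the overlay's centre without touching
 * black. mask is 8-bit single-channel, sub_w/sub_h is the overlay size. */
CvPoint find_insert_centre(IplImage *mask, int sub_w, int sub_h)
{
    IplImage *eroded = cvCreateImage(cvGetSize(mask), IPL_DEPTH_8U, 1);

    IplConvKernel *kernel = cvCreateStructuringElementEx(
        sub_w, sub_h, sub_w / 2, sub_h / 2, CV_SHAPE_RECT, NULL);
    cvErode(mask, eroded, kernel, 1);

    /* Any maximal (white) pixel of the eroded mask is a valid centre;
     * check that max_val is 255, otherwise no valid region exists. */
    double min_val, max_val;
    CvPoint min_loc, max_loc;
    cvMinMaxLoc(eroded, &min_val, &max_val, &min_loc, &max_loc, NULL);

    cvReleaseStructuringElement(&kernel);
    cvReleaseImage(&eroded);
    return max_loc;
}
```

For the blur or distance-transform variants, the same cvMinMaxLoc step applies to the blurred or distance-transformed mask instead of the eroded one.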
Good luck,

What is the source rectangle in SpriteBatch.Draw in XNA?

What is the purpose of the source rectangle parameter in the SpriteBatch.Draw() method?
MSDN says: A rectangle that specifies (in texels) the source texels from a texture. Use null to draw the entire texture.
What does that mean?
The idea of the sourceRectangle is to allow you to implement what is both a performance optimisation and an artist convenience by arranging multiple sprites into a single texture. This is known as a "Texture Atlas" or a "Sprite Sheet".
(texture atlas illustration; source: andrewrussell.net)
I explain why it is a performance optimisation in this answer. Basically it lets you reduce the number of texture-swaps. (So in the case of my illustration, if you're only drawing an animated character once, using a sprite-sheet will not improve performance.)
It also lets you implement tacky 2D special effects, like having a sprite "wipe" in:
(sprite "wipe" effect illustration; source: andrewrussell.net)
A texel is more-or-less the same thing as a pixel in the texture (a "texture pixel", if you will). So, when you draw your sprite, you specify the top-left corner of your sprite within the texture, along with its width and height. (The same as if you selected it in an image editor.)
If you pass in null for your source rectangle, XNA will assume a source rectangle that covers the entire texture.
The origin you specify to Draw is also measured in texels from the upper-left corner of the source rectangle.
In a situation where you have a single texture that contains different frames (an animated sprite), you will want to specify the source rectangle so that you can draw a single frame from the texture, i.e. a sprite sheet that lays its frames out in a grid.
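XNA itself is C#, but the source-rectangle arithmetic is language-agnostic; here is a sketch in C of selecting frame i from a grid-layout sheet (frame_source_rect is a hypothetical helper):

```c
/* Sketch: compute the source rectangle, in texels, for a given frame
 * of a sprite sheet whose frames are laid out left-to-right,
 * top-to-bottom in a grid. */
typedef struct { int x, y, w, h; } Rect;

Rect frame_source_rect(int frame, int frame_w, int frame_h, int sheet_w)
{
    int per_row = sheet_w / frame_w;        /* frames per sheet row */
    Rect r;
    r.x = (frame % per_row) * frame_w;      /* column offset in texels */
    r.y = (frame / per_row) * frame_h;      /* row offset in texels */
    r.w = frame_w;
    r.h = frame_h;
    return r;                               /* pass as sourceRectangle */
}
```

Stepping `frame` forward on a timer and passing the resulting rectangle to SpriteBatch.Draw is all an animation loop needs.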
The source rectangle defines the area of the texture that will be displayed. So if you have a 40x40 texture and your rectangle is (0, 0, 20, 20), only the top-left 20x20 corner of the texture will be displayed. If you specify null for the rectangle, the entire texture is drawn.
This can be helpful when drawing from a sprite sheet (a collection of images packed into one bigger texture), and also in image-manipulation programs.
