Meaning of R, S, Da in kCGBlendMode values - iOS

I'm having trouble grasping the meaning of R = 0, R = S, R = S*Da, defined in kCGBlendMode values such as kCGBlendModeClear, kCGBlendModeCopy, kCGBlendModeSourceIn. So, to what do these symbols refer?

R = 0 means that the result color will simply be 0, i.e. cleared.
R = S means the result color is the same as the source color.
R = S*Da means the result is the source color multiplied by the alpha value of the destination.
If you take a look at the documentation and scroll down, you will see their meanings listed:
The blend mode constants introduced in OS X v10.5 represent the Porter-Duff blend modes (a little explanation of how they work). The symbols in the equations for these blend modes are:
R is the premultiplied result
S is the source color, and includes alpha
D is the destination color, and includes alpha
Ra, Sa, and Da are the alpha components of R, S, and D
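To make those equations concrete, here is a small Python sketch (with made-up pixel values) applying the three modes from the question to premultiplied RGBA colors:

# Premultiplied RGBA colors as (r, g, b, a) tuples, components in 0.0..1.0
S = (0.8, 0.2, 0.2, 0.8)   # source color (hypothetical values)
D = (0.1, 0.1, 0.5, 0.5)   # destination color (hypothetical values)
Da = D[3]                  # destination alpha

clear = (0.0, 0.0, 0.0, 0.0)           # kCGBlendModeClear:    R = 0
copy = S                               # kCGBlendModeCopy:     R = S
source_in = tuple(c * Da for c in S)   # kCGBlendModeSourceIn: R = S*Da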
Furthermore, if you take a look at Setting blend modes, you can see most of the blend modes applied and what their results may look like.


What format should I use for an OpenCV image if I need to access the underlying data?

I've made a program that creates images using OpenCL, and in the OpenCL code I have to access the underlying data of the OpenCV image and modify it directly, but I don't know how the data is arranged internally.
I'm currently using CV_8U because the representation is really simple: 0 is black, 255 is white, and everything in between is a shade of gray. But I want to add color, and I don't know what format to use.
This is how I currently modify the image: A[y*width + x] = 255;
Since your A[y*width + x] = 255; works fine, the underlying image data A must be a 1D pixel array of size width * height, where each element is a CV_8U (8-bit unsigned int).
The color values of a pixel, in the case of OpenCV, will be arranged B G R in memory. RGB order would be more common, but OpenCV likes them BGR.
Your data ought to be CV_8UC3, which is the case if you use imread or VideoCapture. If it isn't, the following information needs to be interpreted accordingly.
Your array index math needs to expand to account for the data's layout:
[(y*width + x)*3 + channel]
3 because there are 3 channels. channel is 0..2; x and y are as you expect.
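As a sketch of that layout in Python (array and dimension names here are illustrative, assuming a CV_8UC3 image flattened into one byte array):

import numpy as np

width, height = 640, 480
# Flat interleaved buffer: 3 bytes per pixel, stored B, G, R as OpenCV does
A = np.zeros(height * width * 3, dtype=np.uint8)

x, y = 10, 20
BLUE, GREEN, RED = 0, 1, 2

A[(y * width + x) * 3 + RED] = 255  # set the red component of pixel (x, y)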
As mentioned in other answers, you'd need to convert this single-channel image to a 3-channel image to have color. The 3 channels are Blue, Green, Red (BGR).
OpenCV has a method that does just this: cv2.cvtColor(). This method takes an input image (in this case, the single-channel image that you have) and a conversion code (see here for more).
So the code would be like the following:
color_image = cv2.cvtColor(source_image, cv2.COLOR_GRAY2BGR)
Then you can modify the color by accessing each of the color channels, e.g.
color_image[y, x, 0] = 255 # this changes the first channel (Blue)

How to differentiate same-color objects with different intensities?

There are 3 leather pieces of brown color. One of them is dark brown. I have to highlight that odd (dark brown) piece. The procedure I followed in my code is:
converted the given image to HSV;
checked the saturation range (S value) as well as the brightness range (V value) of the odd piece and of the two lighter pieces (a rough sketch of this follows below).
But the problem is that the odd piece's values (both the S and V ranges) overlap with the values of the two lighter pieces.
So, which color model best suits this problem?
If the illumination changes, the values change again; how should I tackle that?
What type of camera should I use?
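For reference, the procedure described above would look roughly like this in OpenCV Python; the H/S/V bounds are placeholders that would have to be measured from the actual pieces (and, per the question, may overlap):

import cv2
import numpy as np

img = cv2.imread("leather.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hypothetical H/S/V bounds for the dark brown piece
lower = np.array([5, 120, 40])
upper = np.array([25, 255, 120])

mask = cv2.inRange(hsv, lower, upper)               # pixels inside the assumed range
highlighted = cv2.bitwise_and(img, img, mask=mask)  # keep only the odd piece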

Automatic approach for removing colored object shadow on white background?

I am working on some leaf images using OpenCV (Java). The leaves are captured on white paper, and some have shadows like this one:
Of course, it's somewhat of an extreme case (there are milder shadows).
Now, I want to threshold the leaf and also remove the shadow (while preserving the leaf's details).
My current flow is this:
1) Converting to HSV and extracting the Saturation channel:
Imgproc.cvtColor(colorMat, colorMat, Imgproc.COLOR_RGB2HSV);
ArrayList<Mat> channels = new ArrayList<Mat>();
Core.split(colorMat, channels);
satImg = channels.get(1);
2) De-noising (median) and applying adaptiveThreshold:
Imgproc.medianBlur(satImg , satImg , 11);
Imgproc.adaptiveThreshold(satImg , satImg , 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 401, -10);
And the result is this:
It looks OK, but the shadow is causing some anomalies along the left boundary. Also, I have this feeling that I am not using the white background to my benefit.
Now, I have 2 questions:
1) How can I improve the result and get rid of the shadow?
2) Can I get good results without working on the saturation channel? The reason I ask is that on most of my images, working on the L channel (from HLS) gives way better results (apart from the shadow, of course).
Update: Using the Hue channel makes thresholding better, but makes the shadow situation worse:
Update2: In some cases, the assumption that the shadow is darker than the leaf doesn't always hold. So, working on intensities won't help. I'm looking more toward a color channels approach.
I don't use OpenCV; instead, I tried to use the MATLAB Image Processing Toolbox to extract the leaf. Hopefully OpenCV has all the processing functions you need. Please see my result below. I did all the operations on channel 3 and channel 1 of your original image.
First I used your channel 3 and thresholded it with 100 (left top). Then I removed the regions on the border and regions with a pixel size smaller than 100, and filled in the holes in the leaf; the result is shown in the right top.
Next I used your channel 1 and did the same thing as with channel 3; the result is shown in the left bottom. Then I found the connected regions (there are only two, as you can see in the left bottom figure) and removed the one with the smaller area (shown in the right bottom).
Suppose the right top image is I1 and the right bottom image is I; the leaf is then extracted by computing ~I & I1. The leaf is:
Hope it helps. Thanks
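For anyone wanting to stay in OpenCV, here is a rough Python sketch of the steps described above; the threshold polarity, channel mapping (MATLAB indexes channels as R=1, G=2, B=3), and size limits are assumptions that may need adjusting for a real image:

import cv2
import numpy as np

def clean_mask(mask, min_area=100):
    # Remove regions touching the border or smaller than min_area pixels
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, 8)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(1, n):
        x, y, bw, bh, area = stats[i]
        touches_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if area >= min_area and not touches_border:
            out[labels == i] = 255
    return out

def fill_holes(mask):
    # Flood-fill the background from a corner, then OR in the inverse
    ff = mask.copy()
    ffmask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
    cv2.floodFill(ff, ffmask, (0, 0), 255)
    return mask | cv2.bitwise_not(ff)

img = cv2.imread("leaf.jpg")
b, g, r = cv2.split(img)  # MATLAB's channel 3 ~ blue, channel 1 ~ red

# "Channel 3", thresholded at 100, cleaned and hole-filled -> I1
_, i1 = cv2.threshold(b, 100, 255, cv2.THRESH_BINARY)
i1 = fill_holes(clean_mask(i1))

# "Channel 1", same cleaning, then keep only the larger connected region -> I
_, i0 = cv2.threshold(r, 100, 255, cv2.THRESH_BINARY)
i0 = clean_mask(i0)
n, labels, stats, _ = cv2.connectedComponentsWithStats(i0, 8)
if n > 1:
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    i0 = np.where(labels == largest, 255, 0).astype(np.uint8)

leaf = cv2.bitwise_not(i0) & i1  # ~I & I1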
I tried two different things:
1. other thresholding on the saturation channel
2. trying to find two contours: shadow and leaf
I use C++, so your code snippets will look a little different.
Trying Otsu thresholding instead of adaptive thresholding:
cv::threshold(hsv_imgs,mask,0,255,CV_THRESH_BINARY|CV_THRESH_OTSU);
leading to the following images (just Otsu thresholding on the saturation channel):
The other thing is computing gradient information (I used Sobel; see the OpenCV documentation), thresholding that, and, after an opening operator, using findContours. That gives something like this, not usable yet (gradient contour approach):
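A minimal Python sketch of that gradient-contour idea (the answer used C++; the kernel size and the OpenCV 4 findContours return signature are assumptions):

import cv2

gray = cv2.cvtColor(cv2.imread("leaf.jpg"), cv2.COLOR_BGR2GRAY)

# Sobel gradients in x and y, combined into a magnitude image
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Otsu threshold, then an opening to clean up, then contours
_, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
edges = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)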
I'm trying to do the same thing with photos of butterflies, but with more uneven and unpredictable backgrounds, such as this. Once you've identified a good portion of the background (e.g. via thresholding, or, as we do, flood filling from random points), what works well is to use the GrabCut algorithm to get all those bits you might miss on the initial pass. In Python, assuming you still want to identify an initial area of background by thresholding on the saturation channel, try something like:
import cv2
import numpy as np
img = cv2.imread("leaf.jpg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
sat = cv2.medianBlur(sat, 11)
thresh = cv2.adaptiveThreshold(sat, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh.jpg", thresh)
h, w = img.shape[:2]
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
grabcut_mask = (thresh // 255 * 3).astype(np.uint8)  # background should be 0, probable foreground = 3; integer division keeps the mask uint8
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("GrabCut1.jpg", img*grabcut_mask[...,None])
This actually gets rid of the shadows for you in this case, because the edge of the shadow actually has high saturation levels, so is included in the grab cut deletion. (I would post images, but don't have enough reputation)
Usually, however, you can't trust shadows to be included in the background detection. In this case you probably want to compare areas in the image with the colour of the now-known background, using the chromacity distortion measure proposed by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection". This measure takes account of the fact that, for desaturated colours, hue is not a relevant measure.
Note that the PDF of the preprint you find online has a mistake (no + signs) in equation 6. You can use the version re-quoted in Rodriguez-Gomez et al. (2012), equations 1 & 2. Or you can use my Python code below:
def brightness_distortion(I, mu, sigma):
    return np.sum(I * mu / sigma**2, axis=-1) / np.sum((mu / sigma)**2, axis=-1)

def chromacity_distortion(I, mu, sigma):
    alpha = brightness_distortion(I, mu, sigma)[..., None]
    return np.sqrt(np.sum(((I - alpha * mu) / sigma)**2, axis=-1))
You can feed the known background mean and stdev as the last two parameters of the chromacity_distortion function, and the RGB pixel image as the first parameter. This should show you that the shadow has basically the same chromacity as the background, and very different chromacity from the leaf. In the code below, I've then thresholded on chromacity and done another GrabCut pass. This works to remove the shadow even if the first GrabCut pass doesn't (e.g. if you originally thresholded on hue).
mean, stdev = cv2.meanStdDev(img, mask = 255-thresh)
mean = mean.ravel() #bizarrely, meanStdDev returns an array of size [3,1], not [3], so flatten it
stdev = stdev.ravel()
chrom = chromacity_distortion(img, mean, stdev)
chrom255 = cv2.normalize(chrom, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]  # dst=None lets OpenCV allocate the output
cv2.imwrite("ChromacityDistortionFromBackground.jpg", chrom255)
thresh2 = cv2.adaptiveThreshold(chrom255, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh2.jpg", thresh2)
grabcut_mask[...] = 3
grabcut_mask[thresh==0] = 0 #where thresh == 0, definitely background, set to 0
grabcut_mask[np.logical_and(thresh == 255, thresh2 == 0)] = 2 #could try setting this to 2 or 0
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("final_leaf.jpg", grabcut_mask[...,None]*img)
I'm afraid that, with the parameters I tried, this still removes the stalk. I think that's because GrabCut considers it similar in colour to the shadows. Let me know if you find a way to keep it.

iOS White point/white balance adjustment examples/suggestions

I am trying to change the white point/white balance programmatically. This is what I want to accomplish:
- Choose a (random) pixel from the image
- Get color of that pixel
- Transform the image so that all pixels of that color will be transformed to white and all other colors shifted to match
I have accomplished the first two steps but the third step is not really working out.
At first I thought that, as per Apple's documentation, CIWhitePointAdjust should be the thing to accomplish exactly that, but although it does change the image, it is not doing what I would like/expect it to do.
Then it seemed that CIColorMatrix should be something that would help me shift the colors, but I was (and still am) at a loss as to what to input to it with those pesky vectors.
I have tried almost everything: the same RGB values on all vectors; the corresponding value (R for R, etc.) on each vector; 1 - corresponding value, 1 + corresponding value, and 1 / corresponding value; and the same variations (1 - x, 1 + x, 1 / x) with different RGB values.
I have also come across CITemperatureAndTint which, as per Apple's documentation, should also help, but I have not yet figured out how to convert from RGB to temperature and tint. I have seen algorithms and formulas for converting from RGB to temperature, but nothing regarding tint. I will continue experimenting with this a little, though.
Any help much appreciated!
After a lot of experimenting and mathematics, I finally got my app to work almost the way I want.
If anyone else finds themselves facing a similar problem, here is what I did.
I ended up using the CITemperatureAndTint filter, supplying a color in kelvins calculated from the selected pixel's RGB value, plus a user-supplied tint value.
To get to kelvins I:
- first converted RGB to XYZ using the D65 illuminant (i.e. daylight);
- then converted from XYZ to Yxy (both of these conversions were made using the algorithms found at EasyRGB);
- then calculated kelvins from Yxy using McCamy's formula, which I found in a paper here.
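A minimal Python sketch of that conversion chain, assuming an sRGB input and using the standard sRGB-to-XYZ (D65) matrix plus McCamy's approximation:

def rgb_to_cct(r, g, b):
    """Approximate correlated color temperature (in kelvins) of an sRGB color."""
    # Linearize sRGB components (0..255 -> 0..1)
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)

    # sRGB (D65) -> XYZ
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # XYZ -> chromaticity x, y (the "xy" part of Yxy)
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)

    # McCamy's approximation for CCT
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(rgb_to_cct(255, 255, 255))  # ~6505 K, close to D65 daylight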
These steps got the image in the ballpark, but not quite there, so I added a UISlider for the user to supply the tint value, ranging from -100 to 100.
By selecting a point that should be white and choosing values from the positive side of the tint scale (all the images on my phone tend to be more yellow), an image can now be converted to (more) neutral colors. Yay!
I supplied the calculated temperature and the user-chosen tint as the inputNeutral vector values, and 6500 (D65 daylight) and 0 as the inputTargetNeutral vector values to the CITemperatureAndTint filter.

Changing Colours Dynamically in AS1

I can't get this to work in my AS1 application. I am using the Color.setTransform method.
Am I correct in thinking the following object creation should result in transforming a colour to white?
var AColorTransform = {ra:100, rb:255, ga:100, gb:255, ba:100, bb:255, aa:100, ab:255};
And this one to black?
AColorTransform = {ra:100, rb:-255, ga:100, gb:-255, ba:100, bb:-255, aa:100, ab:-255};
I read on some websites that calling setRGB or setTransform may not result in actually changing the display colour when the object you're performing the operation on has some kind of dynamic behaviour. Does anyone know more about these situations? And how to change the colour under all circumstances?
Regards.
It's been a long time since I've had to do anything in AS1, but I'll do my best.
The basic code for a color.setTransform() looks like this...
var AColorTransform = {ra:100, rb:255, ga:100, gb:255, ba:100, bb:255, aa:100, ab:255};
var myColor = new Color(mc);
myColor.setTransform(AColorTransform);
...where mc is a MovieClip on the stage somewhere.
Remember that you're asking about transform, which by its nature is intended to transform colors from what they are to something else. If you want to reliably paint in a specific color (such as black or white), you're usually far better off using setRGB, which would look like this:
var myColor = new Color(mc);
//set to black
myColor.setRGB(0x000000);
//or set to white
myColor.setRGB(0xFFFFFF);
These work reliably, though there can be some gotchas. Generally, just remember that the color is attached to the specific MovieClip... so if that MovieClip falls out of scope (i.e., it disappears from the timeline), your color will be deleted with it.
Read further only if you want to understand color transform better:
Let's look at the components of that color transform.
        a (multiplier, 0 > 100%)    b (offset, -255 > 255)
r       ra                          rb
g       ga                          gb
b       ba                          bb
a       aa                          ab
There are four channels (r, g, b, and a). The first three are for red, green and blue, and the last one is for alpha (transparency). Each channel has an 'a' component and a 'b' component, thus ra, rb, ga, gb, etc.
The 'a' component is a percentage multiplier; that is, it will multiply the existing channel by the percent in that value. The 'b' component is an offset. So 'ra' multiplies the existing red channel and 'rb' offsets it.
If your red channel starts as 'FF' (full-on red), setting ra:100 will have no effect, since multiplying FF by 100% results in no change. Similarly, if red starts at '00' (no red at all), no value of 'ra' will have any effect, since (if you recall your Shakespeare) twice nothing is still nothing. Things in between will multiply as you'd expect.
Offsets are added after multiplication. So you can multiply by some value, then offset it:
r (result red color) = (RR * ra%) + rb
g (result green color) = (GG * ga%) + gb
b (result blue color) = (BB * ba%) + bb
a (result alpha) = (AA * aa%) + ab
example: RR = 128 (hex 0x80), ra = 50 (50% or .5), rb = -20
resulting red channel: (128 * .5) + (-20) = 44 (hex 0x2C)
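The same arithmetic as a tiny Python sketch (the clamp to the 0..255 channel range is implied by the color model):

def transform_channel(value, multiplier_pct, offset):
    # value: existing channel 0..255; multiplier_pct: the 'a' part; offset: the 'b' part
    result = value * (multiplier_pct / 100.0) + offset
    return max(0, min(255, int(result)))  # clamp to a valid channel value

print(transform_channel(128, 50, -20))  # 44 (0x2C), matching the example above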
Frankly, this all gets so confusing that I tend to prefer the simple sanity of avoiding transforms altogether and go with the much simpler setRGB().
