CGContextStrokePath color - ios

I use CGContextStrokePath to draw a straight line on an image with a white background; the stroke color is red and alpha is 1.0.
After drawing the line, why are the line's pixels not (255, 0, 0) but (255, 96, 96)?
Why not pure red?

Quartz (the iOS drawing layer) uses antialiasing to make things look smooth. That's why you're seeing non-pure-red pixels.
If you stroke a line of width 1.0 and you want only pure red pixels, the line needs to be horizontal or vertical and it needs to run along the center of the pixels, like this:
CGContextMoveToPoint(gc, 0, 10.5);
CGContextAddLineToPoint(gc, 50, 10.5);
CGContextStrokePath(gc);
The .5 in the y coordinates puts the line along the centers of the pixels.
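If the line can't be axis-aligned, another option is to turn antialiasing off for that stroke. A minimal sketch (gc is the same graphics context as above; the endpoints are arbitrary):
CGContextSaveGState(gc);
CGContextSetShouldAntialias(gc, false);              // no edge smoothing, so only fully red pixels are produced
CGContextSetRGBStrokeColor(gc, 1.0, 0.0, 0.0, 1.0);
CGContextMoveToPoint(gc, 0, 10);
CGContextAddLineToPoint(gc, 50, 30);                 // diagonal line, still pure red, but visibly jagged
CGContextStrokePath(gc);
CGContextRestoreGState(gc);
The trade-off is that the line will look stair-stepped, which is exactly what antialiasing is there to hide.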

Related

Trails effect, clearing a frame buffer with a transparent quad

I want to get a trails effect. I am drawing particles into a framebuffer which is never cleared (it accumulates draw calls). Fading out is done by drawing a black quad with a small alpha, for example (0.0, 0.0, 0.0, 0.1). It is a two-step process, repeated every frame:
- drawing a black quad
- drawing particles at their new positions
All works nicely and the moving particles produce long trails, EXCEPT the black quad does not clear the FBO down to a perfect zero. Faint trails remain forever (e.g. the buffer's RGBA = 4, 4, 4, 255).
I assume the problem is that the blend function multiplies the FBO's small 8-bit RGBA values (the destination color) by, for example, (1.0 - 0.1) = 0.9, and rounding prevents any further reduction: 4 * 0.9 = 3.6, which is rounded back to 4, forever.
Is my method (drawing a black quad) inherently unsuitable for trails? I cannot find a blend function that could help, since all of them multiply the DST color by some factor, which must be very small to produce long trails.
The trails are drawn with this code:
GLint drawableFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &drawableFBO);

glBindFramebuffer(GL_FRAMEBUFFER, FBO); // has an attached texture: glFramebufferTexture2D -> FBOTextureId

// 1) fade the accumulated image by drawing a translucent black quad
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, 0.0, 0.0, 0.0, 0.1);
glUniformMatrix4fv(fboClearShader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glBindVertexArray(fboClearShaderBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// 2) draw the particles additively at their new positions
glUseProgram(particlesShader);
glUniformMatrix4fv(particlesShader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glUniform1f(particlesShader.uniforms.globalAlpha, 0.9);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, particleTextureId);
glBindVertexArray(particlesBuffer);
glDrawArrays(GL_TRIANGLES, 0, 1000 * 6);

// back to the drawable buffer: composite the FBO texture onto the screen
glBindFramebuffer(GL_FRAMEBUFFER, drawableFBO);
glUseProgram(fullScreenShader);
glBindVertexArray(screenQuad);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, FBOTextureId);
glDrawArrays(GL_TRIANGLES, 0, 6);
Blending is not only defined by the blend function (glBlendFunc); it is also defined by the blend equation (glBlendEquation).
By default, the source and destination values are summed up after they are processed by the blend function.
Use a blend equation which subtracts a small constant from the destination buffer, so the destination color is slightly decreased in each frame and finally reaches 0.0. The result of the blend equation is clamped to the range [0, 1], so it cannot go below zero. For example:
dest_color = dest_color - RGB(0.01)
The blend equation which subtracts the source color from the destination color is GL_FUNC_REVERSE_SUBTRACT:
float dec = 0.01f; // should be at least 1.0/256.0
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, dec, dec, dec, 0.0);
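One caveat, assuming the draw order from the question: the blend equation is global state, so after this fade pass it has to be switched back to the default before the particles are drawn additively, e.g.
glBlendEquation(GL_FUNC_ADD);   // back to the default equation for the particle pass
glBlendFunc(GL_ONE, GL_ONE);
Otherwise the particle colors would also be subtracted from the buffer instead of added to it.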

How do coordinate points work in LinearGradient?

Can anyone please explain or describe how the coordinate points work in LinearGradient?
For example, I have my code like this:
var gradient = new LinearGradient(0, 0, 500, 500, colors, null, Shader.TileMode.Clamp);
paint.SetShader(gradient);
paint.Dither = true;
How will it be displayed when the paint is applied to a rectangle?
In Android, the coordinate system is always like the picture above:
1) (0, 0) is the top left corner.
2) (maxX, 0) is the top right corner.
3) (0, maxY) is the bottom left corner.
4) (maxX, maxY) is the bottom right corner.
maxX and maxY are the screen's (or view's) maximum width and height.
new LinearGradient(0, 0, 500, 500, colors, null, Shader.TileMode.Clamp) defines a gradient line from (0, 0) to (500, 500), as shown in the picture above. When you use a Canvas to draw the rectangle with this paint, the colors are interpolated along that line: each point gets the color at the spot where its perpendicular projection lands on the line, and with TileMode.Clamp the end colors are extended beyond the two endpoints. See the sketch below.
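This is not the Android implementation, just a minimal two-color sketch of the math a linear gradient uses: project the point onto the gradient line, clamp, and interpolate.
#include <algorithm>

struct Color { float r, g, b; };

// Color of pixel (px, py) for a gradient running from (x0, y0) to (x1, y1).
Color linearGradientAt(float px, float py,
                       float x0, float y0, float x1, float y1,
                       Color c0, Color c1)
{
    float dx = x1 - x0, dy = y1 - y0;
    // t = position of the point's projection on the gradient line: 0 at the start, 1 at the end
    float t = ((px - x0) * dx + (py - y0) * dy) / (dx * dx + dy * dy);
    t = std::clamp(t, 0.0f, 1.0f);   // TileMode.Clamp: extend the end colors past the endpoints
    return { c0.r + (c1.r - c0.r) * t,
             c0.g + (c1.g - c0.g) * t,
             c0.b + (c1.b - c0.b) * t };
}
With the (0, 0) to (500, 500) line from the question, a pixel at (250, 250) gets t = 0.5 (the middle color), and so does a pixel at (500, 0), because its projection also lands on the middle of the diagonal.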

How to improve Hough Circle Transform to detect a circle made up of scattered points

I have a very basic piece of code that uses the standard HoughCircles function in OpenCV to detect a circle. However, my problem is that my data (images) are generated by an algorithm (for the purpose of data simulation) that, for 360 degrees, plots points using the equation of a circle, with each point offset randomly by up to +-15% of r, where r is the radius of the circle and is itself randomly generated between 5 and 10 (real numbers). (A sample image is attached.)
http://imgur.com/a/iIZ1N
Now, using the HoughCircles function, I was able to detect a circle of approximately the right radius by manually playing around with the parameters (by setting up trackbars, inspired by a GitHub project of the same nature), but I want to automate the process, as I have over 1000 images to run this on. Is there a better way to do that? If anyone has any suggestions I would highly appreciate them, as I am a beginner in image processing and have a physics background rather than a CS one.
A rough sample of my code (without trackbars etc.) is below:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // load the plot as a grayscale image
    Mat img = imread("C:\\Users\\walee\\Documents\\MATLAB\\plot_2 .jpeg", IMREAD_GRAYSCALE);
    Mat cimg;
    Mat copy = img.clone();               // deep copy; "copy = img" would only share the same pixel data
    medianBlur(img, img, 5);
    GaussianBlur(img, img, Size(1, 5), 1.1, 0);
    cvtColor(img, cimg, COLOR_GRAY2BGR);  // colour version for drawing the results
    vector<Vec3f> circles;
    HoughCircles(img, circles, HOUGH_GRADIENT, 1, 10, 94, 57, 120, 250);
    for (size_t i = 0; i < circles.size(); i++)
    {
        Vec3i c = circles[i];
        circle(cimg, Point(c[0], c[1]), c[2], Scalar(0, 0, 255), 1, LINE_AA);  // detected circle in red
        circle(cimg, Point(c[0], c[1]), 2, Scalar(0, 255, 0), 1, LINE_AA);     // centre in green
    }
    imshow("detected circles", cimg);
    waitKey();
    return 0;
}
If all images have the same nature (black axes and points forming the circle), I would suggest the following (a rough sketch of these steps follows after the list):
1) remove the axes by finding the black axis pixels and replacing them with the background colour
2) invert the colours so you have a black background
3) perform a morphological closing to fill the gaps and merge the scattered points into a more solid ring
4) (optional) if the density of the points is high, apply another morphological operation, an erosion, to make the ring thinner again
5) apply the Hough circle transform
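A rough C++ sketch of those steps (steps 1 and 2 are slightly reordered, since thresholding first makes the axes easier to isolate). The file name, kernel sizes, and Hough parameters are placeholders that will need tuning:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("plot.jpeg", IMREAD_GRAYSCALE);   // placeholder file name

    // 2) threshold so dark pixels (axes + data points) become white on a black background
    Mat bin;
    threshold(img, bin, 128, 255, THRESH_BINARY_INV);

    // 1) remove the axes: long thin structures survive an opening with a long
    //    rectangular kernel, so they can be isolated and subtracted
    Mat horiz = getStructuringElement(MORPH_RECT, Size(25, 1));
    Mat vert  = getStructuringElement(MORPH_RECT, Size(1, 25));
    Mat axesH, axesV;
    morphologyEx(bin, axesH, MORPH_OPEN, horiz);
    morphologyEx(bin, axesV, MORPH_OPEN, vert);
    bin = bin - axesH - axesV;

    // 3) closing merges the scattered points into a more solid ring
    morphologyEx(bin, bin, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(9, 9)));

    // 4) optional erosion to thin the ring again
    erode(bin, bin, getStructuringElement(MORPH_ELLIPSE, Size(3, 3)));

    // 5) Hough circle transform on the cleaned image
    vector<Vec3f> circles;
    HoughCircles(bin, circles, HOUGH_GRADIENT, 1, bin.rows / 4, 100, 30, 0, 0);

    Mat vis;
    cvtColor(bin, vis, COLOR_GRAY2BGR);
    for (const Vec3f& c : circles)
        circle(vis, Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]), Scalar(0, 0, 255), 1, LINE_AA);
    imshow("cleaned + detected", vis);
    waitKey();
    return 0;
}
Once the preprocessing is fixed, the same parameters can be applied in a loop over all 1000 images; since r is known to lie between 5 and 10, minRadius and maxRadius can also be derived from the plot scale instead of being tuned by hand.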

Corona sdk linear color ramp?

Ok, I have a very, very large background image, well not an image but a rectangle colored blue:
bg2 =display.newRect(0,0,20000,20000)
bg2.y=10000
bg2:setFillColor( 0 , 0, 225)
Is it possible to make the rectangle not just one solid color, but a linear color ramp from black to blue? In other words, the color fades from black to blue vertically. I don't want to use an image, because it would be way too large.
You're looking for a gradient fill, like this:
local black = { 0, 0, 0 }
local blue  = { 0, 0, 1 }
local g = { type="gradient", color1=black, color2=blue, direction="down" }  -- "down" fades from black at the top to blue at the bottom
bg2:setFillColor(g)

Define black region in HSV color space

I want to find the boundaries of a black region.
http://i40.tinypic.com/2lbi9s9.jpg
http://i44.tinypic.com/ka4vuc.jpg
I tried different values for black, but could not find average values such that the region is thresholded correctly in both pictures.
One of the ranges I tried is
inRange(src_HSV, Scalar(0, 0, 0), Scalar(180, 150, 50), src_HSV);
Another is
inRange(src_HSV, Scalar(100, 40, 140), Scalar(140, 160, 255), src_HSV);
I tried to search the Internet for values of black, but couldn't find anything suitable for this case, which has different tones of black.
Note that in HSV, black is defined as V=0, independently of H and S (in your case, you probably need to look for small values of V and S). I would ignore the H component.
inRange(src_HSV, Scalar(0, 0, 0), Scalar(179,50, 100), src_HSV);
for black and grey shades.
Anyway, it is application-specific.
Follow this link to get good insights on HSV:
Would this be an option (in RGB):
if ((red + green + blue) <= 64) {
// black
} else {
// not black
}
If not, you could try HSL (hue, saturation, lightness) values and treat a pixel as black if lightness < 10% ...
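For completeness, a minimal OpenCV sketch of the first approach that goes from the HSV threshold to the actual region boundaries; the file name and the upper S/V bounds are placeholders to tune per image:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat bgr = imread("input.jpg");        // placeholder file name
    Mat hsv, mask;
    cvtColor(bgr, hsv, COLOR_BGR2HSV);

    // low V (and low S) means dark / black-ish; H is ignored by spanning its full range
    inRange(hsv, Scalar(0, 0, 0), Scalar(179, 50, 100), mask);

    // the contours of the mask are the boundaries of the black region(s)
    vector<vector<Point>> contours;
    findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    drawContours(bgr, contours, -1, Scalar(0, 0, 255), 2);

    imshow("black region boundaries", bgr);
    waitKey();
    return 0;
}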
